Dev Improv: Tell Us What To Build!
with Cassidy Williams
What happens when Cassidy and Jason take suggestions from chat and try to build something on the fly? Chaos, probably. Come join in on the fun!
Resources & Links
Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.
Here we go. Hello, everybody, and welcome to another episode of Learn with Jason. Today on the show, we're bringing back Cassidy Williams. Thank you for coming on.
Thank you for having me on.
I'm excited, we're going to get weird today. And I think this is going to be, I'm always really excited about this. I think my idea of a good time is, like, just unfettered chaos, right? If we put me on an alignment chart, I'd like to consider myself to be chaotic good. That's where I fall. And I feel like you're maybe a kindred spirit in that sense.
Yeah. Indeed, sounds like a good time.
And so, I had a last-minute change of plans today. I was going to have Nat Alison on today; she had something come up. I was kind of scrambling this morning, what am I going to do? And Cassidy, like a champion, was, like, I'll come on. And so, we decided rather than trying to come up with something clever to do, we would offload all of that work onto you, chat. So before we get to that, well, actually, chat, start thinking of things you would like for us to do. Come up with silly ideas. We want you to help us co-author it. We're going to do it on the spot. Do you want to give a background for everyone not familiar with you and your work?
Hello, I exist. So I work with Jason, for better or for worse. We're on the developer experience team. I typically work with React, and Next.js is what I've been doing recently, and I make a lot of memes and jokes; that's just as important as the code. Yeah, is that a good enough high level?
Perfect. What I think people should know about you: you have consistently been one of the, like, kindest and funniest people I've known on the internet. From when I first got introduced to your work, I think the first thing I got from you was one of your early kickoffs. And so, you know, I saw some of that. And I've seen the work that you've done in the community when you were over at React Training. I feel like, you know, when you go to a community, you start to notice that there are people who are always having a good time. And then, there are people who are having a good time when things are going their way. And then, there are people never having a good time. I feel like you fall firmly into that always-having-a-good-time group. Even when things are terrible, you're finding a way to make it fun. I really appreciate that about you. It's part of what makes it so great to have you on the team.
But yes, I think the chat is recommending what we try to do during this episode is exit VIM.
That's impossible. (Laughter)
OK. The way we want to play this game today, and I think it's going to be fun: we were kicking around the idea of just trying to build something. And, you know, our expertise is kind of in the React frontend kind of space. So let's maybe keep it there. But outside of that, we don't really have any rules other than it should probably be small because we've got less than 90 minutes to do it. And if it's silly, we're more likely to do it. So if you watched the previous episode with Cassidy, we built, well, Cassidy did, like, a CSS animation of me where you could grow or shrink my beard. And actually, you can try this right now. Keep it going. And if we go far enough, it will go rainbow. Oh, there it is. Look at it go. You can also make my arms flap. Oh, thank you. It's great to have you here. Make sure you spam that boop now that you've got it. At any point in time, you want to avenge Sarah Drasner, you can drown Cassidy in boops. Thank you for the sub, very much appreciated. So let's do this, let's switch over to pair programming view.
Also, I want to make it clear, I'm not just, like, day drinking; this is chai tea. I realized what it looked like as soon as I saw it on the stream. This is not going to look good.
I go the other way.
This looks like a coffee cup full of bourbon.
Oh, yum. Yeah, this is a chai latte, and these are the only cups I have because my cups are still packed and I have not unpacked yet after moving in.
That is funny. And I would have, like, never picked up on that. This is not the type of observation that I do. I always get crap because I'll walk into a room and Marissa will have completely rearranged it. And I'll walk through it and she's like, what do you think? Do you even live in this house?
Yeah, that sounds about right with my husband Joe. He kind of, I do things, and sometimes, I'm like, I wonder how long he'll take to notice that the entire kitchen is completely set up differently right now.
What's the average?
Usually, I have to bring it up. There's not a choice there.
The limit does not exist.
Oh, so another thing that I saw, I just looked across my desk and saw this. Look at this cool thing that Cassidy made for me. Can you read it? This is a wooden Tweet that I, like, mentioned. There's a site called Laser Tweets, let me see if I can find this, actually.
I think it's laserTweets.co.
Laser Tweets, oh, I spelled it wrong. LaserTweets.co, and I believe this is run by Joshua Pickford, maybe. You can get custom Tweets, and this is really fun and we can do fun things with it. And Cassidy just dropped one into my mailbox like a boss. And it's a Tweet that I thought was funny. And I'm taking that as a retweet. So I just wanted to bully you on stream into saying you think I'm funny.
You're very funny.
And I mean that.
Thank you for the bits. Blink twice if you're in crisis. (Laughter) All right. Chat. Oh, we've got a hype train going. So several things we can do right now. One, if you want to cheer, sub, any of those things, you will help drive the hype train. Apparently that's a fun thing to do. I think we've gotten it up to level 3, that's our record right now. You can keep rolling that way. We can start pitching ideas for the app that we want to build. Pitch Tweets, too. We're always looking for new material. So here's the idea for today: something small or silly that you want to see us build. Somebody recommended Freebird; we're not going to do Freebird. That's always the first reply when you ask for requests. Freebird. An app that creates TikTok skits: I don't think we have enough time to write the conditional statements to do machine learning today.
So many ifs.
A Tamagotchi would be fun.
That is an awesome idea, that is probably not silly enough for today.
Yeah, I love the idea some day.
Yes, thank you for the bits. A flappy Jason Tamagotchi. Oh, my god. OK. A Chrome extension that places a seal of approval on every page ending in netlify.app. Sound equalizers are cool. Aggregating wish lists from other sites, tracking COVID cases. A meme generator is always kind of fun. What's the jQuery snow plugin?
I've seen that twice now.
jQuery snow plugin. Is it just snow on your page? Yeah, it is. OK. It already kind of exists if you see the boops on this channel.
With the marquee tag if it's vertical. This is, if you want to take a look at how the boops work on the screen, this is how that works. A poopymon tracker.
I assume it's Pokemon and poop, but we need to ask more questions.
A mechanical keyboard sound for when you're not using a mechanical keyboard but you want the clickies.
Yeah. You could have it, yeah, it's like a soundboard, but you type, you pick the switch, and as you type just on your key down or whatever. That could be funny. A build plugin to add a page visitor counter. Back to the '90s web, I'm kind of into that.
I wasn't sure how, because there's sound, there's smell, there's length, there's too many
Too many options there.
Yeah, that seems like that one's.
Perhaps another time.
Makes you want to say, it's going to get messy.
That one's going to itch later. (Laughter)
Oh, my god.
Makes every person who sees Google AdWords automatically like our Facebook page. What?
I feel like that's illegal.
That is super illegal. A canvas-based game. That could be fun. What do you got, chat? What you got? Fart book is a good one.
Poopymon tracker and a Tamagotchi to raise a poop emoji. That's a good one.
Poop jokes follow me wherever I go. Probably because I make them.
A React metronome that doesn't keep time correctly.
I was thinking it might be interesting to do a soft block Chrome extension, where it, like, blocks a person and then unblocks them so they stop following you.
So I do that manually a lot, it might be nice to have a button.
OK. I don't know anything about Chrome extensions, but I'm down to do it if you are.
I mean, I have made one. And so, I can look at my past and see how I made them. That's how development works, but that's the most I've done.
So soft block would be very useful. It might be more than we're able to do in
I think it might. And does the Twitter API allow you to block people? Because otherwise, that would involve us like scraping a web page.
Which is fine, it could be fun. But we might want to check.
It's a little too practical. I tend to agree. Little too practical.
You know, let's not. We're supposed to be fun. There are a few fun ideas. There's some poop emoji stuff we could do. One thing that could be fun: we could allow someone to upload a photo and replace their face with a poop emoji.
Oh, that's kind of fun. There are nice open-source face detection JS things.
There's also, like, so we could use something like face-api and look at all of the images on a web page and, like, emoji the faces. Call it Poopify. That could be really funny. If we were going to do that, let's look at face-api and see if that will work on any image. face-api.js.
So this is the library. And I think it might just work on any face. Live demos. Let's see.
Some of it's kind of scary. The fact it traces the nose and stuff.
Use my camera. So this is the tiny, so here is here we go. OK. It's got live face detection going, which is pretty cool. And then, we could if we wanted to, yeah, we could basically use the box and replace anything on the page it recognizes with a face with any image we want, which could be a fun way to wreak havoc. This could be laying the ground work for pretty excellent trolls.
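The "use the box" idea can be sketched as a small helper: given a face-api-style detection box ({ x, y, width, height } in page pixels), produce absolute-positioning styles for an emoji overlay. The helper name and the style-object shape are hypothetical, not from the stream:

```javascript
// Hypothetical helper: turn a face-api-style detection box
// ({ x, y, width, height } in page pixels) into absolute-positioning
// styles for an emoji overlay element.
function boxToOverlayStyle(box) {
  return {
    position: 'absolute',
    left: `${box.x}px`,
    top: `${box.y}px`,
    width: `${box.width}px`,
    height: `${box.height}px`,
    // Scale the emoji glyph to roughly fill the detected face box.
    fontSize: `${Math.min(box.width, box.height)}px`,
    lineHeight: `${box.height}px`,
    textAlign: 'center',
  };
}

// Example: a detection box at (40, 60), 100px square.
const style = boxToOverlayStyle({ x: 40, y: 60, width: 100, height: 100 });
// style.left is '40px', style.fontSize is '100px'
```

You'd apply these styles to an absolutely positioned span containing the emoji, stacked over the image or video.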
I do like, oh, wait, didn't you have a Learn with Jason with Gant adding masks?
Yeah. Mm hmm.
Something for Halloween or something like that.
Yes, so what we did was, at least I think this was the one, we did Halloween masks where we took a bunch of photos where we run it through face API, and looks like it missed one for some reason. Maybe it timed out. No, doesn't work for some reason. Maybe because he's making a face. It puts a mask on the person. And this one was interesting because we did a bunch of math to figure out the angle of the eyes and the tilt of the face and stuff so the masks always ended up over the eyes. Which was pretty fun. We could do something similar to this and just put, you know, whatever we want on there.
I like that.
OK. So I think the way we would do this, then, is we've got this that we can use for a reference. I'll drop this in the chat if anyone wants to look at it. When the mouth moves, emojis pour out.
I like that a lot. Doesn't the, I think the Snapchat API allows you to do something with that.
What? OK, Snapchat API.
I like that even more.
I know there's Snap Camera that allows you to do stuff. But they have an actual app thing you can use. I have to request access.
Hmm, I think there's something you can do without requesting access.
I'm also looking it up, as well.
Oh, there's an AR studio.
Yeah. Hey, even if we don't do this, this is pretty neat. And I would like to play with it.
White Coat Captioning is doing the captioning, I think. There it is. Yeah. So White Coat Captioning is made possible by sponsorships from Netlify and Sanity; thank you for making this show accessible to more people. So, yeah, thank you, so, so much. If you want to see the captions, they're at lwj.dev/live. We have made the captioner type poop so many times today, I'm so sorry. (Laughter) OK.
All right. So this is probably more than we can manage. But I do think that we can do a lot with the Face API. I also think that it looks like we could do something like Face Expression. I'm neutral, what if I open my mouth? So we could make the Emoji show up when you do surprise. Let me keep my eyes dead. Yeah.
As long as your mouth is open, it thinks
Now, that was, like, the human equivalent of the Pikachu face.
As long as I'm face on, also, watch, you see how it thinks I'm angry? I do have resting murder face. I'm not even doing anything. It's just like, you're angry.
I saw happy for half a second. I was happy once. All right.
Yeah, we could do something we're just like, man, it would be fun to combine this with speech or something so that way, let's say someone says a bad word, you can make poop cover their mouths whenever they say it.
That would be so fun.
I wonder, we might not be able to do the actual word recognition, but I bet we could do something where we use this and audio so that whenever somebody makes noise, like the mic is hot, we can trigger the emoji then. Check this out. If we're smiling and talking, it shoots out rainbows, but if we're angry and talking, it
Actually, I can't seem to make this thing think I'm angry. Hold on.
What if you take your glasses off. Does that help?
No, it just thinks I'm neutral.
I'm not capable of anger.
Oh, that is a very scary face, though. How does it not see that?
I like this constant feedback.
We just need to have that running in meetings so we can be, like, OK, keep it neutral.
That would be really funny, like an alarm filter that would let you know when you're not making a happy face.
What if we did, like, some kind of mansplaining bot where it would just occasionally be like, you should smile more. (Laughter)
That would be so great.
That would be so great. OK. All right.
And then, we could have it, like, criticize us, too, it would be perfect.
Oh, no. It'll just, you should smile more, and then, it can tell me something, like, you know, you'd be a whole lot prettier if you didn't wear glasses.
Oh, perfect. Yes.
We could probably start just by pulling down this example. So if I go and look at the code, which I believe is up here somewhere, yes? Then we can look at the examples and let's see what's in this one, how much is going on: webcam face expression recognition. OK. So it pulls in a bunch of stuff. And then, what happens?
What is this?
So here's our video. And there's a canvas overlay. And then, in the canvas overlay, here's the hard part. It chooses all of the things on the fly. That's OK. We can make this work. And then, so these are controls. Where's the script?
Oh, there it is.
Check if the face detection model is loaded.
That result, is that where the square is drawn? Right there. 155?
Yes. OK. So we draw, OK. Draw facial expressions. So on document.ready, we were talking about jQuery earlier, we set up the nav bar and the controls, and then we run it, which changes the face detector and loads the model. This is just a native browser API to get the video.
It seems like a relatively simple API.
Some of the things are, it does a lot of the heavy lifting for us.
Yeah. So do you want to do this in a CodePen?
Sure. Yeah. And then, we can collab on it.
Let's do a new CodePen, I'm going to pull this thing up.
I was going to say, it says connection lost, I think you lost your mouse.
Plug it in, I guess.
The cable? Hey, look at that.
That's the magic of this trackpad, you don't have to charge it, you can flip it upside down like a trackpad.
That's so courageous.
Whatever you want to call this and I'm going to save it. And let me pull this off screen really quick, I'm going to get the collab view. Because, look chat, I like you a lot, but also, you cause problems when you're allowed to touch the code.
Send it to me secretly.
All right. So I think now, if I take this one and make it full screen, then I should be able to pull it over here. And here we go. All right. So, I think the thing that we're going to need the most here is a really simple video, no, video setup. We have a video and canvas. Let's set that up. And then, inside, we can do a video, and it was autoplay, muted. I'll have to figure out how to make that do stuff. And then we've got a canvas with an ID of overlay.
Love it when the magic happens.
That's all this needed. This one had an onloadedmetadata; we'll have to figure out if we need that. Do you know what playsinline is?
I have no idea. Wow. I've never seen that before. Sometime, I see things and I'm like, how have I done web development for many years? Or have I just been sleeping?
Within the element's playback area. Oh, like instead of full screen.
Don't know that we really need that. Actually, I guess we probably would. If we full screen this, it's going to break the
Yeah, it could move the canvas or not show it at all.
And then, we need the video to be query selector, video. And then, I think maybe the first thing we want to do here is get the web cam video pulled back in, right?
Yes. Which I feel like we're going to end up copying and pasting a lot of this example.
Yeah, and I think that's probably OK. We can, let's see, the video element, so we get a stream by using navigator. OK. Let's create a stream. And we're going to have to put this inside of a function so we can await it. So the stream will be await navigator.mediaDevices.getUserMedia. Was it just video as an empty object? Good. And then, I believe we set the source object of the video to be the stream.
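The webcam setup being described, as a minimal sketch; this is browser-only, and startVideo is a hypothetical name:

```javascript
// Minimal webcam setup sketch (browser-only): grab a camera stream and
// pipe it into a <video> element.
async function startVideo(videoEl) {
  // An empty constraints object for video is enough to ask for a camera.
  const stream = await navigator.mediaDevices.getUserMedia({ video: {} });
  videoEl.srcObject = stream;
}
```

In the pen this would be called once with document.querySelector('video'); the autoplay and muted attributes on the element let the stream start playing without user interaction.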
Mm hmm. That seems reasonable enough.
We could probably import the APIs and stuff at the top, too.
Up here? Or did you want
Like, we could do it in settings and either paste it in the head of the pen, which we could do, or we could do it installing. Yeah, that thingy.
All right. So let's grab the face-api. Is it just up here at the top? It is not. So if I want to use it, is there just a CDN?
I don't see one.
We've got to clone it. Command-F, CDN, no.
We could probably use, like, unpkg or something.
I'm looking at the documentation page in case there's something not on the GitHub.
Oh, thank you. Oh, thank you, Kurt, for gifting the subs. Five subs. Welcome another fathead. And Xenof. Thank you all very much. Make sure you spam the hell out of that boop. Boop boop. OK. If we look at the face-api repo, maybe I can get away with dist/face-api.js.
Yes, if you go into the documentation page, it says to include the latest face-api.js from dist.
If I hit that on unpkg, looks like I've got it. This looks like a browser build, like an IIFE. I think I can use it. It is big. I guess we'll just hope for the best here. And the part we're going to have to figure out is where the tiny face detector is. Is it in here?
I just got the browser notification it wants to use my camera here.
So we've got the Face API, where does the tiny face detector come from? Is it built in here?
I think it might be built in.
Tiny Face Detector. This is going to be a monster of an include. But I guess that's not really a huge deal. Let's see if it works. So then, the next thing we need to do is go look at, we've got the face-api, we don't care about the detection controls, and I don't know if we care about the CommonJS. So we come down here, we can look for where tiny face detector gets
There's face detection control.js, too.
So the controls are this stuff down here, I think, and I don't think we care about those.
That is not sanitary, no.
You can use any emoji, but the poop one is just so easy.
So then, we need to load the Face Expression Model, which is located at root, which seems confusing. Oh, boy. All right.
Are we in over our head?
We might be. Let's take a look at this one to see how we got this stuff in because that might help us. Index.
Where does Tiny Face Detector come from?
This actually pulls it in as a module. OK. So we need to load those. I think what I'm going to do to figure out where it's coming from is, let's make a directory and get into it so I can npm install face-api.js, because if it's packing up the face model for us, we might need to import it the way that
So let's take a look here. Here's the dist folder.
Finian says it looks like it's in the face-api folder.
As is not defined. Where is as?
I wonder if it's being imported with
Do you have your processor set for Babel?
I don't have it set for anything. Let's set it for that.
That might fix it, says someone who implemented the Babel preprocessor.
Let's console.log faceapi.
That's undefined. Yeah.
Undefined. That's undefined.
The errors seem to have gone away.
That must have been something else.
Oh, it's supposed to be lower case, someone says.
All lower case.
I did it.
This was too large for our console. That's fine. At least we're getting an object. So then, we would need to get that tiny face detector.
Oh, dang, yeah. We got a ton of stuff in the log. If you go to the browser console instead of the CodePen one: we've got face match, face recognition, labeled box, face expression, tons of stuff. Yay.
OK. Wait. Hello? Not logging for me.
What? How is that possible?
Maybe I need to refresh the page. It should just say object and you can expand it, supposedly.
Yeah, I think the log might be delayed somehow. So let's pull this up, and here's face detection. Tiny face detector already exists. So I think, then, we should be able to just use it. Load the face detection model, and we'll just pass in faceapi.TinyFaceDetector. Why not? Let's try it.
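A sketch of where this is heading, using face-api.js's documented model-loading calls; MODEL_URL is a placeholder for wherever the model weights are hosted, and the function names are hypothetical:

```javascript
// Sketch: load the two models this demo needs, then run one detection.
// MODEL_URL is a placeholder; the weights ship in the face-api.js repo.
// faceapi is the global from the face-api.js script include.
const MODEL_URL = '/models';

async function loadModels() {
  await faceapi.nets.tinyFaceDetector.loadFromUri(MODEL_URL);
  await faceapi.nets.faceExpressionNet.loadFromUri(MODEL_URL);
}

async function detectOnce(videoEl) {
  // The tiny face detector options can be tuned (inputSize,
  // scoreThreshold); the defaults are fine for a demo.
  return faceapi
    .detectSingleFace(videoEl, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();
}
```

Calling detectOnce before loadModels resolves is the classic failure mode here: detection silently returns nothing until the models are in.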
OK. So let's get nope, where do I want to go here?
Finian said create Tiny Fat Face Detector.
OK. Face expressions. I want to see. Yeah, we've got to do all that stuff later. But first, we just want to get the Face Detector.
Oh, there we go. At some point, I'll get the sound turn on, but for now, some are closer some are further away.
OK. I'm a little concerned where these models come from.
I'm looking at the documentation.
There's a whole bunch of stuff loaded that's not. I think it might be in this commons js, we might need the examples browser, public. Under options. No. Where the heck did that come from?
I don't know if this can be done in Code Pen.
I can set up a Live Share.
It looks like you have to load a model first. And then, once you've loaded the model, then you can do things. Unless that's just for learning and not necessarily for detection. One second. I might have just lied to you. Detect single face. That might be it, I think.
I didn't do much. I just made a Face API variable and then, I have this comment there.
That's straight from the docs. And it said, detect the face with the highest confidence score in an image using detectSingleFace.
OK. And maybe we can log that out and see what happens?
Yeah. Might have to pull up the console to see that. Anything? Bueller?
Oh, nothing on oh, wait, I am getting a thing.
Oh, yours is
I did get a thing and it went away. I don't know why.
I wonder if we need to run it inside the video? Oh, input is undefined, as well.
Oh, that'd do it.
So I think for the input, it looked like we had somewhere in here there was an input for face detection, and the input came out of the video, I think.
I think you're right.
So we've got video on it, we matched and mentions of the video element.
Yeah, input is pulling from the canvas. So, like, input, element ID canvas or whatever our ID is. Overlay. Come on, input equals document.
In the video element with the options. OK. OK. This is starting to click. Things are starting to fall into place.
So I feel like my brain is like, a ha, you're understanding things.
OK. So the first thing that we need is we've got to do these detector options.
Looks like, yeah, keep going.
I think I'm learning this as you are learning this. But in different directions.
Yeah, I think you're right. So we've got the on play. So the on play needs to fire, because the video needs to be running before we do anything. And so, the way that would work is that we need to have our video element, which we've already got on a global scope. And then, I mean, I think we can hit it with a hammer here. So we need to get our face detector options. Face detector options. And then, once we have our face detector options, we need to actually detect the single face, which you've written down here. Input would be the video element and our options would be an object of some sort, maybe defaults will just work. And then, with the result, then we would be able to set the canvas. Oh, you've already done that. So the output.
Someone said, are we building a poop detection system? Not yet.
Let's match the dimensions of the canvas to the video element. And I don't know what the true is for, but I'm going to copy/paste blindly. Make this a little easier with the parts here.
And so I wrote this draw rectangle function purely based on the documentation.
For drawing a rectangle around the face.
OK. Perfect. So then, what the output, so I think we might need to pass the output in the function as a parameter.
I think so.
Although, it's just using the Face API. It might not even need to be its own function. Might just be a thing that happens.
faceapi.draw. So once we get the dimensions, it just says faceapi.draw.drawDetections, and then it wants the canvas element and the resized results. Basically, we give it the dimensions of our canvas and what size to draw at.
OK. So here's our output. And face API resize results. Of result dimensions. Theoretically speaking, if we, now this draw rectangle.
There's multiple ways to draw. And this one was the box with text. But looks like there's also just a draw box function that takes in detection.box.
OK. And so would we want to, where are we getting the X and Y and stuff here? Do you know?
I think that's coming from the canvas size and then the X and Y would just be zero probably. Just going to start with zero for now.
OK. So then I could once we get our result, I would just be able to draw a rectangle?
I think so.
And then the width and height, we'd have to pull from the canvas. Which, I think you can just do
The output is, yeah, we have the canvas and the output variable. We're drawing a detection using boxes with text. And I think the part that, yeah, I think you can do canvas.or
Canvas.width output.height. Yeah, you can. I've Googled it.
I have done the Googling. OK. Output.width output.height. And that puts in some text yeah, we can do that one and see what happens. Behold! My bucket! I forgot that one existed.
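Putting the drawing pieces together as a sketch, using the faceapi.matchDimensions, resizeResults, and draw.drawDetections calls from the docs; drawDetection is a hypothetical wrapper name:

```javascript
// Sketch: size the canvas overlay to the video, then let face-api
// draw the detection box for us.
function drawDetection(canvas, videoEl, detection) {
  const displaySize = { width: videoEl.width, height: videoEl.height };
  // Make the canvas the same size as the video it overlays.
  faceapi.matchDimensions(canvas, displaySize);
  // Scale the detection coordinates to that size, then draw the box.
  const resized = faceapi.resizeResults(detection, displaySize);
  faceapi.draw.drawDetections(canvas, resized);
}
```

This replaces the hand-rolled rectangle math: face-api already knows how to draw its own boxes once the canvas and results agree on dimensions.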
I was like, are you yelling at me? Got it.
Something is broken. I think it might be that we're trying to console log something that's too big.
So let's do that. Why isn't it showing the
Oh, I do see the Corgis, that's cute.
When we init the video.
Oh, that'd do it.
And then the other thing we need was onloadedmetadata, I think. Is that all lower case? One of these is the video. onloadedmetadata. I was in the neighborhood.
Yeah. And it's all lower case.
onloadedmetadata equals onPlay.
This. I haven't used this keyword in a very long time.
This is definitely not firing well.
What is result online 16? How is that coming along?
That should be detection. So that would be part of it.
Are you doing slam poetry right now?
Yep, that was it. (Laughter) And we are currently, we're making errors, but the goal is to make face detection work, which I feel like we had working for a minute, and it stopped working on us and I'm not sure what changed. And I'm getting errors with no actual errors. We wanted to get face detection working to put poop emojis on people's faces, or when their mouths are open, things fall out. At this point, I'm going to be thrilled to get this thing to run before we run out of time.
Our standards have taken a dive, but that's really all we want.
OK, let's see what I can get to work. I want. I'm going to start commenting things out.
And let's see if we can just get it to compile. Let's get that video back live again. That's working. So where does this break down? Let's have the onPlay just console.log. Whoops.
For sure. Let's maybe go do that. Let's go take a look. And this one, I'm still focusing.
It appears to make an Avatar based on your face. So it's neutral right there? Yay.
OK. This is cool. All right. So this one gets the expression models from here. So we could theoretically load the models from just hot link, which is probably not what we want to do. But maybe in the interest of getting this thing to function.
Temporary. Yeah. There's so much that doesn't appear to be happening here.
We need to load a bunch of stuff that we're not doing. 100% hot linking this.
Forgive me, GitHub. We've got our models here. And then, we need to load a bunch of stuff. So we're going to load, all right, let's grab some of these and decide which ones we actually need here. I'm going to put these here, and then let's see what we actually need. I think we only need basic expression. We don't need the age or gender. So I think that's all we would require. And then, once we have those, we would be able to run this function because we would actually do the init. Where did it go? The video? So once we init the video, that's all working, it appears. No errors.
I want someone to make the version of the Hot Pockets jingle for linking. Hot Linking or something like that. (Laughter)
OK. So we've got starting the video, we don't care so much about that.
So we get the tiny face detector options, which I think we're going to need.
So there's a whole bunch of stuff happening here that we didn't do. So we want to or the detection.
So for our options, we need to get the tiny face detector options. And then, for our detection, we need to await faceapi, and we're going to detectSingleFace, which you've already found the code for. And then, we need to add this withFaceExpressions, which is the part that I think we were missing.
Right. So then we've done that. If there's a detection, this one gets zero but we would only get the one. Then all of these
I wonder if detections is always an array or if it's an object if there's only one of them.
Great question. And what I'm going to try, to see if we can figure something out: I'm going to see what happens if I console.log detection.expressions, which, if I'm correct, oh, wait. Here we go. So we've got, yeah. Now, does this happen, I make so many faces. OK, but it comes back as neutral. And that's fine. So then, we would need some kind of an interval to detect this every once in a while. So, I guess, we could set that up to be a requestAnimationFrame or something. OK. But so, then, now that we know that's working, we would want, how does, so you just had this on a straight interval. setInterval, every 800 milliseconds. So we could do that. The part I'm wondering about is
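Picking the strongest expression out of the expressions object they're logging can be done with a small helper; dominantExpression is a hypothetical name, and the scores are made-up example values:

```javascript
// Pick the strongest expression from a face-api expressions object,
// e.g. { neutral: 0.9, happy: 0.08, angry: 0.02, ... }.
function dominantExpression(expressions) {
  // Reduce over [name, score] pairs, keeping the highest score.
  return Object.entries(expressions).reduce((best, entry) =>
    entry[1] > best[1] ? entry : best
  )[0];
}

const mood = dominantExpression({ neutral: 0.9, happy: 0.08, angry: 0.02 });
// mood is 'neutral' here
```

Once you have the dominant expression per tick, reacting to it (rainbows on happy, poop on angry) is just a switch on the returned string.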
I wonder why he chose that number. I'm fascinated by other people's code.
I like that it's reacting to me enjoying this code. This is fun. I also just really love this idea because the idea of making a website react to your mood when you come to the website, that's a fun idea.
That is fun. I once was talking to some people, it was like some panel and people were talking about how you get user feedback. And what's the best way to get user feedback. And someone was like, wouldn't it be great to figure out how a user's feeling when you go on? And there was one guy that was the CEO of an error reporting software and he was, like, yeah, we tried to ask their feelings when they came to our dashboard, except when you go to an error reporting dashboard, chances are you're mad. All of the survey results were
Mad, mad, mad.
Why do you think I'm on the dashboard right now? How do you think my experience is because they were not there for fun.
That's too bad.
It was amazing.
Results, we did init video twice.
Did we init video twice? Oh, we did. OK. Let me get rid of that.
I just want to screen cap as many of your different facial expressions as I can.
OK. Let's see. I'm going to save, let's refresh. Face detection.
Why aren't you detecting? What are we missing? Must not like something. So we've got our output is the canvas. And detection, detection, detection.
I'm enjoying the chat right now. Face detection code. Notice API. (Laughter)
Yeah, that's how I'm feeling right now. OK. So, this runs and when it runs, user media, the user media calls, so this just runs on a straight loop. It calls a setTimeout. But I'm wondering how face API.
Also, one of the expressions it has trained is fearful. I'd like to see your attempt at fearful at some point.
Let's go try it. (Laughter)
Nothing, I got nothing. No, I'm just neutral. Like, no expression in my face. I will never
You're providing me with so many opportunities for screen grabs, and I can't wait to go through this with you.
Really looking forward to it. So what am I missing here? We've got the regular, like, loop, there's a set time out. We can request animation frame this, I think.
Let's just do that. Request animation frame on play. So we're looping it.
That should run every frame forever. So now we're getting results. What we're not getting, I don't think we styled this up yet. It's going to be position: relative, and then we can make it width, like, 400 pixels, height 300 pixels, and border solid red, so we can see what's happening. And our video, we can make position: absolute, top zero, left zero. Height, 100%. Width, 100%.
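The layout they're dictating, a relatively-positioned wrapper with the video and canvas absolutely positioned on top of each other, looks roughly like this. The class names are ours; the red border is the debugging aid mentioned on stream, and the box-sizing fix is the one they apply a moment later:

```css
/* Hypothetical wrapper around the <video> and <canvas> */
.wrapper {
  position: relative;
  width: 400px;
  height: 300px;
  border: 1px solid red; /* just so we can see what's happening */
  box-sizing: border-box;
}

.wrapper video,
.wrapper canvas {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}
```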
Ten. There it is.
OK. Then we want our canvas. We can probably use both.
Yeah. We should be getting something. But I'm not
I want to see. OK, got it. Just wanted to confirm that.
This canvas is gone. The canvas is like inside the video somehow.
Let's see, I'm curious. Oh.
Nope, no errors. Oh, things are happening.
All right. So now, we know something is going on. And the thing that we know is going on is, so I'm going to
It's a thing.
A couple quick things here. I'm going to do a box-sizing: border-box, which should clear up that out-of-alignment-ness, I think.
Yeah, it should.
Let me save my changes here.
We've detected a face. What we're not doing right now is showing the expression. And I believe that's what you're showing here with this rectangle function.
This was just drawing rectangles, which we don't need anymore because you have rectangles.
OK. We have the face detection.
If I want to show the
You can do with face expressions, looks like.
Where do we have to set that? Do you see?
Where did it go? Where did it go? Just had it. withFaceExpressions, under detectAllFaces. Well, we did that.
Well, we did that, you big silly. OK. Let's see.
How do we get the text of the face expressions? Oh, there's an expressions value that is returned; detection.expressions should give us the expression.
Show me potato salad.
There's an asSortedArray, looks like; we could just pop the first one.
Yeah, are they in order? If they are, you can just do the number that is the highest.
I think that's what this asSortedArray is. I think that's what it was here. Let's give this a shot. See what we get. Yep, so we get all of our objects.
Yeah. You're looking fearful.
And then, we could just find, we could honestly just take the first one.
Nope. Nope. Oh, boy. Zero.
Expression. And that should give us the actual expression.
And we don't even need the probability. We can just get
Right. Whatever the most likely expression is is the one that we pull.
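Pulling the most likely expression, as described above, can be done with face-api.js's asSortedArray() or, equivalently, with a plain sort over the name-to-probability scores. This helper and its sample numbers are ours:

```javascript
// Given an expressions object like { neutral: 0.9, happy: 0.05, ... },
// return the name of the highest-probability expression.
function topExpression(expressions) {
  return Object.entries(expressions)
    .sort(([, a], [, b]) => b - a)[0][0];
}

console.log(topExpression({ neutral: 0.1, happy: 0.85, angry: 0.05 })); // "happy"
```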
Oh, my goodness.
Finally cracked angry. You've got to do the lips. Behold, my bucket! I've got to turn down the volume on that. Now, we're getting the expression, but we aren't getting the, I guess we don't really need the overlay, right? So the other thing that we get, though, is, let's take a look at what these dimensions are. Because I'm curious, this might give us everything we need to position images.
Right. Because we could do a little bit of CSS even. Never mind.
So that means that we would need the, let's see if we can do this without exploding Code Pen's console. So let's do Object.keys(detection). And that should start giving us our options here. What do you got? Anything? Anything? There we go. Only detection and expressions. OK. Let's look at detection.detection. That's probably why they call this result. That would make sense.
Yeah, that's why.
Image dimensions. Box. So let's look at box and see what's in there.
OK. Do that, as well. I think that's all of them. You see any chat? Did we miss any? Does it do better without glasses? We tried it without glasses and I think it's more that my, you know, my dead eyes don't
It's just his face.
What my face looks like. OK. So box is too big. What's inside the box? X, Y, width, height. We can deal with that. Let's log box.y. We don't want the keys, though. As I move further away, sorry, this is the Y, as I get higher, and as I get lower. So this is giving us the actual position of the face. So we get our X and our Y, and we'll get a width and height. Also, why doesn't that one, that should not be too big for Code Pen to log.
We could have it replace your face with a different emoji, so when you're mad, it'll show the mad emoji and if you're happy, it'll show the happy emoji.
Yeah, yep. So we get X, Y, width and height, so I can, let's do face location, I guess. And X equals result.detection.box.x, and then, we can duplicate that four times and we'll get a Y. It was a width and a height. And now, if we want to hit this thing with a hammer, we could actually just add something. We could do this a few ways, right? We could draw in the canvas. We could absolutely position something inside of the video wrapper.
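Pulling the coordinates out of the detection box, as described here, might look like this. The faceLocation name follows the stream; the result shape follows what they found in face-api.js's output:

```javascript
// result.detection.box carries the face's position in the video frame.
function faceLocation(result) {
  const { x, y, width, height } = result.detection.box;
  return { x, y, width, height };
}
```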
Mm hmm. Would you, how confident do you feel in your
So I'm more confident in the div thing. But I don't know what is more performant. What is something that Sarah Drasner won't yell at us for?
She'll yell at us for everything we did today. Canvas would be more performant, but we've got eight minutes. I know enough canvas to be dangerous, but not enough to be, I think, effective.
Let's do the freakin thing with the emotion.
So let's do this. We'll do I hate this
And then, we could just do some manual crap.
Let's do some basic. The ones I seem to be able to pull off are neutral, angry and
I have pasted the emojis.
Yes, champion. Excellent. So let's just pull these right in as
Get fearful just in case.
You really just want me to make those faces?
I do, yes, indeed. I want you to do them all in quick iteration. So that's our emotions object. And I think what we have to do you already put the emotion thing?
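The emotions object they're assembling maps face-api.js expression names to emoji. The exact emoji chosen on stream aren't shown in the transcript, so these are stand-ins, but the keys are face-api.js's real expression names:

```javascript
// Map face-api.js expression names to emoji.
const emotions = {
  neutral: "😐",
  happy: "😄",
  sad: "😢",
  angry: "😠",
  fearful: "😨",
  disgusted: "🤢",
  surprised: "😮"
};

console.log(emotions["happy"]); // "😄"
```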
Yeah, it's just id emotion.
And then, we would be able to do, we figured out how to do this and then I deleted it because I'm a dumb dumb. We need the current emotion equals, it was result.expressions.asSortedArray. And then, we got [0]. Crap, do you remember what it's called?
I just got the emotion div so we can use that. I think it's just .emotion, wasn't it?
That sounds right. OK.
Have an emotion div.
Let's do the emotion div. Oh, I'm on the wrong line. emotionDiv.innerText equals emotions[currentEmotion].
And then emotion div.style text, right? Because then we can just do it like this.
We haven't done this in a while. This is testing my brain.
It would be top. Face location.x.
Put in the position in CSS.
Left, that's X, top is Y. And then, the last one would be width. And this might be close enough. We'll probably have to do some font size stuff to make this actually function. Show me.
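The positioning they're writing here could be sketched as follows. The px units and the style.cssText spelling are details they sort out a few minutes later in the stream; buildEmojiStyle is our name, and the font-size scaling is our guess at the "font size stuff" mentioned:

```javascript
// Build an inline style string positioning the emoji over the detected face.
function buildEmojiStyle({ x, y, width }) {
  return [
    "position: absolute",
    `left: ${x}px`,
    `top: ${y}px`,
    `width: ${width}px`,
    `font-size: ${width}px` // scale the emoji roughly to the face
  ].join("; ");
}

// In the browser this would be applied as:
//   emotionDiv.style.cssText = buildEmojiStyle(faceLocation);
console.log(buildEmojiStyle({ x: 10, y: 20, width: 100 }));
```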
Let's try it.
Show me an emoji at the top of my face.
Did we break it?
I don't know. Let's log to make sure it's not fully broken.
I'm also adding a Z index just in case it's below things.
So something, probably, it's probably this asSortedArray. So let's log this instead.
I'm also adding a yellow border just to make sure it shows up on there.
I don't know if yellow's the right color for this. Cyan.
It doesn't like any of this. Skip result expression?
This might be me, but is style text correct? It's cssText, it's style.cssText.
Excellent. Thank you.
Thank you Google. There you go.
OK. Do a thing.
Please, thing. Oh, OK.
We're getting closer.
It's working, again.
Do we need to actually, here's a great question, can we even see this emoji? Let's maybe put a different one in here.
That we won't see. Where is this one by default? It is not visible by default. Mid width, mid height.
I'm going to comment out the the stuff first just to see if we can get it to show up.
That seems like a good idea. Let's, oh, boy, we're so close, out of time. So I think you can't, I think I've been doing this wrong like my whole career.
That's always fun to realize. Yeah. That was it. OK. Great. Well
OK, so there it is. So now once this starts running. Oh, it did a thing. Undefined. We're so close. We're so close.
Oh, gosh. OK.
What's undefined? The current emotion. OK. Let's get asSortedArray. Is it emotion or expression?
That's definitely it. It's expression.
OK. It's expression. Are you doing it? Hey, angry. What was disgusted?
I don't remember. We're almost getting it. Do you see it, it's almost working?
Oh, my gosh.
I can't do that one. Neutral, I managed to get an expression we didn't code. Also not getting the, I think our cssText isn't working. Oh, no, we need to add pixels, is why.
Oh, doy, that's why. And I was going to make the font size bigger.
Yeah, probably a good call. That's going to be
It might be massive.
So this is far from perfect. But this is actually wonderful. I'm so happy about this. There are definitely things we need to clean up here. But we managed to make a thing that works, right. And now if I can make it doesn't like my angry face. You've got to show your teeth. That's the trick. Show your teeth. Are you ready? Ready? There it is. That's how you do angry.
This is the angry face. This was an absolute blast. I'm super happy we were able to do this. We are all the way out of time. So we've got to wrap it up really, really fast. Cassidy, I'm going to drop a link to your Twitter, anywhere else people should go to learn more from you?
I have a newsletter if you'd like to share it.
All right. Go get on that newsletter. It is wonderful. I get that newsletter, and it's a delight. One of the ones I actually read. It's short and useful, which I always appreciate. OK. So
Thank you for having me.
Who do we have coming up on the show? Later this week, we've got Contentful's Stefan coming on, and we're going to do stuff in the browser with Ken. This is going to be a whole lot of fun. A lot of practical, weird content. Another shout out to the sponsors, Netlify and Sanity, and thank you White Coat Captioning for doing the captions. Thank you, Cassidy, for hanging out with us today. Chat, stay tuned, we're going to raid. And we'll see you next time.