Generate Dynamic Images with node-canvas
Creating dynamic images unlocks a whole world of powerful workflows. In this episode, Ulf Schwekendiek will teach us how to use node-canvas to create our own custom, dynamically generated images.
Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.
JASON: Hello, everyone, and welcome to another episode of Learn With Jason. Today on the show, we’ve got Ulf. I didn’t even ask you how to pronounce your last name. So I’m not going to try.
ULF: It’s all good. Nobody can. It’s a mystery.
JASON: I was like, Ulf, that’s it, right?
ULF: It’s basically one breath in and Ulf. It’s pronounced Schwekendiek, for the ones who care. You don’t have to try. It’s hard.
JASON: Thank you so much for joining us. So I’m super excited about what we’re going to do today. Before we get into doing things, let’s talk about you and your background, if you don’t mind sharing.
ULF: Of course. Of course. As you might have heard from the weird name, which is, by the way, the Mike of Sweden. Such a common name. Funny enough, I’m not Swedish. I’m German. That’s what that accent is. There’s, as far as I know, one famous German Ulf. He was an astronaut, I think, in like the early ’70s or ’80s or something like that. I might have totally gotten that wrong. But there’s an astronaut with the name Ulf. One other German in there. Exactly. So I’m a trained software engineer. I went to undergrad back home in Germany, and right afterwards left for the promised land here and went to study CS in grad school in Montreal. Throughout all my time of going to school, I always worked. Like during my undergrad, I was a software engineer at Siemens. Through grad school, I worked for Ericsson Research, then Microsoft Research during my second stint in grad school, where I switched over to study a lot more UX, which was amazing, to think of how people interact with the software you’re building. So I think these two skills go hand in hand together and tie a little bit into what we’re doing here. So a little more visual programming and whatnot later on.
ULF: There were some really cool companies I was part of that I luckily ran into. I was on the original Siri team before Apple acquired us, and then at Apple.
JASON: Wait, hold on. A quick aside. Today I learned that Siri wasn’t born inside of Apple.
ULF: Oh, my god. You did not know that? No, it was not an Apple original. We were 21 people, and Apple acquired us for an undisclosed amount.
JASON: Wow. That’s exciting. Part of internet history there.
ULF: Yeah, yeah. We were doing this, I believe, for a little over two years. Actually, it came originally out of SRI, the Stanford Research Institute. They had a very interesting virtual assistant program that Adam was running, one of the founders. Amazing person. So yeah, it was acquired by Apple.
JASON: Very cool. Very, very cool.
ULF: Turns out once I was in my first real start-up from like all these big companies beforehand, in Germany you say you smell blood, and you wanted to do more of that rather than go and work for the big corporation. So I left Apple actually pretty quickly and started with a friend of mine, our first company together. It was called Ditto. I don’t know if you remember these buzzwords back then. The social, local, mobile days of like the Four Squares and the Koalas. Had a launch party at South By Southwest. Eventually ended up joining with the team at Groupon and sold our company to them. We opened up, actually, the first San Francisco office for Groupon and did something that wasn’t about coupons at all. It was about the intersection of consumers and merchants and how, you know, Groupon was one of the only companies that had a large customer base in both. And like what cool, unique experience could we develop or research and actually connect these two together. Think of like going into a restaurant, opening your menu on your phone. Like, now that’s kind of during COVID times. That’s the new normal, right. You scan a bar code, see the menu on your phone. We went a little farther, and you could order from it. Then just walk away and we would detect that you left and could charge your credit card on file for it. Some really cool experiences.
JASON: You know, I love that kind of stuff. I think it’s so interesting, the way that you can push the boundaries now that we all have a small portable computer that’s location aware and identity aware and it’s got our credit cards in it. So many cool things can happen now that we’re starting to hit that space. Yeah, that’s really fun. So now, that’s not what you’re doing now.
ULF: That’s not what I’m doing at all. I moved on. I loved working for Groupon and eventually got the startup bug a couple more times. I did a company with the CEO of Groupon, Andrew Mason, afterwards called Detour. That was a location-aware audio tour guide. That’s the best I can explain it. Think of Ken Burns walking you over the Brooklyn Bridge. Everything is location aware. So, you’re listening to Ken Burns walking over the bridge and all of a sudden he says, stop. You stop. All your friends are with you. All the audio is in sync. He’s like, notice the people just walking alongside of you. They don’t know what they’re missing. Just look down, and he like points something out on the bridge that nobody else of those tourists knows. And you get these goosebump moments when that happens.
JASON: I’m not going to lie to you, Ulf. I think of Ken Burns walking me down the Burnside bridge at least twice a day. This sounds like a company I missed out on.
ULF: It was really, really fun. We ended up selling it to Bose. Sadly, it’s not been revived since. The last startup that was really fun, mostly also from a technology stack, that I had my hands on was called Descript. If any of you do podcasts, you should probably have tried it or used it. From a technology standpoint, I’m super, super proud of it because I pushed the team to use Electron to build an entire audio editor. And now a video editor, which is quite hard to do. Especially in the early versions. But the speed you gained by iterating quickly through the product, by using a web stack versus a native stack, is phenomenal. So it was really, really fun. Then just before I started doing what I’m doing now, I was at Postmates, where I was overseeing anything from fulfillment to a handful of back-end and front-end teams. From there, it really crystallized to see how inefficiently tech workers are working. You know, you and I spend probably too much time in meetings and discussions and don’t actually get a lot of time to code. And that’s why we became engineers, right? We love this feeling of getting deep into flow, forgetting the time around us, building something with our head. Because, I guess, our hands are not that good. At least we’re creating something. And we don’t get to do that much. Yet, we’re paying hundreds of thousands of dollars to these extremely smart, intelligent people to sit in meetings most of the time. I felt there has to be something that helps everybody feel more connected back to their code.
ULF: While you said, yes, it’s great that we have a mobile computer in our phone, I also think it dramatically changed our life in a way that we are losing attention to detail and that we cannot focus for a long time on anything anymore. We’re living in a time of infinite scrolls through social media, like through Twitter or TikTok or whatever it is. And all of a sudden an hour went by and you have to go to the next meeting or whatever. So I think really focusing on reclaiming that loss of attention is important.
JASON: One of the things that I’ve always joked about is that somebody needs to make a line chart that just shows, like, average time spent on the toilet by human beings. And it would just show the spike when smartphones came in. It was like almost no time and now it’s a significant portion of your day. (Laughter)
ULF: I don’t know what you’re talking about. I have great reading materials.
JASON: Right, I’m just going to read all of Twitter before I leave, right. I’m very on board with that. I think that’s a very true thing.
ULF: I’m seeing this even more now. I’m a first-time father of a 9 1/2-month-old. Sometimes, I kid you not, the bathroom is kind of like your safe haven of nobody crying, nobody wanting anything from you. So yeah. You know.
JASON: I’ve heard that only lasts until they reach the door knob.
ULF: I hear that as well. Yeah. There’s already like the rambling outside of the door that might happen. But yeah, and so with — you know, with where our society is going — and I know you’ve probably seen “The Social Dilemma,” for instance. While social media brought us closer together and brought a town-square-like feel to connecting people, it also really created a generation, or even modified a generation, into kind of brainless zombies to some point. Like, you just keep on — just one more hit. One more dopamine rush. It’s like emails. You just get so quickly addicted to it. I believe we can imagine a world where we do that in a healthy way only, like we do with anything else that’s not good for you — alcohol, any drugs or whatever, TV — and where you focus more on the time that you’re getting back to do something meaningful with your life. Just imagine a world where people would not spend 20 minutes on the toilet and instead get 15 minutes of coding in. What would we have created? Like, where would we be? What new companies would have come out? What new amazing libraries other engineers could build on would have been invented in that time? We’d remember how to focus again.
JASON: And what — like, there’s two sides of that, too. Because there’s — if we focus from a productivity standpoint and say, look, throughout your day, you have these micro-distractions. If you took that time back and were productive, what could you build? I love that. I think that’s a good way to look at it. I think there’s another side. A lot of days, I feel like by the end of the day, I’m just ragged. I have no — I’m completely drained of all of my executive function. I am exhausted. And I look back at what I did, and I’m like, I didn’t get very much done. So in addition to not being productive, I also wasted all of my relaxation time, my break time, my time to just be disconnected on these small spurts of scrolling through Twitter or trying to read Reddit or doing something that just took 20 minutes of my time that I didn’t really value. I just couldn’t bring myself to go back to work. Then I find myself, well, I screwed around for three hours at work today. I should work later. Now it’s 7:00 p.m.
ULF: And you’re exhausted.
JASON: Right. I’m exhausted, and I go sit in front of the TV and keep looking at my phone. I heard somebody make the joke that our lives are working on the medium rectangle so at night we can look at our big rectangle. That is a very depressing framing for it. But I do think on days when I get up and I focus and — and I work 7:00 to 3:00 for my normal hours. I get up at 7:00, shut down at 3:00, focus the whole day. I can also completely be disconnected from 3:00 until 9:00 when I go to bed and get that solid six hours of being a person that’s not working, not looking at a screen. And I have energy left, right. So yeah, I think it goes both ways. It’s productivity, and it’s recovery.
ULF: So, you said something really important that I want to underline. That is that you feel extremely tired when you have these small interactions or context switching. I think through the rise of computers, frankly, I feel like we — especially engineers — believed for a long time that if a computer can multitask, you know, it’s probably a good thing that I can do that too. There actually has been research done around it. We cannot. Our brain is wired differently than a computer. We can do two things at the same time, actively, if we’re really good and well trained. Even then, only the rarest of people can do it. So just focus — tell yourself, for the next 25 minutes, I’m going to do one thing. Put everything on do not disturb. We have these boxes that we give everybody where you put your phone in and just lock it away for that time. Just 25 minutes of uninterrupted focus time, no matter what you do. If you code, if you design, if you read something, if you do your laundry. Whatever it is that you need to do, do it uninterrupted. Then give yourself a break. The craziest part is if you just practice that for one day, you’re not exhausted. You have no problem recollecting what you have done that day. It is such a simple solution to something that sounds so hard to achieve. But if you put yourself in that right mindset and say, look, almost nothing in the world can — pretty much anything in this world can wait for 25 minutes. If it can’t, you know, somebody will continue to call me. My box will buzz. They will get through to my watch or whatever when it’s really, you know, a stuff-is-on-fire kind of thing.
ULF: You know how often that happens? Once a month, at most.
JASON: I’ll be honest. I’ve had my phone — my phone doesn’t make noise. It’ll vibrate if somebody texts me or calls me. I think I’ve had somebody call me with something that was time sensitive enough that it couldn’t wait 30 minutes maybe twice in my entire life.
JASON: There’s just not that much that happens that can’t wait 20 minutes. But you said something that I thought was really interesting about context switching. Something that I saw with context switching, because in a previous life for me, I did a lot of speaking in education around productivity. So one of the studies I found was from someone called Gerald Weinberg. If you’re doing five simultaneous tasks, 80% of your time is going to go to context switching. So if you take a ten-hour day and you’re doing five things that you’re switching between all the time, you’re actually only going to get two hours’ worth of work done. The other thing that you said that I really —
ULF: And you’ll be exhausted.
JASON: Yeah, you’ll be totally drained because you did ten hours of effort but only two hours of work. That’s such a depressing way to think about your life. You spend all your time doing the switching of your mental frame so you can do a different thing, but then you never get any actual work done. You just start switching to another frame. There’s a word for this. You were talking about how people think they should be able to switch contexts like computers can. But if a computer’s CPU switches too often, it does something called thrashing, where it gets into this loop where it tries to switch between tasks and actually never does any work because it’s switching too fast. It just locks your whole computer up. So even computers can’t multitask the way that we sometimes ask ourselves to multitask. We have to allow for the fact that once you’ve done that frame switch, you have to actually do meaningful work, or you’re just blowing a bunch of time getting set up to work and then never actually working.
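That context-switching arithmetic is simple enough to sketch in a few lines. The 80% figure is the one quoted above for five simultaneous tasks; the function name here is ours:

```javascript
// Effective work time when context switching eats a fixed share of the day.
// switchingShare is the fraction of the day lost to switching: 0.8 for five
// simultaneous tasks, per the Weinberg figure quoted above.
function effectiveHours(totalHours, switchingShare) {
  return totalHours - totalHours * switchingShare;
}

console.log(effectiveHours(10, 0.8)); // a ten-hour day yields 2 hours of real work
```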
ULF: The other piece in my mind that’s super important is to take breaks and give your brain time to rest. I have been many times, and I’m sure you have been too, coding late at night, wanting to get that one bug fixed. You change some code and think it’s fixed, then something else broke because of that. You just keep on going at it, and it’s not happening. An hour or two flies by, and it’s still not happening. You’re overtired, go to bed at 3:00 in the morning, get up at 7:00. You take a shower, and within a minute, it comes to you. It would have come to you in the first place if you had stopped and said, I’m tired, I need a break from this. I need to go on a walk. I need to do something. So important. Not a lot of people do it because we also have this “I need to finish” drive. Sometimes it’s actually much harder to finish and will take you a lot longer than if you give yourself a small break, be it five minutes, be it sleep, be it an hour of a walk, be it just doing something completely different and challenging the other side of your brain. All of these things will help you get your work done happier and faster and feeling less drained, so you can take some of the active time in your day and not sit in front of the big screen. Do something more meaningful because you’re not drained. You don’t need to have your brain on autopilot because you’re still trying to process all the things that happened in that day. Have you heard of the Zeigarnik effect? If you don’t complete a task, it lingers in the back of your head. Even though you’re not actively thinking about it, your brain keeps processing that task. So the easiest way to get rid of that is to split your task into smaller, achievable pieces. Basically say, hey, I just got this done. Fine. There’s other stuff I don’t have done yet, but that’s okay. I at least have this completed.
By completing something, instead of completing this big block, your brain is not distracted by like, oh, god, how can I complete all these other things? Because our brains want to complete their work. Super interesting.
JASON: Yeah, and I hadn’t heard of that. Boy, have I felt it.
ULF: Right? Yeah.
JASON: So before we fell into that rabbit hole, you were going to tell us the startup you built to address this problem.
ULF: I mean, yes. So the startup is called Centered. We built an app for your desktop that helps you with these methodologies. It’s basically a work companion. You say what you want to do in the next work session. You hit play, and we automatically turn your phone and your desktop to do not disturb and tell people on Slack that you are away, and you won’t get Slack notifications. We basically automated a bunch of this process. We have custom-made, research-backed focus music. It’s music that actually triggers flow states. That is one piece of it. A certain frequency that helps you stay focused for longer. Also, music that’s not too boring or too exciting at the same time. A certain beat pattern that will help a lot of people get into this state of flow quicker. You can also not use that and link your Spotify account, if you want to.
JASON: There’s an app I’ve used in the past called brain.fm. It only does the music.
ULF: Yes, it only does the music.
JASON: But that music is a cheat code. It really is.
ULF: And their music is great. We’re putting everything together in one package. So you have basically your virtual assistant. Think of like a Siri for productivity. His name is Noah. Noah actually will chime in. He has this lovely British voice. Noah will tell you when a task is due, when you should move on, when you should take breaks, but our desktop app actually also looks at the applications that you use. Noah will nudge you when we think you get distracted. So say you switched from VS Code to Twitter. After about 15 seconds, Noah will be very gently nudging you and asking you if Twitter could wait because we detect that’s a distraction for you right now. You can always correct that and say, no, I’m a social marketer, this is for me, this is what I do.
JASON: Sure, sure.
ULF: But you can basically customize your assistant to help you. There are other voices that you can use as well. For instance, with your friend Cassidy, you can flow together. She’s actually provided us with her voice to scold you when you’re getting distracted. So if you ever wanted that, you can join her group.
JASON: Oh, my goodness.
ULF: Maybe we’ll have you scolding us one day.
ULF: So, you can really pick your crowd of people with you and work with them. The last piece that Centered does is connect you together. Think of it like a virtual cafe. You can see other people working, really, really small. You can hide it, if that’s too much for you. But the effect of, okay, I see you on camera or not. It’s up to you to turn your camera on or not. I cannot talk to you. We have a break chat feature, but during flow time, I cannot talk to you, but I see other people being concentrated on their computer. The moment you turn your camera on yourself, you feel like, okay, there’s a spotlight on me. I better not get up right now and do something else. I should really focus for the next 25 minutes. So we’re helping you to get into the state of flow and stay there as much as you can. People spend hours every day in our app just to get more work done. The craziest part is you know engineers are amazing at estimating how long things take. We never get it wrong. I’m joking. We always get it wrong, basically.
ULF: Here’s a crazy thing. The one piece that’s specific about Centered is that you have to time-box your work. Hey, I want to code with you for 25 minutes, and I think we’ll be done in 25 minutes. In Centered, if you use our methodology of working in Centered, taking breaks when we tell you to take breaks and so on and so forth, people get their work done 30% faster than they estimated. So we can actually be good at estimation. And most of the time we’re overestimating because we get distracted. How crazy is that? People get their work done 30% faster. It’s mind blowing.
JASON: Yeah, that’s — I mean, that is really — and I have had anecdotally days where I really focus down. I can’t tell if Cassidy is trying to scold us to go faster right now, actually.
ULF: She just wants us to code, I guess.
JASON: (Laughter) But anecdotally, I’ve had days where I shut the phone down, and there’s nothing really going on that I need to pay attention to so I can really focus. You get to the end of those days, first of all, feeling like a million bucks because there’s nothing better than having one of those ultra-productive days. But also, you look at your list and go I never would have expected that I would have gotten this much done in a single day. And that is such a very cool — it’s a very cool feeling to be able to make that type of progress that quickly. But yeah, all of this is a good lead-up to what we’re going to build today, which is completely unrelated. We’re going to be working on building dynamic images, right.
ULF: We are. So, specifically, we had a handful of problems here at Centered. One of them is how do you make pretty emails? Most of the time it’s with images. For instance, we send you a daily email of the tasks you have done and the people that started following you. It has a pretty header that has customized text in it. You get that from your designer, usually. Have you ever coded for emails? It’s terrible. Like, you cannot do this. Then you come up with solutions very quickly, hopefully. And one of our solutions was, okay, cool, we can generate custom images and embed them in those emails to have parts of this email, like a beautiful leaderboard graphic or a task graphic or whatnot, just generated and added to that email. The other important piece of that is social share images, OG share images. You want them to look different for every page. And this is kind of another way of taking assets you have and auto-generating images from them. Kind of makes sense, right?
JASON: Yeah, it absolutely does. And I think what’s interesting is that it’s that hidden work. When you look at images, you never think, you know, I need to write a blog post, and that includes generating a social share image, putting an image together I can put into the email, and all these other things that need to happen. So it’s little stuff that adds up. And if you’re a developer, you don’t do design. Well, now it’s this piece of overhead that you either have to choose to ignore, and your stuff goes out without images, which we know is a little less good than if you send it out with images, or you have to go and ask the design team if they’ve got bandwidth, which they never do because they’re always running at 120%. So, you know, this type of automation is — I guess it does relate back. It’s a huge productivity booster because it lets you stay in the flow of building the thing instead of doing the meta work that allows you to ship the thing.
ULF: Yeah, like do you really want to have, at the end, a system, say a blog, for instance, and have the content creator of that blog every single time make sure they’ve gotten an asset from a designer and so on and so forth. It’s just yet another thing somebody has to do, and it might just result in not getting as many articles done. Or your article not launching on time. It’s just adding more and more dependencies. We also know that content creators love to customize their pages because it makes them feel like their own. If you can change the background, if you can change parts of the feel of your page, it actually converts a lot better. It brings you more fans. People immediately recognize the brand that you created about yourself, of yourself. We’re guilty of it ourselves. We have the lovely feature of study groups and work groups, and if you share any of these out, they don’t have custom share images right now. But we’re coding them today. So hopefully I can steal some of the code that you’ve done and actually use it and ship it this week.
JASON: All right. So this is maybe a first.
ULF: What you didn’t know is we’re actually going to make him work for Centered right now. It’s great. He didn’t know that. It’s fine. Productivity here. We can’t just have a fun conversation and not have an outcome, right?
JASON: (Laughter) All right. With that being said, we should hop over and start building this. Today on Learn With Jason, we’re going to do feature work for Centered. (Laughter) All right. Let me switch over to pair programming view here. And before we get started, I’m going to do a quick shout out. This episode, like all episodes, is being live captioned. It’s on our homepage at learnwithjason.dev, if you want to follow along. That is being handled by — who’s with us today? We have Rachel today from White Coat Captioning. Thank you so much for being here. That’s made possible through the support of our sponsors. We’ve got Netlify, Nx, and Backlight all kicking in to make the show more accessible to more people. And just put a little more money behind Learn With Jason so we can do fun things. We are talking to Ulf today. You can go follow Ulf on Twitter @sulf. And we were talking about the Centered app, so let me drop another link to that. And that is about everything I know about what we’re doing today. So Ulf, what should I do first?
ULF: So, I shared with you a Figma file. That’s usually how —
JASON: Yes, this Figma file here.
ULF: Yes. Shout out to Cassidy. Thank you for your image. Turns out she likes keyboards.
JASON: I have heard that Cassidy likes keyboards.
ULF: What we’re going to do is we’re actually creating an OG share image, dynamically, with node-canvas. So if you want to create yourself — actually, we’ll start here first. Let me see if I can get my mouse back here. I’m lagging here too. So what does our very first OG share image look like? First of all, they’re fixed in size. They’re 1200x630. That’s at least what I found on the internet is supposed to be the standard for these share images. Then what we already have from our — here, I can actually pull this over in one second. And I’ll put this here. If you can just follow me around a little bit, you can just observe me in Figma. This is already a group page that we have. So we have assets already set: basically this beautiful cover, a background image on the footer, a custom header image. So we’re going to reuse these assets to generate an OG share image and just slap our logo on there for good measure so it’s recognized as well. What our image consists of is a center-cropped background image, a cover to the right, a logo to the left, and in this one, I put a blur layer on it. I think in our example, we’re not going to blur it. We’re just going to put like a black opacity layer on top of the background image. So we can make sure — it’s a good methodology, because we don’t know what our content creators upload, but we want to ensure contrast. The easiest way to do that is to put a semi-transparent black layer on top of that image to ensure it has a certain amount of contrast if you put something on top of it.
JASON: Right. A quick welcome to Alex and friends who just showed up. You’re right on time. We are going to take this image in Figma here, this frame 2, and we’re going to dynamically generate these using node-canvas. So we were just walking through the image here on the right. That’s what the Centered app looks like. And this is Cassidy Williams’ group, Freakin’ Nerds. And if someone shows a link, this is the image we want to show up for the social sharing image. We have the logo on the left, image on the right. Then Ulf was just saying we were going to do a semi-transparent black overlay to make sure we have enough contrast.
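The center crop described above (scale the background so it covers the 1200x630 frame, then trim the overflow evenly) comes down to choosing a source rectangle with the frame’s aspect ratio. Here’s a small sketch of that math, independent of node-canvas; the function name is ours:

```javascript
// Compute the source rectangle needed to center-crop an image so it "covers"
// a target frame (e.g. the 1200x630 OG image) without distortion.
function coverCrop(srcW, srcH, dstW, dstH) {
  const srcRatio = srcW / srcH;
  const dstRatio = dstW / dstH;
  if (srcRatio > dstRatio) {
    // Source is wider than the frame: crop the left/right edges equally.
    const cropW = srcH * dstRatio;
    return { sx: (srcW - cropW) / 2, sy: 0, sw: cropW, sh: srcH };
  }
  // Source is taller than the frame: crop the top/bottom edges equally.
  const cropH = srcW / dstRatio;
  return { sx: 0, sy: (srcH - cropH) / 2, sw: srcW, sh: cropH };
}
```

The resulting values map directly onto the source arguments of the nine-argument form of `ctx.drawImage(img, sx, sy, sw, sh, 0, 0, dstW, dstH)`.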
ULF: That’s it. Any questions around that so far? Is this super clear?
JASON: I’m feeling great about it. Chat, how are you feeling? Any questions so far? And while we wait for the chat, I think we can just start doing any setup we need to do.
ULF: Yeah, go create yourself a Node project any way you’d like, whatever you’re comfortable with. I’m a TypeScript person. So I probably would do that in TypeScript.
JASON: Is there like a create TypeScript project? Is that what you’re thinking? Or should this be a server?
ULF: It’s whatever you like it to be. If you’re super familiar with Express, why don’t we just use that? That’s kind of neat. If you know how to build yourself an Express server real quick that serves an image route, perfect.
ULF: The way we’re going to render this image out, we’re piping the output of our image from our canvas to our response.
JASON: Got it, got it. So, I’m going to see if they have a quick start for us. It looks like they do.
ULF: I’m going to do exactly what you’re doing. Just Googling a little bit.
JASON: So a basic API is probably the default, right?
ULF: Start yourself a node package. I think that’s a good start.
JASON: So git init. It’s been a while since I’ve set up a node package from scratch. Let’s go with node canvas images.
ULF: Oh, this is super readable. I pasted you some code.
JASON: Okay, great. I’m going to npm install. I’ll just go back here to Express.
ULF: You know, the typical Express TypeScript starter, I’m sure there are 700 node packages that do exactly what we want. Here’s a good one. It’s called TypeScript Express starter on npm. I think that’ll do what we need. Hopefully. And probably way too much. We don’t want to create controllers and routes and models. Who makes these things these days?
JASON: Let’s roll with the very basic. I’m going to skip TypeScript because I don’t want to have to set up the build stuff for it.
ULF: I like it.
JASON: So what I will do is open this up, and then we’ve got — here, if I create an index.js and paste this in, then we have a basic route. So we’re going to pull in Express, set up an app, use port 3000. The base route — so this is the get. So we’ll use a get request. And if someone hits that route, we’ll send it to hello world. Then we tell it to listen on our port, which is 3000. And we’ll console log a little bit. So if I take all of that and do an npm start — oh, I mean npm run start.
ULF: One thing, it looks like you are missing something in your package.json. Let’s look at it.
ULF: Yeah, we only have a test.
JASON: So now it should be — no, it’s not running. Needs to run node index. Oh, so I need to tell it to run this.
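The missing piece Ulf spotted is a `start` script in `package.json` next to the generated `test` script, something like:

```json
{
  "name": "node-canvas-images",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}
```

With that in place, `npm run start` (or just `npm start`) runs `node index.js`.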
ULF: Come on, auto pilot.
JASON: If I come out here and go to local host 3000, we get our hello world. Which is great. If I come in here and edit this —
ULF: Got to restart. You know, we don’t get fancy hot reloading here.
JASON: That’s okay because we’re kicking it old school today. It’s going to be great. All right. So I have a very basic node express server running locally.
ULF: Absolutely. Next thing you want to do is you want to add the canvas project. Let’s see here.
JASON: Is that node-canvas?
ULF: It’s just called canvas. Let’s see if that worked for you. I see a bunch of errors.
JASON: My mic is — like I’m peaking? Let me cut that a little bit. I did turn on the input filter as much as I could. It was real bad the other day. Uh-oh.
ULF: Yeah, I think it got better. Dun-dun-dun-duh. Canvas, command failed.
JASON: Oh, no. This is always my favorite problem. Let’s try — what don’t you like? Response not found. Oh, wait.
ULF: Try to run arch and run it through —
JASON: You’re going to have to tell me what those words mean, unfortunately.
ULF: There’s a really easy way to — especially in node, where any binaries might not be compatible. So what you can do is — I’ll show you — and I’m going to send you something in the chat. You can prefix any command line command with arch and say x86_64. That will basically run everything in Rosetta.
JASON: Oh, even more fun.
ULF: Oh, yeah. That usually works, at least on my end. Depends also on how you have node set up.
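In practice the Rosetta trick looks like this. It's a sketch for an Apple Silicon Mac only — the `arch -x86_64` flag is macOS-specific, so it's guarded here:

```shell
# On Apple Silicon, prefixing a command with `arch -x86_64` runs it
# under Rosetta 2, so native modules build/install against x86_64.
if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
  arch -x86_64 npm install canvas
else
  echo "skipping: arch -x86_64 only applies on Apple Silicon macOS"
fi
```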
JASON: I have node set up through Volta.
ULF: Oh, okay.
JASON: Let’s see. Volta node m1. Let’s see if I can node-canvas m1. Has anyone solved this problem?
ULF: It’s kind of — node m1 is a little bit of a pain sometimes, especially if you install it not quite the right way. We still have a handful of x86 dependencies. Turns out this might be one too. So we install — at least here at Centered — node in x86 instead of on m1.
JASON: Okay. Let me see if I can — volta install x86. Currently don’t support downloading 32 bit.
ULF: We want the 64 bit.
JASON: Set node distribution — what? That’s something else.
ULF: You can also go super, super old school and just download and install node from node js.org.
JASON: Oh, my goodness. Let’s see. I feel weird about that. What if we use something like CodeSandbox?
ULF: Oh, let’s try that. Let’s see what happens.
JASON: I think the Sandbox is just going to work. That’s their whole thing. That’s the thing to do. So, I think it’ll be pretty straightforward. Let’s give it a shot. I’m going to do an express. I’m going to do a Node — simple Node. That’ll at least get us started. Then, let’s see what comes in. We’ve got a package JSON. We have very little going on in here. So I’m going to add express. Okay.
ULF: And let’s see if they allow us to do canvas.
JASON: Yeah, let me grab this and just make sure this is going to do what I expect it to do. Let’s save that and you restart.
ULF: We just have to refresh, reload maybe. Is there a console? There’s a terminal.
JASON: There’s a restart in here somewhere. Restart Sandbox... example app listening on port. There it is. So there’s our running app. So let’s install canvas. Which was where? Dependencies. We’re going to get canvas.
ULF: Okay. Fingers crossed.
JASON: Here we go. Success.
JASON: Great. All right. So let’s get in here. I think we’re ready to roll.
ULF: Cool. What you want to do is you want to import create canvas from the canvas package.
JASON: Require canvas.
ULF: And it’s probably — you probably want to have some curly braces around it.
ULF: I assume, at least. I’m so out of old-school JavaScript syntax these days.
JASON: The good news is we’re still going to get some of that TypeScript.
ULF: Love it. Beautiful. So let’s create us a canvas. First of all, what we want to do is actually not just get a slash there, we want to — I mean, it doesn’t really matter. We can return a specific image, for instance. Say OG image.png.
ULF: Beautiful. Now we’re creating ourselves a new canvas. You can create — yep, nothing for now.
JASON: Let me just get over here. I’m going to restart our Sandbox. We can go to OGimage.png.
ULF: Cool. Create a canvas with create canvas as a constructor. And it wants two parameters, the width and height of the canvas, which is 1200x630.
ULF: You got it. Cool. So, lastly what we want to do is we want to say Canvas in the next line. We want to say create PNG stream. Exactly. And we want to pipe that. And you can get rid of your line 50.
JASON: And this should give us a PNG, right?
ULF: A PNG image.
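At this point the route boils down to the sketch below. The piping is pulled into a helper here so it's easy to see in isolation (the helper name is mine; the commented usage assumes the canvas package is installed):

```javascript
// Create a canvas and pipe its PNG stream straight into the
// HTTP response, as Ulf describes.
function sendCanvasPng(canvas, res) {
  res.setHeader('Content-Type', 'image/png');
  canvas.createPNGStream().pipe(res);
}

// With node-canvas installed, it would be wired up roughly like:
//   const { createCanvas } = require('canvas');
//   app.get('/ogimage.png', (req, res) => {
//     const canvas = createCanvas(1200, 630);
//     sendCanvasPng(canvas, res);
//   });
```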
JASON: Oh, here we go. Reloading too fast.
ULF: You’re too fast!
JASON: What does that mean? Go back to the home page, okay. Let’s go to OGimage.png. Might have —
ULF: There you go. That sounds about right.
JASON: So it’s giving us nothing, but you can see here that there’s a scroll, which means that the image does exist. It’s just currently blank.
ULF: You can probably also try to click drag it away from there and you see an image. Maybe not.
JASON: There it is.
ULF: There you go. Woo-hoo. There’s our canvas. Beautiful. So what’s the first thing? What do we want to do? Node-canvas is basically a Node implementation of Canvas. So we can do anything we want to do on normal front-end Canvas. The very first thing we want to do is draw an image. With that, we have to load it first. There is a load image function being supplied by the package as well.
JASON: Before or after the pipe?
ULF: The pipe is the very, very last command here. So plenty of lines we’ll fill in here. You want to import load image, in addition to create canvas from canvas. Beautiful. Then we’re actually loading an image. I’m going to give you an image to load. So we actually have a fun image here to load. Give me one second. Here you go. So it’s an async function.
JASON: Oh, here it is.
ULF: I just sent you like a URL.
JASON: It’s an async. So do I need to make this —
ULF: Yes, you can. You want to load that image. It’s just a pretty nature image. Okay. So next piece what we want to do is center crop these images. You’re going to name that image background image.
JASON: Got it.
ULF: Or BG image or something like that. Yes. So the next thing is that — so we don’t have to just talk through some boring math, I’ll send you — if I can find our conversation again. Where are you?
JASON: The challenges of working across seven windows here.
ULF: Seriously. I sent you a little bit of copy/pastable content.
JASON: All the line breaks got weird. But that’s okay because I can read code. We’re going to have vertical and horizontal ratios.
ULF: We see if it’s landscape or portrait image and take the larger ratio. Then we transpose the image away from the canvas width so we’re basically centering it. Then lastly, we still need a context, but we’re drawing our image into that context. So before line 13, we want to create a context and say canvas — exactly. You know exactly what we want.
JASON: Wait, canvas?
ULF: Nope, we want to get our get context. And the parameter is 2D as a string. This works in both ways. Load image can take a file path or a URL. It’s good.
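The pasted snippet itself isn't shown in the transcript, but the center-crop ("cover") math Ulf describes works out to a pure helper like this one (names are mine):

```javascript
// "Cover"-style fit: scale the image so it fills the whole canvas,
// then center the overflow on the longer axis.
function coverFit(imgWidth, imgHeight, canvasWidth, canvasHeight) {
  const hRatio = canvasWidth / imgWidth;   // horizontal ratio
  const vRatio = canvasHeight / imgHeight; // vertical ratio
  // The larger ratio is the one that fills both axes (landscape vs. portrait).
  const ratio = Math.max(hRatio, vRatio);
  // Shift the oversized dimension so the image is centered.
  const x = (canvasWidth - imgWidth * ratio) / 2;
  const y = (canvasHeight - imgHeight * ratio) / 2;
  return { x, y, width: imgWidth * ratio, height: imgHeight * ratio };
}
```

The result feeds straight into `ctx.drawImage(bgImage, x, y, width, height)`.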
JASON: Okay. So here is our image. And if we reload, it’s cached. Oh, no, it’s not. It pulls a different one. Very cool. Okay.
ULF: It’s a little API end point that we build ourselves that gives you a bunch of pretty curated images.
JASON: That’s really fun. Thank you for the sub, Charlie. All right. So now we’ve got a background image. And so if this was me, my gut says the next thing I need to do is overlay that semitransparent black to give us enough contrast.
ULF: Oh, man. You’re hired.
ULF: Good. So let’s do that. The first thing we want to do is — so, we draw things on the context that we have from our canvas. So this is a very, very simple operation. It’s called rect. So ctx.rect. And we start on 0, 0.
JASON: Then it’s going to be 1200x630 so it’s the full thing, right?
ULF: I hear you. The software engineer in me would like to call you out on that and say, gosh, now I have to — if I want to generate an image in a different size, I have to change it on two places.
JASON: Magic numbers. I’ve done magic numbers. You want 100%?
ULF: I would do canvas.width and canvas.height.
JASON: Oh, I didn’t realize that was a thing.
ULF: It is a thing. We already used it for our center fill.
JASON: Oh, great. This is what happens when you copy pasta, friends.
ULF: Still got to understand the code.
JASON: That’s right. That’s right.
ULF: We want to set the fill style. Next line. Not as a parameter. Next line. Welcome to canvas programming. It’s not a function. We’re giving it a hex value. So #000000. All black. And now we’re giving it a transparency value. Like try AA.
JASON: RGB alpha. That eight-digit hex throws me sometimes. But it’s so nice that’s supported now.
ULF: I think you can also just write RGB. I’m just not — I’m so used to eight-digit hex. But I like my transparency in there. Last piece is you want to fill the context. So you call context.fill. No arguments. We’re all good. So, try to regenerate your image.
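Taken together, the overlay is three calls: a full-canvas rect, an eight-digit hex fill style, and a fill. A sketch (the helper name is mine):

```javascript
// Darkening overlay: semi-transparent black over the whole canvas
// to give foreground text enough contrast.
function drawOverlay(ctx, canvas) {
  // canvas.width/height instead of magic numbers, so resizing
  // the canvas only has to change one place.
  ctx.rect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#000000AA'; // 8-digit hex: RR GG BB AA
  ctx.fill();
}
```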
JASON: All right.
ULF: But don’t do it too fast.
JASON: Okay. Much darker. If we refresh, we should see all of them kind of come out — yeah, we’ve seen this one before. So this is the darker version of these ferns with that overlay on them.
ULF: Beautiful. Fun. So we’re already halfway there.
JASON: Yeah, dust our hands off. Head home. I’m going on break.
ULF: Not quite yet. Not quite yet. One work stretch first. So we can take our break and celebrate our wins. Let’s get Cassidy’s face in there. Or at least partially covered face. So you tell me what we’re going to do next.
JASON: So, my guess is we’re going to do something very similar to this. So I’ll take this and pull it down here, and we will say image.
ULF: Let’s call it like a poster image, maybe.
JASON: Poster image. Then we’ll get a URL that’ll load in that image for us. And then we’re going to need to position it. To position it, we want — let’s see. Draw image. So we put in the image itself, its start X, start Y, and the width and height. So we need to figure out here in our Figma this image is at 814x51. Those will be our arguments to put the image in place. Do we need to crop these? Or are these going to be —
ULF: We don’t. They’re already cropped.
JASON: Perfect, perfect.
ULF: So I sent you the image.
JASON: Now what I can do is context draw image. We’re going to do poster image, and we’ll start at 814, 51. Then we know this is going to be 336x528. If we were going to make this more resilient, we could do it as a percentage and calculate these sizes so if we resized our OG image, it would be the right blend of aspect ratios. What else needs to go in that?
ULF: That’s it.
JASON: Center shift. There is no center shift.
ULF: We’re not moving it or changing anything. This is it. This is all we needed to render this image in there.
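The poster placement is a single drawImage call with the Figma coordinates; as a sketch (helper and constant names are mine, and the coordinates are the ones read off the design in the episode):

```javascript
// Fixed placement from the Figma design. For a resizable template
// these would be computed as percentages of canvas.width/height.
const POSTER = { x: 814, y: 51, width: 336, height: 528 };

function drawPoster(ctx, posterImage) {
  ctx.drawImage(posterImage, POSTER.x, POSTER.y, POSTER.width, POSTER.height);
}
```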
JASON: Okay, perfect. So then that should be it. Should be done.
ULF: Do you want to try
JASON: What don’t you like? Oh, I forgot a comma.
ULF: There we go.
JASON: Look at it. Look at that, everyone. Like magic.
ULF: But here’s the thing. The designer came and says, well, this is great, but we don’t want these. They get hurt physically, and I think mentally, when they see any non-rounded corners. They don’t feel safe if there are no rounded corners around images. So what we need to do is actually build a mask around this. And mask that image, give it a little bit of a rounded-corner look.
JASON: Just a little bit of TypeScript.
ULF: It’s just a function that draws a rounded rect.
JASON: Okay. I’m going to stick this up here. Got to do the new line thing. Got to drop out the TypeScript because we didn’t set up for that.
ULF: I feel so dirty. It’s great.
JASON: This is my role in life, to introduce more chaos via lack of types. Let’s see.
ULF: So we’re moving the context. We’re creating an arc, another arc, closing our path. You can see there it’s just slightly — like, the last argument is a radius, basically.
JASON: Got it. Yeah, the arc to, these are SVG drawings, right?
ULF: They’re not. This is all based on Canvas. So, slightly different. Similar concept but not anything SVG. This is really just rendered out into pixels. We actually want to clip it. What we’re going to do before we draw that cover image, if you go back to your main function, before we draw the poster image, we’re going to save our context. So we say context save. Next thing is we draw our rounded rect.
ULF: Just call, what, rounded rect. Just the helper function we built.
JASON: Oh, I got you. Okay.
ULF: And where do we draw it from? Exactly the same position as the image. 814, 51 — exactly. Like, eventually you would want to pull that out into its own variable. The first parameter is context. And the last parameter is radius.
JASON: Radius. So our radius is, what do you think?
ULF: Ten. Beautiful. Next line, now we actually have to tell Canvas to clip it. So we say context clip. That’s it. Then we draw our image and restore our context. So we get out of this clipping. What is that, 66? It’s not save. It’s restore.
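Ulf's pasted helper isn't reproduced in the transcript, but a typical rounded-rect-plus-clip pair looks like this sketch (names are mine):

```javascript
// Trace a rounded rectangle path with arcTo corners.
// Doesn't fill or stroke anything: it only defines the path.
function roundedRect(ctx, x, y, width, height, radius) {
  ctx.beginPath();
  ctx.moveTo(x + radius, y);
  ctx.arcTo(x + width, y, x + width, y + height, radius); // top-right
  ctx.arcTo(x + width, y + height, x, y + height, radius); // bottom-right
  ctx.arcTo(x, y + height, x + radius, y, radius);         // bottom-left
  ctx.arcTo(x, y, x + radius, y, radius);                  // top-left
  ctx.closePath();
}

// The save/clip/draw/restore dance from the episode.
function drawClippedImage(ctx, image, x, y, w, h, radius) {
  ctx.save();                       // snapshot state, including clip region
  roundedRect(ctx, x, y, w, h, radius);
  ctx.clip();                       // everything drawn next is clipped
  ctx.drawImage(image, x, y, w, h); // only the rounded area shows
  ctx.restore();                    // drop the clip so later text isn't eaten
}
```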
JASON: Restore. Gotcha. So, let me step through this. We’re doing something I’ve never seen before. When we save the context, we are — okay. So, I’m going to try to reason my way through this. Then you can tell me where I’m wrong. So we are saving our context, which basically tells it to pause on actually rendering things. Is that correct? So we’re saying, okay, we’re about to do a group of things. So we save the context. We draw this rectangle. Then we say use that rectangle as a clipping path. Then we drop the image into the clipping path. Then when we restore the context, that’s like committing all three actions at once.
ULF: Exactly. It’s committing all of these together. So the important part is that the moment — anything that happens after context clip, it will be clipped by that rectangle.
JASON: And so this —
ULF: Go for it.
JASON: The save and restore is effectively saying we want to do this clipping path, but we need to make sure it’s contained to only the image we want to clip and not to everything else that we do. Because we’re going to add text and do some other stuff. Which if we didn’t do the save and restore, it would get caught by the clipping path, and therefore be invisible because we’re going to place it outside of the clip path.
ULF: And you would have one of these 2:00 a.m. debugging sessions trying to understand why your text is not showing up. You did everything right. Because it was clipped away, yes.
JASON: Okay. So with that, we have rounded corners.
ULF: Now our designers will be happy again.
JASON: Beauty. So, I think the next thing is we need to place the logo, right? Now, the logo, same kind of deal. Is that correct?
ULF: Not quite. What I would like to do with you, and this is — still, I haven’t prepared for this. We’re both doing this together.
JASON: Doing it live.
ULF: We’re doing this live. So, I would like to get this exported as a PNG. Actually loaded not from a URL but loaded from disk and see if that’s in any way different. So you can get rid of the second SVG export. Exactly. It doesn’t let you, huh?
JASON: But that’s okay because I can export both and grab the one that I need.
ULF: I deleted it. Well, you’re faster. Beautiful. We want the PNG.
JASON: There’s the PNG I need. So if we come over here, I should be able to add a file.
ULF: Right. Upload file.
JASON: Okay. So now we have this dark.png, which looks great. If I come down here, I want to load that. Now, if I do a load image, can I use just a relative path? Or do we need something different?
ULF: I believe you can use a relative path. You’re just a little out of context. You want to write this line above 68.
JASON: Here. Let’s do the same thing here. We’re going to context draw image.
ULF: Yep, super simple.
JASON: Stick that down here and change out our values. It’s 121, 252. And the width and height are 462 and 114. Okay.
ULF: And you want to draw the image.
JASON: So theoretically speaking, if all goes well, this just does what we want. So let me go over here, restart our Sandbox. Reload the page. And Centered. All right. Let me pull this out here. And look at that, friends. So, I mean, chat, you can see how freaking powerful this is, that we were able to do this, you know, this quickly, by just placing things where we want them on the screen. This is very, very cool. But yeah, I —
ULF: Yeah, and how quickly you can translate it from something like Figma, something that a designer gives to you. Just have it render up.
JASON: Mm-hmm. Yeah, really, really cool stuff going on here. So this is fantastic. And chat, yeah, I think the audio crackling is because I’m peaking, and I need to turn down the gain on my mic, but the gain on my mic is over here, and I’m not going to make you look at a blank screen while I figure out how to do that. So I’ll fix it for the next stream. Okay. So, Ulf, is there anything else you want to add to this, or are we calling this a success?
ULF: I think we call this a success. There’s something I would like to point out, which is very cool. You know, one more thing that a lot of people want in this. That is you can use Canvas actually pretty successfully to render text as well. We’re not going to do this right now, but I urge you to look into the load font — wrong, the registerFont function that the Canvas project has. You can basically create a font and register it and use that font, draw basically pixels in that specific font anywhere in your Canvas. Really cool if you want to do, like, custom text. Like hi, Jason. Or whatever is in your email headers. Super powerful. Again, this library is very fast, too. Which is the other big piece. Again, it’s like a production example. We probably don’t want to generate these images every single time. You know, you could think of generating them once on your next deploy, for instance, or actually have a database, put them somewhere, upload them. We showed how to pipe them directly from Canvas out. But obviously, you can get a buffer. You can get a PNG. Whatever you need.
JASON: You know — I muted myself because I have contractors in my house. So one of the things that I see that’s really high potential here looking at this code, we could actually run this in a serverless function. I think if you were to look at this — I believe I’ve seen some examples that are pretty similar here. But I have a real-world example that I can show you that doesn’t use node-canvas but that does generate images on the fly. If you go to learnwithjason.dev and go to any one of the episodes — so we’ll go to episodes here. Click on one of these. Like, we just had Chance on the show. Then just add /poster.jpeg. It will build this. This is automatically generated. This file doesn’t actually exist. It’s a serverless function, right. And if you go and look at the code on learnwithjason.dev — I can share this in the chat for anybody who’s interested — this is the — we have the posters somewhere. What did I call it? API episodes. Yeah. And this is a little messy, but buried down here under poster is — you can do a schedule. You can do starting soon or poster. If it is one of those, then it will build out, in this case, a Cloudinary URL. Same general idea. You could have this load up a Canvas, and you could load the images you need. I load these images out of Sanity. A combination of Sanity and Cloudinary. So the guest images are in Sanity. The images that don’t change are in Cloudinary. Then I have — you know, we load text in and load in the title of the episode and stuff like that. All on the fly, on demand. Then the really interesting thing here is I use Netlify’s on-demand builders, which means after this is generated once, it’s in the cache. It’ll get served like a regular static asset for every other request until the site gets built again. 
So you can do exactly that same thing with what we just built here, where this could be rendered once in a serverless function by node-canvas, then using an on-demand builder, it would never have to be rendered again until you build your site. So a lot of things for custom images. If you’re interested in learning about how those work, I can drop a link. There we go. And I’ll just drop one of these in the chat for y’all. Anybody who wants to learn how those work. Then a link to the sites. You can find those, and if you want to play with these, you can change the poster to schedule, for instance. It’s almost the same, but it includes the date and stuff like that. So little fun things you can do with these images to very quickly generate a lot of different versions for all these different things. This is how I power Twitter and all of that stuff. Now you have this code so you can do exactly the same thing and build these beautiful images on demand. Work with your designer once on a template, and then you’re off to the races. You never have to do that meta work again.
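The "generate once, then serve from cache" pattern Jason describes boils down to memoizing the renderer on a key; here's a naive in-memory sketch of that idea (on-demand builders and CDNs do this durably across requests and deploys, which a process-local Map does not):

```javascript
// Naive in-memory cache: render each image once per key, then
// serve the stored result for every later request.
const imageCache = new Map();

async function getImage(key, render) {
  if (!imageCache.has(key)) {
    // The expensive canvas render runs only on the first request.
    imageCache.set(key, await render(key));
  }
  return imageCache.get(key);
}
```

In a route, `key` might be the episode slug and `render` the node-canvas pipeline built in this episode.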
ULF: Super cool.
JASON: So, Ulf, for somebody who wants to take this further or learn more, do you have any recommended resources or anything that we want to link to here?
JASON: And that’s like — I don’t know if y’all have seen meme generators, right. But if you look at a meme generator, a lot of these kinds of tools are sort of doing that thing. You can say, like — text one, text two. So it’s like chat, Cassidy. You can do stuff like that. And you can do these very quickly. When you go to generate, it actually builds that as an image. So this may not be using node-canvas, but it’s using something very similar. It’s taking Canvas and placing things around so you can edit and move them. When you’re done, it commits that. A very friendly client-side editing process.
ULF: You can all of a sudden share the code you use on the back end to generate this image with your front end. You don’t have to wait for a back end. You don’t have to pay for any server costs. Just generate your canvas. That same code could be used on the back end to actually render it out and save it and make it shareable or put it in a cloud function.
JASON: Yeah, yeah. Really, really, really powerful stuff. All right. So Ulf, I’m going to send people to your Twitter. Where else should people go if they want to find you online?
ULF: Yes, they can find me on Centered. I’m working on Centered every single day. Our whole team is on it. So if you ever want to hang out in our break chat, we have this whole concept, when you are on a break, you can talk to anybody else who is also working. You can ask me any questions there, send me a DM @sulf. Email is first name, three letters, at centered.app. Many ways to get ahold of us. If you check out Centered in general, we have a Slack community as well on our website. So you can join there and have direct access to us.
JASON: Great. All right. Well, with that, I think we’re going to call this one a smashing success. We were able to build a very cool automatic image generator. Let me send this to the chat, actually. Public, everyone can view, copy Sandbox link. Here you go, everyone, if you want to play with that. On top of that, we had — what a wonderful conversation about productivity and giving yourself some space to not only get a lot of work done but also have less of a sense of exhaustion after you do that work. Really enjoyed that chat. So, Ulf, thank you so much for taking time with us today. Chat, thank you for hanging out. Let me do another shout out to our live captioning. We’ve had Rachel with us all day from White Coat Captioning. Thank you, Rachel, for being here. That’s made possible through the support of our sponsors, Netlify, Nx, and Backlight, all kicking in to make this show more accessible to more people. And while you’re looking at things on the site, make sure you go and check out the schedule. We’ve got some really fun stuff coming up. We’re going to talk to Faraz from Railway on Thursday. We’re going to deploy a site with self-hosted analytics. So if you want to use analytics but don’t want to set up Google Analytics or something like that, we’re going to walk through how you can do that. We’re going to do it on a Next.js site. That’s going to be a fun one. Then next week, we’re going to dig into building and deploying React apps from monorepos. That’s going to be a ton of fun. If you work on big teams, this is a big challenge, figuring out how to do code access control without having hundreds of repos. So this is going to be a lot of fun. With that being said, I’ll have a few more episodes up soon. Follow on Twitch if you want to get the notification when I go live. Add on Google Calendar, and that will show you what’s coming up. Those are automatically updated. And you can always subscribe on YouTube. We now have just a ton of YouTube content. 
Let me get to this channel. Where is it? Learn With Jason. Oh, boy. Should have been ready with this link, y’all. Make sure you go and subscribe. Me and 26,000 of my favorite friends are over there on Learn With Jason’s YouTube right now, posting new videos as they come out from this episode. I would really appreciate a subscribe and do all the things. Smash that like button. Ring the bell. Whatever it is you’re supposed to do. With that, I’m going to call this one a win. Ulf, thank you so much for hanging out.
ULF: Thanks for having me.
JASON: See y’all next time.
Closed captioning and more are made possible by our sponsors: