
Advanced Effects for Live Streaming

Custom shaders, animations, "hacker mode", and more — Mark Dobossy will teach us how to build extremely cool OBS experiences to level up our livestreams.

Full Transcript


Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

JASON: I'm going to throw a quick note up to Twitter that we're live. Okay. We're live. We should be able to see -- I guess we can't see that. So I'm going to keep the chat over here. And hello, everyone, and welcome to a chaos-mode edition of Learn With Jason. Today on the show, we have FiniteSingularity, Mark Dobossy. Thank you for being here.

MARK: Very welcome. Glad to be here.

JASON: So everybody please let me know how the audio, everything like that is coming through because we are -- we're trying something fun today. We're going to attempt to make different filters and effects within OBS, which means that I am actually on a PC instead of a Mac today. So I'm on a different set -- a different instance of OBS. We're going to be running a different instance of OBS. I had to set up my audio differently. If things are weird, not functioning as expected, please do let me know because -- we're making this up as we go along. So Mark, before we start getting into all that, can you give us just a little bit of a background on yourself, who you are, what you do?

MARK: Yeah, so I'm actually one of these folks who had a really atypical journey into this whole software, dev, tech thing. I hold a PhD in structural engineering, of all things, and did a whole lot of work on high-performance computing. We called it back then Beowulf clusters, which I guess is cloud computing today. We did a whole lot of distributed computing kind of things. You know, it was a learning journey. Came out of my graduate work, did a post-doc in a couple things, started up a little environmental engineering company. That lasted for several years, simulating subsurface fluid flow stuff. It was kind of fun programming problems. I did my obligatory stint in Fintech and suffered through that for a couple years. The happiest day of my life was when I jumped out of that and started doing what I do now, which is a construction logistics mobile app, of all things. So I've jumped around a lot. A few years ago, I tuned into Michael Jolley, and he had encouraged me to start streaming. I'm so glad I did. It's really one of the highlights of my week, every time I can flip on that stream button and kind of co-work with people and share with people what we're doing. One of the big things I like to do on stream is interactions with chat. So building things, physical and software based, where chat can affect the stream and change things on the stream. It's a blast. It's an absolute blast.

JASON: That's so much fun. I have seen a little bit of your stream, and I've enjoyed how creative you've actually been with all of that. You know, I've done a little bit with interactivity. I've had a couple fun games and stuff, but they've all been pretty limited in what they are. You know, you can make something kind of appear on the screen, but it's a looped GIF, right. For a while there, we had it set up so that if you used the "boop" emoji in chat, it would drop and fill the screen. I'm looking at ways to bring that back. Probably not for the episodes like these but for the Tuesday streams when things can be a little more chaotic.

MARK: Absolutely. I will warn you up front, it is a rabbit hole. When you start going down the rabbit hole, sometimes you don't pop your head up for air for quite a while because it can be a lot of fun. It can be a lot of fun. There's something about being developers, I think, where software development in and of itself is a creative kind of work or task. We don't think about it that way a lot. One of the things I love about what I do on the stream and the things that we build is it lets me kind of flex those more specifically creative parts of my brain and work those out a bit, which I think is really, really valuable as a software developer.

JASON: Yeah, I think a huge portion of what makes me feel good about doing this job is the playfulness of it, right. I think you can find a way to be playful in any career, any discipline, but I feel that the web is uniquely suited to being playful. It's such an inherently human medium, such an inherently connected medium. So no matter how you get on to the web, whether it's through websites or through streaming software or any of those things, there's this amazing opportunity to be playful and to be playful in a group, like in a community. That's actually what originally made me aware of your work. It felt very much like we were maybe kindred spirits in that way in terms of just you've got so much playful stuff going on, on your stream. What really piqued my interest is you were doing things I thought were impossible.

MARK: And they were things I probably thought were impossible a day before, right. I get these ideas, and I go, ah, I don't know if I can do that. And that just makes me want to do it more. So I start to research how might we do this and then shaders hit you in the face and it's all over.

JASON: And speaking of the community, thank you so much for the subs. That's wonderful. We have a few first-time chatters on my stream. Maybe they're coming over from yours. Hey, McNets, abridgewater. A lot of commentary about being playful. Bobby Tables is saying there's some industries where maybe playfulness is unwise.

MARK: Yeah, civil engineering is one of those, structural engineering. They don't encourage a lot of playfulness there.

JASON: So, okay, before we get into the -- and I just want to acknowledge, I know that our audio is a little bit out of sync. I do apologize for that. I think we're having a bit of an audio -- like a Wi-Fi weirdness thing. So apologies, but we're going to soldier on. So let's talk a little bit about just in general at a high level for folks who haven't seen your stream before, when we talk about these effects, what sorts of things are we talking about? What's happening?

MARK: There's a lot of different ways you can do effects on stream. Sort of at the highest level, there's a bunch of plug-in developers for OBS, but there's two that really come to mind. There's a fellow by the name of Exeldro, who's built a ton of amazing plug-ins. One of the ones that's sort of the gateway plug-in to getting into this kind of crazy stuff is a plug-in called move transition. And it will allow you to animate moving things around your screen. It allows you to animate moving, like, values. So if you have an alpha value on yourself, you can animate it going from zero to one so you fade in. It allows you to do really sort of neat keyframe animation kind of stuff on your stream. Additionally, he's been porting over an old shader filter, which is what we're going to be playing with in a little bit here, that will allow you to do shaders, visual effects to each frame of a video source coming out of the machine. So you have really, really fine-grained control. You can program it however you want to program it. You can do some absolutely incredible effects with shaders. Shaders are actually what all these big AAA games use for their effects -- things on fire and all that. A lot of those come from the shader stage of how a GPU renders things. So we're just going to harness that last stage and apply those effects to the video that's coming out of OBS.

JASON: Yeah. And this is something that you can -- I believe you were prepared for a quick sneak peek of the sort of thing we're going after.

MARK: Yeah, certainly. What I like to do is tie my shaders up to redeems on my channel. So one of my redeems is Macintosh classic. So someone redeems it, and now I look like -- and this is probably going to absolutely kill our already almost dead bit rate, but I look like an old, old video on a Mac. I'm seeing it's extremely blurry here, but that's because of network issues, I think. It looks a lot better in person. You should see my arm right now. It's pretty amazing.

JASON: (Laughter) Amazing. Amazing. Yeah, 160p. We're rapidly approaching potato quality. But it's in the name of fun. So we shall potate, and we shall learn.

MARK: Yes, absolutely.

JASON: So honestly, looking at what you just did, my first instinct is, well, I can't do that.

MARK: Sorry, your first instinct was what?

JASON: My first instinct is I can't do that. How do I do that?

MARK: Oh, you easily can do that. It's not rocket science kind of stuff. It's actually fairly straightforward. So today we're going to walk you through two shaders. We're going to write two different shaders. One is going to be Jason's first baby shader. Just a really simple shader where we're going to learn a little bit about color channels and how we can kind of move things around in the canvas of the video. The second shader we're going to write is an ASCII shader. I have a redeem, but I won't do it because it lasts two and a half minutes. It basically turns you into ASCII art. You can color the ASCII green or amber like a terminal, and it gives a pretty cool effect. It's a fairly complex shader built of a lot of simple parts. So it's a really good example where we're going to kind of step through what are the simple parts and how we think about each of these steps. Once you get that down pat, it's really not too difficult. The thing about shaders on GPUs, they can only do very, very simple math. So a lot of times, the challenge is trying to take something that seems very complicated and simplify it. So the actual code when you see it isn't that crazy. We're not doing like huge linear algebra kind of things. We're doing super simple math. So that's kind of -- you got to go from complex to simple, which is a skill in and of itself. That can be a challenge too. But the actual programming itself isn't too difficult.

JASON: Mm-hmm, yeah. So I'm super excited to learn. I'm also very eager to try some things. Just in case things blow up, I want to see how far we get before the whole stream self-destructs.

MARK: Absolutely.

JASON: So I'm going to take us over into pair programming mode. Everybody bear with me because we might have a little chaos while I get this scene set up, but let's try. Okay. I think we're -- I think that just worked.

MARK: That seemed pretty smooth.

JASON: Hey, look at that. Okay. So here's what I have. I have my -- can you explain what this is? You had me set this up beforehand, which was wise because it took a little time.

MARK: Exactly. So what we're doing here, OBS can run as like it's your base app in Windows. Or you can do a standalone install of OBS. So what we did here, I talked to Jason last week, and we got a second version of OBS running as sort of standalone on his machine, which you can kind of think of as a sandbox. One of the things about shaders is we're going to be taking code and sending it straight to Jason's GPU. You can imagine that if you mess that up, you can very easily crash OBS. So early on when I was doing this kind of stuff on stream, I'd be making edits on my shaders directly on my stream, and we would see the blue screen with the guy with his head in his hands multiple times a stream because I crashed OBS, right. In this way, because we're using portable mode, we'll be able -- if it crashes, the stream stays live, and then Jason just has to fire that version of OBS back up.

JASON: We shall see. (Laughter) The computer did just slip out a little note that was like, help me. (Laughter) Okay. So what I have here then is a mostly blank install of OBS. We did a couple things ahead of time just to make sure that it would work on my computer. But I actually kind of forgot what they all were. So it is as if --

MARK: Yeah, so we installed a couple of plug-ins. We installed the move transition plug-in. So if we get to it, we can start to animate some things. And we also installed Exeldro's fork of the shader filter plug-in, which is what will allow us to provide shader code to the GPU.

JASON: Okay.

MARK: So if you go ahead, if you just for right now right click on scene, the blue highlighted scene, and just go down to filters, which should be towards the bottom there. If you click the plus button, we just want to make sure that we see move action, move source, move value, and that we also see a user defined shader at the bottom. So we've got all of those. So we're good. We're good to go.

JASON: Okay.

MARK: I had sent you a link to a GitHub repo. Were you able to grab that? That's where we're going to put our code.

JASON: Okay. So I'm going to --

MARK: And I don't know if you use WSL or WSL2. I would not recommend doing that here. I would make sure you grab the GitHub repo in Windows. Otherwise, OBS is going to have a hard time finding our shaders when the time comes.

JASON: Okay. I mean, to be completely honest, I don't do anything on Windows. So I don't know what any word you just said means, actually. (Laughter)

MARK: Okay. No worries. No worries then. Absolutely no worries. Why don't you just make a folder then on your desktop. We'll just do some copy/paste stuff here. I think that's going to be the easiest. So make a new folder on your desktop. Just call it shaders. And do you have VSCode installed on this copy of Windows?

JASON: Great question. Visual Studio Code. Apparently I do not.

MARK: Okay. Well, maybe get that installed real quick.

JASON: Okay. Download now starting. Oh, thank you for cooperating, internet. We'll see. I did manage to crash my scenes, but we can fix that real quick. Okay.

MARK: Yeah, this sort of work is not for the faint of heart when you're on stream. We've had some pretty humorous moments on my stream. There was a great one where someone clipped it where I just go "oh, no, did I crash the stream," and halfway through saying "stream," the blue screen comes up. (Laughter)

JASON: Okay. So we are extracting VSCode. I could have sworn I had it. It's probably somewhere. And now we're launching it. Okay.

MARK: Yeah, let's just open that directory in VSCode as a workspace.

JASON: Okay. I'm going to open folder. I'm going to my desktop. I'm getting my shaders.

MARK: Yes. Perfect.

JASON: Yes, I do trust -- I am the author. That's me.

MARK: I am the author, yeah. Okay. Let's make a new file. Just call it, like, firstshader.shader.

JASON: Okay.

MARK: There we go.

JASON: What? Create file. Okay. Okay. All right. There we go.

MARK: And what I would do is down at the bottom where you can select the language that the file recognizes, it should say something like shader lab. Just type in HLSL. And then you're going to get some nice syntax highlighting. The shaders in OBS use a language called HLSL, which stands for High-Level Shading Language. There's also apparently something called Shader Lab that uses something slightly different. So if you go back to the repo now, there should be a first --

JASON: Where did it go?

MARK: Uh-oh.

JASON: Okay, well.

MARK: I think I see it in the background there.

JASON: Aha.

MARK: So click on view code.

JASON: View code.

MARK: Then just click on the first shader.shader file.

JASON: Okay.

MARK: And maybe click on raw or copy/paste it. We're going to paste that sample code in. This is just kind of a getting started. So this is the most basic shader you can have here. Go back to the GitHub repo and look at that readme file. Just click back on the LWJ shaders. And click on Exeldro's shader filter fork there. If you scroll down, this is going to start to introduce these shader files, how things are installed, and if you scroll down a little bit further, keep going, right here, the standard -- oh, scroll up a little bit. The standard parameters. So these are all of the things that automatically get passed into your shader that you don't have to worry about. So image is the frame of the video that's currently being processed.

JASON: Okay.

MARK: Elapsed time is a value you can grab that tells you how many seconds it's been since the shader was initiated. So you can kind of do animations and things.

JASON: Okay.

MARK: There's a lot of these different inputs that go along here. So I'll be talking about some of these as we type all this stuff in. So let's go back to our shader. And what we've got here, there's a main image function. This is what the shader jumps into. The shader looks for main image, and it's going to pass the vertex data, which is basically your details about your frame and the current location that the GPU is processing right now. Then you're going to return a float4, which is just a vector of four floats which represent RGB and A for that pixel.

JASON: Okay. Okay.

MARK: So being a pixel shader, this shader modifies one pixel at a time. You might look at that going that seems really, really inefficient. But what you've got to remember is there's like probably 7,000 or 8,000 pixel shader cores on your GPU. So what happens when a frame comes in, it just gets distributed to all these different cores, but those cores can't talk to each other. So all you know is your current pixel and what the whole frame is before any changes were made to it. You actually can't access what other pixels are doing with the shader, which can get a little bit tricky in terms of how you want to think about things. So just something to keep in the back of your mind as we go through this, and I'll talk about some gotchas as we --
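For reference, the entry point Mark is describing can be sketched as a minimal pass-through shader. (This is a sketch following the obs-shaderfilter conventions; `image`, `textureSampler`, and the `VertData` struct are supplied by the plug-in, and the exact boilerplate can vary between plug-in versions.)

```hlsl
// Minimal pass-through pixel shader for the OBS shader filter plug-in.
// `image` (the current frame), `textureSampler`, and VertData are
// provided by the filter; this function runs once per pixel, in parallel.
float4 mainImage(VertData v_in) : TARGET
{
    // Sample the frame at this pixel's normalized (0..1) UV coordinate
    // and return it unchanged as an RGBA float4.
    return image.Sample(textureSampler, v_in.uv);
}
```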

JASON: I'm going to bring up real quickly that we have done a little bit with shaders in the past. We did that with -- where is it? Build your own shader. Yeah, we did it with Char Stiles. It had a similar kind of thing, the core concept being -- I'm just going to repeat this back to make sure I understand it and maybe reinforce it for anybody hearing it the first time. The way a shader works, it starts at the top left of your screen, and it moves pixel by pixel. You are able to, like, basically draw each pixel in a custom way. Then you have to use math basically. This is where I started seeing things I hadn't seen since high school. Tangents, cosines, things like that. You can use that and your current X, Y position on the screen to determine where you are and make different pictures. So in the case of what Char does, she's creating images that react to audio or whatever she's working on. And in our case, what we're going to try to do today is we're going to try to take various video inputs and modify them on the fly, which is where it starts to get really wild to me, knowing that once you've kind of learned this baseline of what is a shader and how can you manipulate something using shader code, it can apply to nothing. You can just draw cool things with it. Or it can apply to literally any input. Image input, audio input, video input. And you can just kind of -- it can be realtime. It can be recorded. It can be whatever you want. Once you've learned shaders, you really do have kind of a superpower in terms of what you can do creatively on computers.

MARK: Absolutely. Well, I would challenge you to just change one little piece of your thinking on that. When you said you start at the upper left corner and work your way across, in the case of GPU shaders, you don't know where you start or end. Because it's completely randomly distributed across the GPU. So one GPU core might be processing a pixel in the middle. One might be processing on an edge. So you can't go look at what your neighbor is doing.

JASON: Right. You get no reference. You have no frame of reference. You're looking at one pixel at a time.

MARK: Correct.

JASON: Okay. So I thought it like was always scanning just because I thought --

MARK: No, it's not sequential, which actually can run into issues. If it was sequential, you could peek back at the last pixel, but you can't do that, which makes certain things you'd want to do much, much more difficult.

JASON: You're operating in a complete vacuum. Each pixel needs to build its own vision of the universe.

MARK: Exactly. Exactly. Which seems incredibly inefficient until you realize that you're getting your efficiency by throwing it at thousands and thousands of operating cores to do this. So that's part of what's so incredible about these GPUs.

JASON: That's also why we're limited to simple math, right. You can basically do, what, calculus?

MARK: I mean, super simple stuff. Yeah. Algebra basically.

JASON: Yes, yes. Exactly. I think that is --

MARK: And yes, making up for it in volume. Abridgewater said make up for it in volume. That's exactly what we're doing here.

JASON: Yes. So I understand the core of what's happening. I'm going to repeat this back. The main image is the magic name for a function.

MARK: It's the entry point, yep.

JASON: And you need to create something called main image that's like export default in JavaScript, you know, Node land or whatever. And it gives us the current pixel we're dealing with, is what's passed in. Then we need to return a vec4, which is four float numbers that represent --

MARK: Or float4.

JASON: Color and alpha. So RGBA.

MARK: Yes, exactly.

JASON: Okay, got it.

MARK: So just looking at this blind now -- well, darn it. I put the comment in as to what it was doing. I was going to say look at this and tell me what you think it's doing. Essentially image is one of those automatic inputs that our shader filter gives us, which is the current frame we're looking at. It has a function on it called sample, which means we're going to sample from image. There's a little thing called texture samplers. Samplers get kind of complicated. We won't really go over them. What it might do is if you don't land perfectly on a pixel, it might interpolate the color between two pixels. You can define how to interpolate. Or if you go off the edge of your frame, do you want to pick back up at the bottom and continue? Do you want to use the top-most value that you just missed? You know, there's different ways you can clamp the edges of the image as well, which is what you define in the sampler. Texture sampler is the most basic sampler that says linearly interpolate the pixel and clamp on the side. So if I tried to go to the point negative one for X, it would reset that back to zero. So you'd always get the edge pixel if you try to go off the edge. So we pass in our sampler. Then you'll notice we have this v_in.uv. It's in what we call a normalized space. So we're not talking about pixels right now. If we have our frame, left to right goes zero to one. Top to bottom goes zero to one. So the values in UV are going to be between zero and one, where 0.5, 0.5 would be the center of the image. So it's a relative mapping. It's not pixel by pixel.

JASON: Which is actually great because that means that I don't have to care about resolution. I just have to care about -- I guess I have to think about things like aspect ratio, but I don't need to worry about like, oh, if I make this for a 720p and later I upgrade my stream and it's 1080, I don't have to rewrite all my shader code because it's using the relative size.

MARK: Exactly, exactly. That said, it's going to cause us some difficulty later when we're trying to map a certain size to a certain size character for ASCII art. So we need a way to convert between. A lot of times we actually convert that UV coordinate to a pixel coordinate. So that's the first code we're going to write here. What we're going to try to do here, we're going to just talk about colors and stuff. We're going to make a little plug-in where we're going to have an offset value that as you increase that offset value, the red, the green, and the blue channels will move away from each other. So you'll get a real trippy rainbow effect. You'll see how it comes together. So the first thing you want to do, we're going to make a new float2, which is going to be our coordinate. So you're going to type float2. It's a typed language. So for everything we specify, we have to say whether it's an int or a float2 or whatever. Let's call it coord. And we want to figure out what our coordinate is based on v_in.uv. There's another input we have called uv_size. And uv_size is 1280 by 720 or whatever your actual pixel size is. So it's a float2. If we just multiply that by v_in.uv, it's going to do element-by-element multiplication. If we get 0.5, 0.5 --

JASON: For easy math, 100 by 100 image, 0.5, 0.5, you end up at 50/50, which is center.

MARK: Exactly. Exactly.

JASON: Got it. Okay.
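The UV-to-pixel conversion they just walked through might look like this in the shader (variable names follow the transcript; `uv_size` is one of the filter's standard inputs):

```hlsl
// Convert this pixel's normalized UV coordinate (0..1 on each axis)
// into pixel coordinates. uv_size is the source size in pixels,
// e.g. (1280, 720); the multiply is element-by-element.
float2 coord = v_in.uv * uv_size;
// e.g. for a 100x100 source, uv (0.5, 0.5) becomes pixel (50, 50)
```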

MARK: Is shader a language, or is that what the language -- so Robert, shaders are an operation that gets applied in a GPU to an image. The language we use to describe what shader operations we want to do is a language called HLSL, or High-Level Shading Language. There's also GLSL, and there's other ones as well. They all get -- it gets complicated. Okay. So this coord will give you your X/Y coordinate on a frame. Let's define a float now, which is how far we push the colors away. We're going to think about this in pixel space. So let's say 30 pixels.

JASON: Sorry, say that again.

MARK: We're going to think about this in pixel space. So let's just say that the offset is 30 pixels.

JASON: 30 pixels. Okay.

MARK: Because 30 will tend to look good. And you'll want to do a "point zero" because it's a float. Okay. So now we want to figure out, given that we're in the center of the image, what we're saying is we want to sample the red pixel 30 pixels over, the green 30 pixels the other way, and the blue maybe 30 pixels up. So what we're going to do is we're in our location, we want to sample offset, and combine those pixels to what our overall output pixel is.

JASON: Oh, I see what we're doing. Okay. I don't know how to do it, but I know where you're heading. (Laughter)

MARK: Let's make the coordinates where we want to sample. So coordinate is always going to be a float2. So we'll make our first float2. Let's call this one r_uv. So this will be our red. We're going to convert it back into UV space. If you look at the bottom, when we sample the image, we have to sample it in UV space. We can't sample it in pixel space. So r_uv. If we think about this a little bit, we're going to have our current coordinate. Let's say we're right in the middle, 50/50. Then let's just say r_uv is going to be over in the X direction 50 pixels or 30 pixels. Whatever our offset is.

JASON: Right.

MARK: So what would that be? If you wanted to take your coordinate and add that offset to just the X?

JASON: So I remember having done this, but I have a hard time with the sets. Because I don't -- this isn't a language I'm familiar with. So I know what we have to do is we basically need to create a new UV value that is like X plus 30 and the Y. But I don't remember what the mechanics are that make that work.

MARK: Yes, exactly. Okay. So let's start. We're going to start with our coordinate. We know that's our source of truth. So coord. Then we're going to add to it. But coordinate is a float2, so the only thing we can add to it is another float2. So we can inline just say float2 with parentheses. Just like a constructor.

JASON: Oh, then we do offset zero. Right?

MARK: Exactly.

JASON: Okay, okay. I knew we were going to end up somewhere like this.

MARK: Yep. But now because we've made this in pixel land, we want it in UV land. Now we have to go back and divide by UV size. So wrap that whole thing in parentheses.

JASON: And I just made this a float because Farshid in the chat said it needed to be.

MARK: Good job, Farshid. Then at the end of that line, you're going to divide by uv_size. That's giving us a point that we're going to sample our red channel from. And let's go ahead and copy that line and paste it twice. Then let's do one for G and for B. For G, let's set the X offset to negative offset. Because you want to go the other direction. And then for B --

JASON: We want to flip it, right?

MARK: Well, we would want to do -- yep, exactly. I would do negative offset because your face is kind of towards the bottom of the panel. So we want to shift it up. Otherwise it just looks a little funny.

JASON: Gotcha.
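The three sample coordinates they just built might look like this (a sketch; `coord` is the pixel coordinate computed earlier, and 30.0 is the hard-coded offset chosen in the conversation):

```hlsl
float offset = 30.0; // shift distance in pixels; 30 tends to look good

// One sample coordinate per channel, offset in pixel space, then
// divided by uv_size to get back into UV space for sampling.
float2 r_uv = (coord + float2(offset, 0.0)) / uv_size;  // red: 30px right
float2 g_uv = (coord + float2(-offset, 0.0)) / uv_size; // green: 30px left
float2 b_uv = (coord + float2(0.0, -offset)) / uv_size; // blue: 30px up
```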

MARK: Okay. So now we have our coordinates. We want to sample them all. We're going to make three float4s, one for each sample. I would just call them R, G, and B.

JASON: Okay.

MARK: It's that image.sample you see right down below. So you've got your texture sampler, which is going to stay the same. But instead of sampling it from v_in.uv, you're going to sample it from your r_uv value. And you're going to do the same for G. There we go. So now we have three different points sampled. Obviously, we don't want to return the original point. We want to return some combination of these. So in your return statement, instead of returning image.sample, we're going to make a new float4. So delete the image.sample part. Then we've got to pass it four values. So what do you think we're going to pass for red? It's not going to quite be that. R is a float4. We only want the red channel from our red sample.

JASON: So like the first channel. But I don't know how to get that.

MARK: Yep. So it's super easy because it's just a color. It's r.r.

JASON: Oh! Look at that.

MARK: Yeah, it's really cool. Then what I like to do for this particular look, I like to add the alpha channels of all three of them together and divide by three. So you get like an average alpha from them. So just do parentheses. Hey, Exeldro. Good to see you here.

JASON: Then divided by -- oh, boy. Oh, my god.

MARK: Make sure you get the decimal in there.

JASON: And I think I have an extra parenthesis in here. So now what has happened is we have figured out -- okay. So we figured out what the actual size of our canvas is. We have decided we're going to do an offset of 30. We grabbed our R values by looking at -- not the current pixel, but moving over 30 and grabbing that value of red. Then doing the same thing with the G and B in different directions. So we picked a point 30 pixels off and used this image sampler, this texture sampler, to actually go and get the value of how red the pixel 30 pixels to the left of me is.

MARK: Exactly.

JASON: And same for green and blue. And now we're rebuilding the pixel value we're returning this float for by saying give me the red channel from the red sample, the green channel from the green sample, the blue channel from the blue sample, and the average alpha of all three.

MARK: Exactly.

JASON: Okay. Okay. I'm with you.

MARK: And this is the process we go through when we build shaders a lot of times. Get your coordinate system, sample the things you need to sample, transform them in some way, and then return the resulting color from that transformation. It's almost always that step-by-step process. So let's see if we did it right. Let's go and save this file, obviously.
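Putting the steps Mark lists together, the finished first shader is roughly the following. (This is a sketch reconstructed from the conversation, following obs-shaderfilter conventions; exact plug-in boilerplate may differ.)

```hlsl
// RGB channel-split shader: samples each color channel from a
// location shifted a different direction, giving a "trippy" fringe.
float4 mainImage(VertData v_in) : TARGET
{
    // 1. Coordinate system: convert normalized UVs to pixel coordinates.
    float2 coord = v_in.uv * uv_size;
    float offset = 30.0;

    // 2. Sample coordinates: push each channel a different direction,
    //    then convert back to UV space.
    float2 r_uv = (coord + float2(offset, 0.0)) / uv_size;
    float2 g_uv = (coord + float2(-offset, 0.0)) / uv_size;
    float2 b_uv = (coord + float2(0.0, -offset)) / uv_size;

    // 3. Sample the frame at each shifted location.
    float4 r = image.Sample(textureSampler, r_uv);
    float4 g = image.Sample(textureSampler, g_uv);
    float4 b = image.Sample(textureSampler, b_uv);

    // 4. Recombine: one channel from each sample, plus the averaged alpha.
    return float4(r.r, g.g, b.b, (r.a + g.a + b.a) / 3.0);
}
```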

JASON: Okay.

MARK: Then we'll go back into OBS. And let's -- can we add, like -- I think you're able to add your video source or something.

JASON: Should be able to. Let's see how this goes. We're going to grab my camera. Don't explode, don't explode, don't explode. Hello?

MARK: Uh-oh. You know what, you can even grab that if you want.

JASON: Yeah, let's use this one because this seems to be working. And we'll just kind of see what happens here. Okay. So a little bit of inception. Very fun.

MARK: Yep. And then go ahead and right click on -- or even just click on video capture device. Then click the filters button above. However you want to do it. Then we're going to add a new effect filter. User-defined shader.

JASON: Okay.

MARK: Then you're going to use -- sorry, load shader from text file. Exactly. And then browse for that file.

JASON: Oh, look at it. We did it! Y'all, that's dope.

MARK: Isn't that cool?

JASON: And this is the sort of thing that's also really fun because from the little bit I've been able to do with shaders in the past, I know that we could use time, for example, to make this kind of like shift a little as time goes on. We could say for every 30 seconds, I want you to -- or every second, I guess, increase or decrease the pixel offset based on where we are.

MARK: Not only that, but with move value, we're going to make one quick change to this in a minute that'll let you give a slider that you can slide that offset and set it. Here's a great thing in move value where you can take your mic level output and use it to change a slider in your shader.

JASON: (Laughter)

MARK: So as you talk, you can have your head explode in rainbow colors. So let's go quickly back. We're going to do an input quick because we're going to need to know inputs in order to do the main show here, which we're going to start in a minute.

JASON: Okay.

MARK: So go to the very top of your file.

JASON: Yes, like above?

MARK: Return down a few lines. You're going to make what's called a uniform float. If you put uniform in front of something, it's basically saying we're going to use this as an input to the shader. And name it offset, lower case O. Before you do your semicolon, you're going to do an open less-than sign. Yeah, angle bracket.

JASON: We call them pointy boys.

MARK: Pointy boys, there you go. Then do a close one on the next line. Then set that equal to 30. So that'll be your default.

JASON: Can you explain what this is that we just did?

MARK: We're going to put some settings in there.

JASON: Gotcha. Okay.

MARK: So these are called -- oh, my gosh. My brain just went dead. They're not tags. I'll remember it later. You'll hear me shout it out. But there's a specific term. The first thing you're going to do is give the thing a label. You're going to say string label equals, and then in quotes, offset. And maybe do it with a capital O so it looks pretty. Then you'll do a semicolon. Then on the next line you're going to do string. It's not generic. It's specific to how OBS handles these things. This is going to be widget_type. And it'll be in quotes, slider, all lower case. Annotations, Exeldro, yes. Annotations. Then we're going to set a minimum and a maximum. So you'll do a float minimum. I would do like minus 100. Then you can have them cross back the other direction. And then a maximum of 100. So these are putting bounds on what that field can do. And finally, let's do a step size of 0.1. So you do float step.

JASON: Do you need a leading zero?

MARK: I think .1 should work. Then get rid of your offset in your main image function. Otherwise they're going to collide.
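Written out, the annotated uniform being dictated here might look like this sketch (annotation keys follow obs-shaderfilter's conventions):

```hlsl
// "uniform" makes this an input to the shader; the pointy-boy
// block holds the UI annotations; the trailing value is the default.
uniform float offset<
    string label = "Offset";         // text shown in the filter UI
    string widget_type = "slider";   // render as a draggable slider
    float minimum = -100.0;          // lower bound
    float maximum = 100.0;           // upper bound
    float step = 0.1;                // slider step size
> = 30.0;
```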

JASON: Can you set defaults in here as well?

MARK: Well, you have set the default where you said offset.

JASON: Oh, this is the default.

MARK: Where you set it equal to 30, yep. So go ahead and save that file. Now go back to OBS. Now if you click on your video capture device and filters again and then click on your user-defined shader, now if you scroll down, you might have to hit reload effect. So now you can drag that.

JASON: Look at it go. This is amazing.

MARK: So it's just a super-simple little deal. Yeah. So now you could use things like move value and things to have an animation of this. You could have it based on sound from a mic or an audio source. There's a ton of stuff that you can do with this now thanks to Exeldro's move transition plug-in. It's just incredible. It lets me do so many awesome animation-y kind of things. Okay. Well, I'm guessing what you really want to do here is turn yourself into hacker man, right.

JASON: Of course.

MARK: This is the goal. This is the goal. Let me see if I can -- oh, you know what, I might not be able to make this work. What do we want to do with hacker man? Let's kind of talk about what the effect kind of physically needs to do in order to give that ASCII art effect.

JASON: By the way, I completely forgot to tell everybody. We've got captions and other goodies going on, on the site. If you head over to lwj.dev, we have Rachel here with us today doing live captions. Because I'm on my other machine, the closed caption button isn't working today. But it is available on the stream here, and the captions will be available, as they always are, on the recording on the site. And that is made possible through the support of our sponsors, Netlify, Nx, New Relic and Pluralsight, all of whom are kicking in to make this show more accessible to more people, which I very, very much appreciate. Okay. I realized I forgot to do that. I wanted to make sure I did that before I forgot.

MARK: Yep, no worries at all. No worries at all. Okay. So the idea behind ASCII art, if you look at especially like old-school terminal fonts, there's a specific number of dots. They were these monospace fonts that might be 8 pixels by 12 pixels. There was a certain amount of that space filled with font and a certain amount of that space filled with white space. So something like a period took up very little of the space. It was a very low-density character. Whereas the at sign takes up a ton of pixels. So it's a much higher-density kind of character. So the idea behind ASCII art is to map the sort of apparent brightness of a pixel or a group of pixels to the density of a character and then replace it with that character. Right? So things like my hair that are dark might be like speckled with periods, whereas the shine on my forehead here might be a bunch of hashtags. As that stuff sort of comes together, you can get an idea of the shape that you're looking at, and we want to build a shader that's going to do all of this in realtime. Which is going to be fun. So the first thing that we got to think about doing is how do we actually do that? The first thing we want to do is we don't want to look at our individual pixels. If we put a character in each individual pixel, you wouldn't be able to tell, right? We want to take this down to much, much lower resolution. So we want to take our whole grid, like if you look at the box I'm in here, and think about splitting it into a grid that might be 20 by 20, right, if this was square. Then we're going to kind of sample each of those 400 cells, get a value for them, map them to a character, and output that. So that's the general idea. When you start thinking about ASCII art, it sounds so difficult. But when you actually look at what it's doing, it's really not that difficult of a process thought-wise.

JASON: Yeah, the idea is straightforward. The execution is probably straightforward once you've done it once, but it's kind of mystifying. I know what we need to do, right? We're going to build a grid. We know the grid is going to be, say, if it's a monospaced font, I don't know, 10 by 10 pixels. Then we need to grab groups of 10 by 10 pixels, get the average brightness, and figure out a character map. So I understand it, and then I start thinking about it and I'm full cross-eyed, no idea. (Laughter)

MARK: Yep, yep. Well, we're going to get you there. It turns out, it doesn't take much more than what we've already done.

JASON: Oh, excellent.

MARK: The hard parts are behind you. So let's go and make a new file in Visual Studio Code here. We're going to call this ASCII.shader. And again, you're going to want to switch it to HLSL so that the syntax highlighting looks okay.

JASON: Right, right, right. Okay. Over here. There we go. HLSL. Okay. And then I need --

MARK: And why don't you just copy your first shader and just paste it in because we're actually going to make some inputs right away, and we're going to -- we'll delete out everything that's inside of the main image function to start. But leave your open curly boy there.

JASON: Open curly boy.

MARK: Yeah, okay. And leave the offset for right now because we're going to make a few of these. Let's start. Rename your offset one to be scale. So we're going to have some sort of scaling factor we're going to use. And actually, why don't you leave it as a capital. I tend for my inputs to have them capitalized just so I know that they're inputs. I only did the offset lower case because that's what we'd already written before.

JASON: Oh, I got you.

MARK: Then the minimum on that, let's set the minimum to .1 and the maximum to 20.

JASON: Okay.

MARK: Then we'll do a step size of .1 and set the default to 1.

JASON: Okay.

MARK: And so the one thing we got to consider here is HLSL, your GPU cores, they know nothing about fonts. They just don't exist in this world. So what we're going to use in that GitHub repo I gave you, I gave you some images where we're going to sample characters from those images. So if you go take a look at textures, and if you just click on one of them, I think they'll give you a preview of it. Yeah, you can see how they get denser as we go along. So what we're going to do is we're going to sample from some point along that spectrum to grab the pixels that we'll use for that font. You'll probably want to grab both of them because they both do give pretty different looks.

JASON: Okay. So I'm just going to put these into the shaders?

MARK: Yeah, you can just put it in shaders. Just somewhere you can access them.

JASON: Okay. Let's get the other one. Okay.

MARK: There you go. So one of the things that we need to know in order to do this sampling is how many characters wide is my image file, like how many do I have across. How many pixels wide is the image file? Because that tells me where I sample and what size I need to sample a character. So we're going to make a few integer-based inputs. So if you go ahead and copy the scale -- no, the uniform.

JASON: Oh, inputs.

MARK: Yep. Because we're going to want to be able to specify those. So copy that.

JASON: Whoa, whoa, whoa. What button am I pushing?

MARK: You're good.

JASON: Okay.

MARK: And I'll just do it above -- well, yeah, you can do it below scale. So do three of them. And make them integers. And the first one here, we'll call this font texture height. Sorry. The first one we'll do width. Then height. So capital F for font. I would do font texture width. Yep. Then give it a nice label. Unfortunately, your minimum, maximums and stuff will have to change to integers.

JASON: Okay.

MARK: There we go. Then a minimum on that might be 1 pixel. Then a maximum might be 2,000 pixels, if you had a ton of characters you were using. Then the step size, we'll just do 1, which I guess is default. Then do a default of 120. A lot of times fonts are 12 characters -- 12 pixels wide, and you have 10 of them. Then we'll do font texture height. Then make that go from like 1 to 500.

JASON: 500.

MARK: Then maybe set the default to 10.

JASON: Okay.

MARK: Then finally, we're going to do the font texture num chars, like the number of characters we have in the texture.

JASON: There we go.

MARK: Then I would set that ranging from like 2 to 256. You have to have at least 2, otherwise you don't have any difference in color. Then maybe set the default to 12. There was an old file I was using that was 10 pixels by 120 pixels and had 12 characters in it. That's why the defaults are what they are.
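Put together, the three integer inputs just described might look like this sketch (names and defaults follow the dictation; the exact identifier casing is a guess):

```hlsl
// Integer inputs describing the font texture, so the shader knows
// where and how big to sample each character. Defaults match the
// old 120x10-pixel, 12-character file Mark mentions.
uniform int FontTextureWidth<
    string label = "Font Texture Width";  // total image width in pixels
    string widget_type = "slider";
    int minimum = 1;
    int maximum = 2000;
    int step = 1;
> = 120;

uniform int FontTextureHeight<
    string label = "Font Texture Height"; // image height in pixels
    string widget_type = "slider";
    int minimum = 1;
    int maximum = 500;
    int step = 1;
> = 10;

uniform int FontTextureNumChars<
    string label = "Font Texture Num Chars"; // characters in the strip
    string widget_type = "slider";
    int minimum = 2;     // need at least 2 for any contrast
    int maximum = 256;
    int step = 1;
> = 12;
```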

JASON: Okay.

MARK: So we're not going to use these quite yet. Actually, we're going to use them pretty much right off the bat. Some of them. So now let's go down into our main image. Again, one of the first things we always do is grab our coordinate. So that's the float2.

JASON: Equals -- uh, it was v_in.uv, divided by -- no, no, that's not right. Times -- you're going to have to remind me.

MARK: It's uv_size.

JASON: Oh, I'm already out of place here.

MARK: No, you were right. It was times. You can do it that way too. Okay. So that's going to give you your current coordinate. We're going to use that later. Now, we want to take our character height and convert it to a float. Because right now it's an integer because the image has an integer number of pixels. But we're going to be doing everything in float math. So let's make something called char height. And it's just float. Then in parentheses, you're going to pass it in font texture height.

JASON: Okay. And I'm going to assume we're going to do this again.

MARK: Well, we're going to also make a character width, but we don't want the whole texture width. We want the character width. That's going to tell us how wide things are. So we got to do just a little bit of simple math.

JASON: So this one is going to be the width divided by the character -- the number of characters?

MARK: Exactly. But you got to convert them both to floats. And you're off the bottom of the screen a little bit with the code, just FYI.

JASON: Oh, let me bring that up. Then it's float font number characters. Like that?

MARK: There you go. So that's going to give you the width of one of your characters. And now, what did I put the scaling factor in here for? This is planning for the future. Just because your font file has, like, 10 pixel by 10 pixel characters, doesn't mean that's the resolution you necessarily want it to display as. You might want them to be 20 pixels by 20 pixels because you want it to be super chunky. Or you might want it to be 5 pixels by 5 pixels. So this scaling value is going to let us change that scale regardless of what our font resolution is.

JASON: Okay.

MARK: So that float up there at the top. So we're going to do a quick calculation of what our grid size is going to be, which is now going to be based on our character height and width.

JASON: So a float2.

MARK: Yep.

JASON: Yeah, I'm getting this.

MARK: You are, you are. And we'll call it grid. I just call it grid.

JASON: And now this one is going to be float2, and I'm going to take the char height times the scale.

MARK: I would start with width.

JASON: Oh, right, right, right. Okay. So take my width.

MARK: So X, Y kind of.

JASON: Then take my scale, because that's already a float.

MARK: Exactly.

JASON: Then I can do the same thing for the height times scale. And that's the grid.
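The setup lines written so far might look like this sketch (inside mainImage; variable names are illustrative, and Scale is the float input defined earlier):

```hlsl
// Coordinate, character dimensions, and the sampling grid.
float2 coord = v_in.uv * uv_size;             // current pixel coordinate
float charHeight = float(FontTextureHeight);  // int input -> float math
float charWidth  = float(FontTextureWidth) / float(FontTextureNumChars);
float2 grid = float2(charWidth * Scale, charHeight * Scale);
```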

MARK: Perfect. So now we want to get a uv coordinate that we're going to sample. So just imagine if my frame here was a 3 by 3. We had a vertical column right here. The grid is telling you the width of each of these blocks, one-third of the screen. So what we need to do is we need to say we want to sample not at the grid lines but halfway between two grid lines.

JASON: Okay.

MARK: So imagine 3 by 3, and my head is basically the middle block. I'm doing a little Madonna dance here.

JASON: We're Voguing.

MARK: Yeah, yeah. Vogue, exactly. The point we want to sample isn't like this upper corner here. The point we'd want to sample would be like the center of my nose, which is the middle.

JASON: Oh, oh, okay. Now I understand. I thought you were talking about taking the average. Yeah, I'm with you.

MARK: Yeah, yeah. No, no, it's simpler. Way simpler. So we want to pick out what that UV sampling point is now. So let's make a new float2 and call it UV. We'll set that equal to a float2. And this has a little calculation. So do float2 in parentheses. Now hit return. You can break it up over two lines. So we're going to go back and do the equivalent of, like, modulus math a little bit here, which I know is totally painful. It'll end up not being too hard. So the idea here is in our first, our X value, we want to take our coordinate for X, and we want to divide it by our grid X, which will give us how many grid points over we are. Does that kind of make sense?

JASON: Right. Am I doing this correctly?

MARK: Exactly. But we don't want a value of like 3.6 grid points over. We just want to know that we're at grid point 3. So we want to floor that value.

JASON: Okay. And we do that in the same way I would do it in JavaScript with like one of these?

MARK: Exactly. You just wrap that in a floor.

JASON: Okay.

MARK: And now, because we know that we're three grid points over, the next step is how big is a grid point?

JASON: So we need the UV size divided by the grid?

MARK: It's simpler. It's just times grid.x.

JASON: Oh. Times grid.x.

MARK: It looks like we're dividing by grid and multiplying by grid, which would just give us back that coordinate. But because we're flooring that first part of the calculation, that'll move us from like partially over on my forehead here to the start of the grid point here. And now we got to add half of the character width back in to get us back to the center.

JASON: Okay. And I do that with char width divided by two.

MARK: Yep.

JASON: And it's a float.

MARK: Yep, and it's a comma because we're in the constructor. Then the exact same for Y.

JASON: And for this one, we do Y, and then this is going to be the height.

MARK: Are you adding grid.x times character width there? Or is that a multiply? I can't -- it's a little fuzzy on my screen. After the floor.

JASON: This is multiply. Let me make this a little --

MARK: It's got to be a plus because you're adding. You're just shifting over by that amount.

JASON: This is a plus?

MARK: Yes. Yep. Because we got our start of grid point, and now we're adding half of a grid block to it.

JASON: Wait. Hold on. I'm wrong.

MARK: Oh, no, no. I'm sorry. You're right. I'm a character off. That is a multiply. The next one is a plus. Apologies.

JASON: Great. No, no, that's good. Is this big enough to be clear now?

MARK: Yes, it is. It is. Looks much better. But now this is all in coordinate space, and we want to get it in normalized space. So we just divide it by uv_size.

JASON: And we do all of that here?

MARK: Yep, yep. We divide the whole float.

JASON: And now we're back to --

MARK: A return value across our whole frame.

JASON: Is it this?

MARK: Exactly. So let's see how this works. How would you expect this to look? Before we apply it, how would you expect this to look?

JASON: Because we're not actually using the images yet, I expect this to look like absolute chaos. Because my guess is that we're going to kind of mosaic the image here by pulling weird samples from weird points.

MARK: Okay. Let's see how it looks.

JASON: Okay. All right. So I'm going in. Should I do a new scene for this? Or just turn one off?

MARK: I would just go on that and go to filters. Then you can just change the filter that one is loading. So if you click on user-defined shader and then browse to your new shader file.

JASON: Shader, ASCII, open. Okay. Then we're going to reload.

MARK: And are you getting anything? You're probably not because we probably have an error. So now we get to learn how to debug. So unfortunately, debugging can be a little bit of a pain in the rear. If you go back to OBS and go up to help, in the help menu there's a show -- under log files, you want to show log files. Then double click the bottom one there. Then open it in Notepad. That's fine. Then scroll all the way to the bottom.

JASON: Missing pixel shader.

MARK: Yeah, if we drag that out a little bit here, let's see. What is that saying?

JASON: It's saying I'm doing a redefinition of char height.

MARK: Okay. So you must have two char heights in there somewhere.

JASON: Oh! Look what I've done.

MARK: There you go.

JASON: Okay. Well, in good news -- oops. That's not what I wanted to do. Good, good, good. Okay. Try again. Filters. Reloading.

MARK: And we're still not quite there. There's probably something else. And then you actually have to close that text file down and reload it again.

JASON: Oh, okay.

MARK: Sometimes code, it's not too hard to figure out. Other times, the debugging process through log files can be a major pain. Sometimes it just does not want to agree with you.

JASON: It says missing pixel shader, unexpected token. Okay. So I have an unexpected token somewhere. I have an extra closing round boy. So let's just look around in here for a second. Floor, that's doing what I want it to do. These are correct. That looks correct. Line 43 at the end.

MARK: Oh, yeah, you got another comma. You have an extra comma there.

JASON: Oh, you can't do that? Oh.

MARK: No, it's -- I do it all the time because I live in -- yep, yep. I do the same thing. Plus I do a lot of Python. That's always, you know, best practice in Python, to leave your trailing commas.

JASON: Oh! We pixelated. So, that makes sense because we're simplifying our image by basically replacing all of the stuff in the grid, being replaced with that one pixel because we're going to the grid, figuring out the center pixel, we're then using that center pixel for every pixel in the grid.

MARK: Exactly.
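The cell-snapping they just debugged might look like this sketch (inside mainImage, after coord, grid, charWidth, and charHeight are computed as dictated earlier; names are illustrative):

```hlsl
// Snap to the start of this grid cell, step in half a character to
// reach the cell's center, then divide by uv_size to get back to
// normalized 0..1 UV space. Note: no trailing comma in HLSL.
float2 uv = float2(
    floor(coord.x / grid.x) * grid.x + charWidth / 2.0,
    floor(coord.y / grid.y) * grid.y + charHeight / 2.0
) / uv_size;

// Every pixel in a grid cell returns the cell-center sample,
// which is what produces the pixelated look.
return image.Sample(textureSampler, uv);
```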

JASON: Cool. Okay. So we have now learned how to do an anonymizer.

MARK: Pixelate shader. If you go to the filter screen again, and you click on user-defined shader, you should now be able to grab that scale. You can drag it big, drag it small. You can use Exeldro's stuff to animate it as you change scenes. There's all sorts of stuff you can do with that now. I have some effects. I actually do this with hexagons on my stream, where it doesn't convert it to a square grid space, it converts it to a hexagonal grid space. That's my secrets cover, for when something might be up that I don't want people to see. It wipes hexagons across the screen for me.

JASON: Oh, cool.

MARK: They're showing the pixels through, but in this way it's anonymized.

JASON: Yeah, yeah. And there's a note in the chat that AI is actually really good at depixelating things these days.

MARK: Yeah, the one thing that we found, which is what I tend to do -- I never show anything too, too sensitive on screen, but if you make sure your pixel size is more than two lines of text tall, it can't -- there's not enough information. It doesn't know where a line ends and where a line begins, and you'd be covering like 10 or 12 characters per block. And there's just no way from the density to tell what characters they might be.

JASON: And it is kind of hard to imagine what -- maybe it's possible with AI to decode what this says, but that seems pretty unlikely at this point because of how small -- like, these don't appear to be something that would actually work. But yeah -- oh, blur is easier for AI than pixelation. Yeah, cool. So definitely some things to consider there if you do decide to use this. My preference is to have a second monitor, and I just put all the secret stuff on the second monitor because I have been bitten before. (Laughter)

MARK: Well, don't ask me about the time that I was actually showing my lat and long coordinate of my home office for two hours on a stream in a log that kept going by. We were working on a telemetry data thing for my ski streams. I didn't realize I was literally broadcasting my location to the world for two hours.

JASON: Yeah, I think at some point I've thrown my phone number up. I've thrown up API keys to really expensive stuff. So I had to panic roll those.

MARK: Yep. Okay. So let's move on to the next step.

JASON: Yes.

MARK: The first thing we want to do is rather than just returning that image.sample, let's pull that out as a variable because we're going to have to do some stuff with that color sample that we get.

JASON: Okay. So this is going to be a float 4.

MARK: I would call it col for color. Equals. Then you can add a return call for now so that OBS doesn't complain.

MARK: Holy cow. The Primeagens are coming in.

JASON: Hello. So what we're doing, let me get this filter up. We've built this one that does colors. Here. So this one you can color shift the output to get some cool effects like this. And we are currently working on -- we want to do like a hacker mode with ASCII shaders. So we have figured out how to pixelate. We're sampling the -- we've broken the screen up into a grid. Then we are using some math. Let me pull up the math. We're using the math to figure out our grid size, figuring out the center point of each grid cell, and replacing all of the pixels in that cell with that one point that we've sampled, which gives us this kind of pixelated feel. And the next step, which we have to pull off in about five to ten minutes, is to get the ASCII text into this image. Or into the place of these pixel boxes that we've made.

MARK: Okay. Let's do it. So the first thing we need, we don't want to convert these pixels in terms of color. Remember, I talked about the pixel density relates to sort of the brightness of the pixel. So we need to grab the luminance of those pixels. So let's go up above. We're going to write a new function. Up above our main image, yep. We're going to write -- it's going to be float, and we'll call the function gray for the gray value of it. And its input is going to be a float3.

JASON: And you're going to have to remind me. I don't remember what function syntax looks like in HLSL.

MARK: You're getting it perfect. It's going to be col for color. Then open curly boy and a closed curly boy.

JASON: Open curly boy, closed curly boy.

MARK: Okay. And we're just going to do the standard RGB to luminance calculation. It's going to be col.r times 0.299. Magic numbers. Plus col.g, times 0.587. Plus col.b, times 0.114.

JASON: Now, this is just like somebody figured out how to determine the brightness.

MARK: Yeah, it's been used since like the beginning of computer graphics as a conversion from RGB to luminance. Then real quick what we can do is in your return -- actually, first, we have to call that function. So just say float G, equals gray. Then you're going to feed in col.rgb. We didn't have it send the alpha channel.

JASON: Oh, right, right. We don't need the alpha channel. Got it.

MARK: Then just return. We'll make sure this worked right. Float4, and it's going to be ggg -- g, comma, g, comma, g. Col.a, just to grab that alpha value. Then save it, and let's make sure we should be basically getting what we got before, but it should be black and white, if we did it right.
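The grayscale step might look like this sketch (the weights are the standard Rec. 601 luminance coefficients Mark dictates):

```hlsl
// Standard RGB-to-luminance conversion, placed above mainImage.
float gray(float3 col)
{
    return col.r * 0.299 + col.g * 0.587 + col.b * 0.114;
}

// ...then inside mainImage, after computing the snapped uv:
float4 col = image.Sample(textureSampler, uv);
float g = gray(col.rgb);        // alpha channel isn't passed in
return float4(g, g, g, col.a);  // grayscale pixelated output
```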

JASON: So let's reload.

MARK: There you go. Now we're black and white. Okay. Now on to the fun part. The next thing we need to do, we need to locate the character that corresponds to our luminosity. So the lighter pixels get the lighter characters, and the darker pixels get the darker characters. So under your float g, let's add -- yep. And those are the images that we're going to use.

JASON: I just wanted to show for everybody who's joined since we talked about this, we're going to be using these as our light-to-dark characters. We've got two sets. We'll see if we actually have time to get through them. We can't go too much over time.

MARK: We got five lines of code to write.

JASON: All right. Let's roll.

MARK: Let's go through it. Right after -- oh, wrong one. Go to your ASCII shader. Right after your float g line there, we're going to make a float charUv. That's going to equal some magic math. It's going to be floor g times float, font texture num chars.

JASON: And that's our input from up here, for anybody who's not seen that.

MARK: Yep. And that's going to be divided by -- hold on. Where are we dividing? The floored bit is what's going to be divided. So outside of the floor -- there you go. Float font texture num chars. So it's similar to what we did with the grid, only we're selecting how many grid points over our character is going to be.

JASON: Right. Okay. Yeah, so this is letting us, no matter where we are, locking to the start point of one of our characters.

MARK: Yes, exactly. Based on its g value, its gray-scale value. Okay. We're going to make a new float2 now. We're going to call this subpos, like subfloat position. That's going to be float2. The first value is frac. Then in parentheses, it'll be coord X over grid X. Then the Y is going to be frac coord Y over grid Y. That's the fractional component. Like if we're at grid point 2.3, that'll give us back the .3.

JASON: Okay.

MARK: Okay. Next line. Oh, god. I hope these work. Float2, and this will be our sample UV. So this is actually going to be our sampling point for our character. And that's going to be -- this is the complicated one. It's going to be float2.

JASON: Okay.

MARK: CharUv, 0.0, plus, and it's another float2, and we'll do 1.0 over float font texture num chars, 1.0. I can explain what all these mean to you off stream. Then that -- after that, we're going to multiply that by subposition.

JASON: Okay.

MARK: Okay. And then -- oh, we got to go up to the top. And we got to add another input, which is going to be for our font texture. So go to the very, very top. It's going to be a simple one. You don't want to do a copy/paste in this case. It's going to be a uniform, and the type is texture2d, all lower case. And then it's going to be called font texture, capital F, capital T. Then we have our pointy boys.

JASON: Pointy boys.

MARK: And the string label, equals font texture. Then do a semicolon at the end of the pointy boy. Yep. There's no default. Now scroll down to the bottom. Now we just got to sample that thing.

JASON: Okay.

MARK: So we're going to say -- our float4. Character. Call it cha. Equals font texture.sample. It's got to be a capital S.

JASON: That's right, that's right.

MARK: And then texture sampler. And we're going to sample UV. Yep. And then just return cha. And this'll be your most basic. I had another step if we were going to get to it, where we could change color and a bunch of that. But this should do the trick, if we --
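Assembled, that last stretch of dictation might look like this sketch (the sampleUv construction follows the line-by-line description above; names are illustrative):

```hlsl
// Font texture input, at the very top of the file. There's no
// default; you browse to the character-strip image in the OBS
// filter UI.
uniform texture2d FontTexture<
    string label = "Font Texture";
>;

// ...inside mainImage, after g (luminance), coord, and grid:

// Which character cell does this luminance map to? Floor to the
// start of a cell, in normalized 0..1 texture space.
float charUv = floor(g * float(FontTextureNumChars))
             / float(FontTextureNumChars);

// Fractional position within the current screen grid cell.
float2 subPos = float2(frac(coord.x / grid.x), frac(coord.y / grid.y));

// Map that sub-position into the chosen character's slice of the
// font texture (each character is 1/numChars wide, full height).
float2 sampleUv = float2(charUv, 0.0)
                + float2(1.0 / float(FontTextureNumChars), 1.0) * subPos;

float4 cha = FontTexture.Sample(textureSampler, sampleUv);
return cha;
```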

JASON: All right. Everybody cross your fingers because I think we got to be done whether or not this works.

MARK: It worked. So click browse for font texture.

JASON: Oh, font texture. Browse.

MARK: Yep. And then grab one of those.

JASON: Okay. Whoa.

MARK: If you scroll down, the font texture width on this is 1260. Now it's not sampling this from the right spot.

JASON: All right, 1260.

MARK: It's 36 pixels high. And it's got 70 characters. And you can -- yeah, I don't know. You can start to see it.

JASON: Yeah, if I make it really small, right. And so this is definitely something that would work better if it was a video input. Let me see if I can switch this really quickly to just my camera. My camera doesn't want to -- I'm using it in multiple places right now. So I think I might have hit the limit of how many it's going to let me use. But you can see that it is -- you can follow the mouse around here. You can see that it is doing things as I kind of mouse over buttons. The texture is changing.

MARK: Yep.

JASON: This is really cool.

MARK: It's a lot of fun.

JASON: This is awesome. So I love this. This is so much fun. I feel like we have been able to do so much more than I thought we would have pulled off in 90 minutes. We are, however, out of time. So I want to throw everyone to your GitHub repo. I have that -- I'm working across multiple computers today because I usually work on a Mac. But that is okay. Let's see. Dropping this in the chat. Then I'm also going to do a shout out to your channel. There you go. So everybody who's here, if you want to see more of this, this is what Mark does. Finite singularity channel is very much tons and tons of fun things and a lot more. Mark, where else should people go if they want to learn more about this, learn more about you?

MARK: You know, in terms of learning more about me, it's pretty much -- I mean, you can go to my Twitter, which is @ -- it's even spelled wrong. Just search for me on Twitter or go to my -- I think I got the link on my page right there. Finitesingularty, I think. Someone can pop that in chat if they happen to have it. Probably my stream is where I'm most present. The other thing I would say if you want to play around with shaders, there is a website called Shadertoy, which has all sorts of -- they're GLSL shaders, which are a whole lot of fun to play with. You need to change the syntax a little bit to get them to work in OBS, but reach out to me. I'm happy to help you with kind of figuring out how to change it. It's not too, too difficult. But I've learned a lot about effects and things by looking at what people have done on Shadertoy. There's all sorts of awesome, awesome devs over there that make some really cool stuff. So if you want to learn more about the shader stuff, I would definitely head over there. But thank you so much for having me on. I've been looking forward to this. And it was an absolute blast.

JASON: Yes, it was so much fun. I'm so glad you were here. I cannot wait to continue digging into this. So everyone, this show, like every show, was closed captioned. We had Rachel from White Coat Captioning here today. That's made possible through the support of our sponsors. Netlify, Nx, New Relic, and Pluralsight all kicking in to make this show more accessible to more people. While you're checking out things on the site, go and take a look at what's coming up on the stream. We have so much good stuff coming. We're going to be talking about API development. We're going to be talking about databases. We're going to be talking about testing. We've got so many good things coming. So do the follow, do the like and subscribe, do all the things you want to do. But come back and hang out with us again. With that, I think we're significantly over time. So thank you, Rachel, for staying a little late. And we will see you all next time.
