
Building Web Demos + Q&A

In this casual solo session, join Jason while he works on code projects, banters with chat, and answers your questions (ask them at https://lwj.dev/ask). If you’ve got questions or want to help choose what Jason teaches next, don’t miss these sessions!

Full Transcript


Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

JASON: Hello, everyone, and welcome to another episode of Learn With Jason. It's me, your friend Jason, and I am very excited today to be talking about what I'm considering to be a learning double whammy, because, first, we're going to learn something that feels very computer science y, so, just learning stuff like that always makes me feel really smart. And then we're also going to have a super practical use case for it, where it's going to fix something that most of us probably do every day on the web. So, to teach us, let me bring to the stage our guest expert today, Charles Lowell. Charles, how you doing?

CHARLES: Hey, how's it going, Jason? I'm doing well.

JASON: I'm super excited to have you here. I'm really looking forward to this one, because I don't have a computer science background, right, and I never went to school for this stuff. It's all self-taught. So, just about everything I have is practical experience, and that means that a lot of things that sound computer science y feel really intimidating to me, and I feel like, oh, that's for somebody who went to college. And, so, to be honest, when I first got wind of what we're going to talk about today, I was pretty intimidated by the subject matter, right. But with that little cliffhanger, before we talk about that, let's talk about you. So, for folks who aren't familiar with you and your work, do you want to give us a bit of background on yourself?

CHARLES: Yeah, so, my name is Charles Lowell. I'm the founder and CTO of Frontside Software, which really now at this point encapsulates most of the arc of my career, which has pendulumed back and forth between the backend and frontend, and now I'm doing a lot more backend work. But the idea that got us started was, you know, we were doing all this backend work, and this is, you know, we're talking about the bad old days of 2005. And we realized that there were a lot of architectural patterns that were kind of just received knowledge, just accepted, in the backend that weren't there in the frontend, especially in web development. There just wasn't much architecture. There was just you, Netscape Navigator, and

JASON: And a pocket full of dreams.

CHARLES: Right. Anybody ever done debugging by clicking through alerts? This is before console.log in Internet Explorer. I was one of those kids. How many times can you click through 10,000 alert boxes?

JASON: Oh, geez.

CHARLES: So, we tried to bring these architectural patterns over into doing frontend work, and we've bounced back, like I said, between the frontend and the backend, but, really, we're just all about trying to find the best, simplest way to do things. And I actually want to speak directly to what you were saying about, you know, being intimidated by something that's computer science y, because I think the thing that we're about to talk about has a really great answer, not just practically, but for that particular way of thinking about computation. If you want, I can just go right into it, because

JASON: So, why don't we kick right into that then and use that as an introduction for our topic today, which the computer science term for it is "structured concurrency." So, for folks who like me had never heard of that term before I met you, you want to give a high level of that?

CHARLES: Yes, so, the idea is actually one of those absolutely profound ones that you can use to make complex things simple. So, the concept of structured concurrency actually dates back, or is a derivative, of the idea of structured programming. And we actually don't think about structured programming. It's a computer science y term, so to speak, but it's now the way that we do things. So, structured programming literally means if I have an if statement, and I enter an if statement, I go in between to the curlies, there's no way I can go anywhere in the computer's instruction set except out the other end of those curly brackets. Basically, if I enter a curly bracket, an opening, flow control is going to come out the other side, right.

JASON: Got it.

CHARLES: That's how it works in while loops, that's how it works in a function definition. That's how it works in an if statement. That's how it works in a switch statement. There's structure. The structure of your program determines how the instructions are going to be executed in it. And what's funny is, like, it's so obvious to us now. Right. Because it's intuitive. But it wasn't always that way. You know, there was this guy, who was actually a professor at the University of Texas, named Edsger Dijkstra, doing work in the '60s and '70s. Back then, computer programs were just a big hairball of machine instructions, and there was no structure to it. So, he was the guy who rolled out these ideas.

JASON: When you say it was a big hairball, this is when you start looking at like the go to instructions, right?

CHARLES: Yes, the go to, exactly. So, go to was kind of the way to do it. You know, even if you had a subroutine, your program would execute, then say, okay, I need to go to this subroutine, then I can go over here, I can go over there. It became unmanageable. Everyone is familiar with why go to is a bad idea. He was the first one to say that. He's the guy that wrote the paper "Go To Statement Considered Harmful": we need to stop it, never even do it.

JASON: Is he the originator of the considered harmful format?

CHARLES: Yes, he is.

JASON: I love it.

CHARLES: And his key intuition, and this is what's important when you think about things that are computer science y, is he said the most important thing, and this is the principle that allowed him to derive the whole idea of structured programming, is that the execution of our program has to follow the text of the program. Like, if we see it, then we know what the program is going to do. And that seems, again, so obvious, right. But what he was saying is if I see a function call, you know, if I call console.log, console.log is going to take that string, and it's going to go do something with it, but at the end of the day, I'm going to return back to the point where console.log was called. Program execution follows program text. It's such a simple idea. So, then he would say, okay, with this principle in mind: if I can't see it in the program text, it's not happening in the program execution. What does that imply? That implied the entire concept of structured programming. So, structured concurrency is a simplification in the same manner: if I do not see an asynchronous process in the program text, then it is not running.

JASON: Okay, okay. So let's talk a little bit about... we talked about structure, I think that makes sense. And I love this idea that in retrospect, these choices are so obvious, but at the time, they are groundbreaking, and I always think about it like wheels on a suitcase, or a handle on a knife. After we saw it, we were like, how did we never do this before, right? But up until that point, we had just accepted that you carried a suitcase, you know, you had to rent a cart to carry your suitcase around if you needed wheels. It's this same general thing where we live with patterns, because a lot of times we haven't had the opportunity to sit down and really question them.

CHARLES: Right.

JASON: So, this is really exciting. So, structure makes sense. Let's talk a little bit about concurrency. So, for somebody who is sort of uninitiated to the term, what are we talking about when we talk about concurrency in let's stick to JavaScript today specifically?

CHARLES: Okay. So, what I would say when you're talking about concurrency, especially in the context of JavaScript, is that you basically have multiple functions scheduled to run at the same time. So, in JavaScript, this would be like in your event loop, you've got a queue of things that are going to run. And you're doing multiple ones at the same time that can possibly touch the same data, and possibly be living in the same context. So, you know, an example: let's say I've got a web page or something, or, well, we can do two examples, one on the backend and one on the frontend. I guess we can start on the frontend, where I have a page that I need to load, and I want to fetch some API responses in order to render that page. I can do those in sequence. That would not be concurrent. Or I can do them at the same time. So, I begin the process of getting one API response. Then I begin the process of getting the second one, sorry, I begin both of them at the same time. And then when they both come back, then I am able to continue. But the idea is that both of those things are happening concurrently.
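To make that concrete, here is a rough sketch of the two approaches; the endpoint URLs are made up for illustration:

```js
// Sequential: the second request doesn't start until the first has finished.
async function loadPageSequential() {
  const user = await fetch("/api/user").then((res) => res.json());
  const posts = await fetch("/api/posts").then((res) => res.json());
  return { user, posts };
}

// Concurrent: both requests are in flight at the same time, and we continue
// once both of them have come back.
async function loadPageConcurrent() {
  const [user, posts] = await Promise.all([
    fetch("/api/user").then((res) => res.json()),
    fetch("/api/posts").then((res) => res.json()),
  ]);
  return { user, posts };
}
```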

JASON: And, so, this is the reason that this is interesting in JavaScript and a lot of other languages, but particularly in JavaScript, is that by default, JavaScript is single threaded, right. So, we are writing instructions and those instructions have to be executed in sequence, like you said, as like a default. And when you get to something like you are talking about where I need to make two API calls and both of those are going to be, I don't know, maybe 200 milliseconds, I don't necessarily want to do one, then wait, then do the next one, because when we talk about network waterfalls and stuff, those issues are there.

CHARLES: They are real, yeah.

JASON: But this is where I think things get really interesting, because we have options to do things asynchronously in JavaScript, where we go out of sequence, and we introduce, like, a promise. And I know for me this was a huge mental hurdle that I had to overcome when they first introduced promises in JavaScript. I was like, how do you even think about this? Your program breaks apart, because you've got your program, then you've got this thing shooting out to the side, and there's .then, .catch, .finally, but that's not related to my program anymore. How do I get these things connected? And it started turning, for me, into this kind of pyramid of doom, you know. Everything was nested inside of one thing, inside of another thing. Laying some context for what I think we're going to talk about next, I think the impetus for things like async/await in JavaScript was the challenge of the mental model of promises in an otherwise linear programming flow. And async/await lets us make it linear again.

CHARLES: What's interesting is that async/await actually brings structured programming to asynchrony, right. It's the application of Dijkstra's original idea to asynchronous programming using async functions, because now if statements work as expected, try statements, while statements, all that stuff. It's an attempt to bring structured programming to asynchrony. So, overall, I think it's definitely an improvement. It was a huge improvement over the status quo.
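For anyone following along, a small sketch of that shift, with hypothetical endpoints:

```js
// Promise callbacks: the work hangs off to the side in .then/.catch/.finally
// chains, and nesting creeps in as soon as one step depends on another.
function loadUser(id) {
  fetch(`/api/users/${id}`)
    .then((res) => res.json())
    .then((user) => {
      return fetch(`/api/teams/${user.teamId}`)
        .then((res) => res.json())
        .then((team) => console.log(user.name, team.name));
    })
    .catch((err) => console.error(err))
    .finally(() => console.log("done"));
}

// async/await: the same logic reads top to bottom, and try/catch/finally,
// if, and while behave the way structured programming taught us to expect.
async function loadUserStructured(id) {
  try {
    const user = await (await fetch(`/api/users/${id}`)).json();
    const team = await (await fetch(`/api/teams/${user.teamId}`)).json();
    console.log(user.name, team.name);
  } catch (err) {
    console.error(err);
  } finally {
    console.log("done");
  }
}
```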

JASON: Absolutely. Okay, so, that's actually a really good example of a thing that happened in, I think, a lot of our experience, where we went from promises to async/await, and that's a really good example of exactly what you're talking about. It was hard, but we dealt with it, because that was what we had. Then an option came along to make it more structured, make it visually flow, and we went, ah, okay, that makes sense, I'm going to do it that way.

CHARLES: Right.

JASON: But nothing's free.

CHARLES: Program execution follows program text.

JASON: Yeah. No such thing as a free lunch, so what are the tradeoffs when you introduce this type of programming into JavaScript?

CHARLES: Are you talking about

JASON: With async/await... I guess what goes wrong with async/await that would make something like structured concurrency necessary in the first place?

CHARLES: Okay. So, the fundamental problem with async/await is that there is no longer any way to say that I am no longer interested in the value that this asynchronous process is going to produce. Or I'm no longer interested in the side effect that this thing is going to produce. So, this is the fundamental problem: if I make an API request, and then, let's say I'm in a single-page application, I navigate away from that particular page or that particular route that's loading that data, I don't care about that data anymore. It's no longer part of the computation. But this thing is going to continue existing for some time. This request, right. You can think of it like this. If I've got a function, and I have a variable, let's say I say const x equals 5, or, you know, const x equals new object, something like that, once the function returns, I don't need x anymore. It's not part of the computation. It's passed out of scope. And, so, in JavaScript we have this thing called the garbage collector, which is really, really fantastic. And the garbage collector leans on the structure of the program. In other words, the garbage collector understands when something has passed out of scope. And once it has, then you don't have to worry about it as the programmer anymore; the runtime is going to clean it up for you. There are other ways to do this. Rust uses ownership, but it's the same concept: you can analyze the structure of the program, see how it reads, and say, oh, okay, we don't need this thing anymore. And, so, we can reclaim it.
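A sketch of that problem in plain JavaScript (not Effection code), with a made-up endpoint:

```js
// Nothing in this code is able to say "I no longer care about this result."
let currentPage = "dashboard";

function navigateTo(page) {
  currentPage = page; // the old page's data is no longer part of the computation
}

async function loadAnalytics() {
  const res = await fetch("/api/analytics/huge-report");
  const data = await res.json();
  // Even if the user navigated away seconds ago, the request above kept
  // running, the bytes kept downloading, and this line still executes.
  if (currentPage !== "analytics") {
    return null; // we throw the result away, but only after paying for it
  }
  return data;
}
```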

JASON: Got it.

CHARLES: Structured concurrency is the same concept, except applied to concurrent processes. And it's to say, oh, the value being produced by this asynchronous process is no longer needed. Therefore, it's kind of like if a tree falls in the forest and there's nobody there to hear it, did it make a sound? You kind of want to, and this is mixing metaphors, burn that tree out of existence if there's nobody there to hear it, right?

JASON: But this is interesting, because it sort of answers that question. If you fire off a bunch of asynchronous requests and somebody leaves the page, do those requests continue to exist, and the answer you're giving me is, yes, they do.

CHARLES: Yes.

JASON: Even if I'm on a page and I make a big asynchronous request to, say, the analytics page on the dashboard, which has to make a huge query, lots and lots of data, and I decide I didn't mean to click on that and I click away, I'm still going to have to download all of that data, because as far as the request is concerned, it still exists until it resolves. Right?

CHARLES: Uh huh.

JASON: So, that's a big deal from... you know, obviously, in a contrived example, we can go, that's probably not that big of a deal, but even on the Learn With Jason site, for example, I have multiple API requests that all come together, and they have to be done as one unit in order to succeed entirely. And if any of them fails, I don't necessarily want the other six to keep going. I want to immediately get an error and retry and not burn that extra memory or time.

CHARLES: Right.

JASON: So, you're saying that by implementing structured concurrency, we have the control to be able to do that.

CHARLES: Yes, and I would say even more so: you have the control, and the default is that it does that. Because there are ways to mitigate these problems in JavaScript. You know, I've worked with all of them; abort controllers and abort signals are probably the biggest one. And, so, there are ways to cope, but it is always opt in. And, so, what you're talking about is making it so the guardrails are just there. It's kind of like flow control for if statements and while statements: it's the air we breathe. We don't have to think about saying, okay, now when I call this function, I need to pass a parameter to it that's going to tell it I need this function to return to this point. Or when I allocate a variable, I need to tell it, actually, I want you to tear yourself down when you're gone. Otherwise you're just going to have foot guns. So, if we imagine a world without foot guns, then nobody gets their foot shot off. And, so, that's the problem. I actually see people writing in chat about abort signal. I'm actually in the middle of writing a blog post called "The Heartbreaking Inadequacy of AbortController," because, man, it has broken my heart so many times, which is how I ended up where I am today.
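The opt-in escape hatch Charles is referring to looks roughly like this; note how the controller has to be created and threaded through by hand at every call site (the endpoint is made up for illustration):

```js
const controller = new AbortController();

async function loadAnalytics(signal) {
  try {
    const res = await fetch("/api/analytics/huge-report", { signal });
    return await res.json();
  } catch (err) {
    // fetch rejects with an AbortError when the signal fires.
    if (err.name === "AbortError") {
      console.log("request cancelled");
      return null;
    }
    throw err;
  }
}

// The caller is responsible for wiring cancellation up. Forget to call
// abort() on route change, and the request lives on.
loadAnalytics(controller.signal);
controller.abort(); // e.g. when the user navigates away
```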

JASON: And where you are today is a library called Effection. So, let's talk a little bit about that, and then I think we probably want to jump over and start writing some code.

CHARLES: Okay. So, Effection, in a nutshell, and, in fact, this is actually the slogan of our project, is structured concurrency for JavaScript. So, our goal is to provide the simplest API possible that leverages your knowledge of JavaScript to be able to write JavaScript with structured concurrency guarantees. So, we try to take every concept that is in JavaScript and have a mapping of it into the Effection world. So, for example, wherever you would use a promise, you're going to use an operation in Effection. Wherever you would use an async function, you're going to use a generator function. Wherever you would use await, you're going to call yield star. Wherever you would use an async iterable, you're going to use a stream. Any place you'd use an async iterator, it's going to be a subscription. And the APIs are almost exactly the same. And we've only added as much as needed to give you this ability to inhabit this world without foot guns.
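As a rough sketch of that mapping, using only the pieces of the Effection API that come up later in this episode (main and sleep), the same one-second pause looks like this in both worlds:

```js
import { main, sleep } from "effection";

// async/await version: awaiting a promise inside an async function.
async function greetAsync() {
  await new Promise((resolve) => setTimeout(resolve, 1000));
  console.log("hello from async/await");
}

// Effection version: yielding to an operation inside a generator function.
// promise -> operation, async function -> function*, await -> yield*.
function* greetOperation() {
  yield* sleep(1000);
  console.log("hello from Effection");
}

await greetAsync();
await main(function* () {
  yield* greetOperation();
});
```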

JASON: Got it. Okay.

CHARLES: So, there is a learning curve, but our goal is to make that activation energy very, very low. Because there are other structured concurrency libraries out there, some really, really good ones, but there's a lot of API surrounding them. So, our approach here is to be something that you can sprinkle into your application today. It works really well, you can call it from async code, you can call async code from it, and it's really easy to embed.

JASON: Got it. Okay. All right. So, I have a lot of additional questions, but I think it will make more sense if we start looking at code. So, let me take us over into this view here. And I'll do a quick shout out. Make sure if you want more on this, have questions no, not that. Don't look at that. That's not what I meant to share. Here, this one. This is the one I meant to share. So, there's

CHARLES: My Twitter feed, you should all follow me.

JASON: Yes, so, this is Charles' Twitter. And then this is a big old document about structured concurrency. This is some academic reading, light academic reading, for anybody who is interested in such things.

CHARLES: Right.

JASON: And then...

CHARLES: I would add that's kind of like the blog post that was heard around the world when it comes to structured concurrency. So, if you really want to get into it, this is the kind of thing where I read it and the scales fell off of my eyes. Same thing with the developers of Java, same thing with the developers of Swift. This moved a lot of people technically from one world into another. If you have the time, it does take almost an hour to read, but for a computer science paper, it is very approachable. Very approachable.

JASON: Okay. I'm into it. I mean, I'm not going to lie to you, Charles, I'm not going to read this, but I hope somebody else does. But, okay, this is Effection, the library that you built. This is going to be how we actually get into this. There are some good docs; I clicked on the mental model page, we've got all the sort of good stuff here, and I'm going to stop there and ask you, if I want to get in and get started, what should I do first? As long as it's not reading an hour long paper.

CHARLES: That's like the equivalent of eating your vegetables. Go read an hour long blog post.

JASON: What's the version that's deep fried and covered in cheese? That's what I want.

CHARLES: So, I would say, we do have examples that are there in the documentation. But, honestly, I would just start writing a little script in Node that actually has state in it, like long-running things that you can then exit out of very quickly. And I think our first example is going to show that pretty well, like how you might dive in, because I think it's pretty simple.

JASON: Okay.

CHARLES: I hope it's pretty simple.

JASON: Wait, I need to actually create this folder. We'll go structured concurrency. Effection. And then I'm going to get in here. We are going to git init and then create a new little Node project, and then I'm going to open this up.

CHARLES: Okay.

JASON: So, I have the barest of bones here. Just a plain old node project and I'm going to create a new file called index.js. Just to make sure that everything works the way that I expect it to. I'm going to save this, then I'm going to run node index. And it says there we go. All right. We got Node running, we're on version 20, and I'm ready to write some structured concurrency.

CHARLES: Okay. Tell you what, why don't you add Effection to your package.json project.

JASON: Straight up Effection?

CHARLES: Yeah.

JASON: Look at it right there. Okay.

CHARLES: Yep. We should be getting version 3 or above. Yep.

JASON: 3.0.3.

CHARLES: Let's create it; we can do it inside index. You want to import main from Effection. It's a named import. This is a little bit of a diversion, but the only time I ever use default exports is for plugins, where I'm requiring something dynamically. Otherwise I just find the name helps. So, if a computer is going to do this import dynamically, then go ahead and use a default export, but for people, let's just use names. They are nice. And then main is an entry point into the Effection world. There's another one that's lower level, but main gives you a bunch of great stuff: whether you're on Node, Deno, Bun, or the browser, it serves as an entry point and gives you automatic cancellation. So, let's go ahead and say await main, and we're going to pass it a generator function, which is the core of our program.

JASON: Do I remember how to do this?

CHARLES: It's function star. That's a generator. You did it!

JASON: This is the first generator I'll ever have written, by the way. So let's go.

CHARLES: Fantastic.

JASON: I've seen them on the Internet.

CHARLES: Here's the thing about generators. They are a superset of async functions. You can think of them that way. So, I'm going to drop a computer science y term there, but it's fine: they are both coroutines, but the difference is that a generator is a general coroutine, whereas an async function is limited to the domain of composing promises. So, in terms of power, you can kind of think of a generator as a class of things. Like, if you think of an async function as a car, you know, it gets you from point A to point B. A generator function is like a vehicle, you know, including spaceships and airplanes and drones and boats.

JASON: Got it, got it, got it.

CHARLES: So, really, all it is, is everything you know about async functions can be applied to generators. It's just you need to actually zoom out like 10x, 100x. It's an infinite gathering of power.

JASON: Got it.

CHARLES: So... so, now what we're going to do is, first of all, we can just do our console.log here. All right. Now, that should work. Did it?

JASON: Oh, no. What did I do wrong?

CHARLES: Let's see, what happened?

JASON: Oh, I got to make it a module.

CHARLES: Yeah, put your type module in there. Okay. All right. Now I definitely saw that it worked. Okay, great. Okay, so, now let's do something Effection y. So, one thing that's different about structured concurrency is every function always returns. And this is something that's different from what you're used to. But if we put a suspend inside this main function... so, say yield star. Yield star.

JASON: Yield star.

CHARLES: Yield star, and then suspend is a function you got from Effection.

JASON: Yeah.

CHARLES: Now yield star, I usually put the star right against the yield, just because it really is, it's like await, okay. Now go ahead and run that again. But you'll see that your node program doesn't return.

JASON: Yes, we're stuck.

CHARLES: But hit control C. So, what happened there? And the answer is that you'll always return from an Effection function. Which is a little bit weird, but if you put a try finally around the suspend

JASON: In the yield?

CHARLES: Around the yield. Try finally.

JASON: This goes up here. And then this is the finally.

CHARLES: Yep, finally. And then you can console.log out of here or something like that. Right.

JASON: Okay.

CHARLES: So, now when you run that and hit control C...

JASON: Oh...

CHARLES: It's actually running to completion, okay. You can't do that with async/await. And this is such a simple thing, but what it allows you to do is... the suspend is actually a concurrent function in the sense that it's going to go off and do something. And suspend is kind of like the zero of Effection: it just takes forever. But what's happening is that there's a control C handler inside main, which then exits main, so the suspend and this function pass out of scope, and it's exited immediately. So, this actually means that when it's time to go, it's going to start executing again, so it's always going to execute the teardown, always, without exception.
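Assembled from the steps above, the little index.js is roughly:

```js
import { main, suspend } from "effection";

await main(function* () {
  console.log("running... press Ctrl-C to stop");
  try {
    // Park here forever, or until this scope is shut down,
    // for example by the Ctrl-C handler that main() installs.
    yield* suspend();
  } finally {
    // Unlike a bare async function, this teardown always runs on the way out.
    console.log("goodbye!");
  }
});
```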

JASON: Okay.

CHARLES: Let that sink in. This is

JASON: I think the thing that's always fascinating to me is when I see something like this, and this is like a stepping stone toward the bigger thing, but the fact that you can add a control C handler into a node program is already like an earth shattering revelation for me. I did not know that was even possible. So, we're going to learn a lot today, everybody, buckle up.

CHARLES: All right, all right. Well, fantastic. Fantastic.

JASON: Okay. So, I get this in the abstract. So, we're basically putting everything inside of this main function, great. Then that means we can hold, waiting for things to happen. So, this is the async/await, effectively. And then we also have a handler when everything is done, which is very cool. I'm already imagining ways that would help. I think Jacob even said, imagine if all your dev servers tore down like this; how many frameworks could benefit from having a control C handler that says, hey, here's what happened during your dev run, or something. Really, really cool opportunities there. So, let's maybe go a little more practical. I think you had an idea for like a really simple demo.

CHARLES: Yes, so, actually, if you want to get started, I linked to a GitHub example repo in chat. Maybe we could use it as a starting place, because I think there are some primitives there that we can kind of run through, and then we can actually do the coding of leveraging the structured concurrency.

JASON: Was that this one here?

CHARLES: It's the one right below. No, that one's more like the teaser, kind of showing you how things might not be behaving the way that you think they are. How program execution might not actually be following program text.

JASON: Let's run this, actually, yeah.

CHARLES: This is a brain teaser, yeah.

JASON: Chat, here's a little challenge for y'all. Let me comment this part out, and then I'm going to put this up on the screen. And... so, looking at this code, how long will this program take to run? So, we're running a promise race, and whatever the number is, it gets passed in and gets turned into seconds. This one sleeps for one second, this one sleeps for ten, and what promise race does is return with whatever the first one is that completes. So, put your guess in the chat: how long does this program run, if we run it? While y'all do that, I'm going to get ready to run it. All right, we got a couple votes for 10, a vote for 1. Linda says 11. Let's try it. Did I not save? I didn't save. Hold on. Now we're going to run it for real. And... ten seconds. So, even though this completed after one second, we still had to wait for the slow one, even though the race discards it.

CHARLES: Right.

JASON: So, that's a really good illustration of the problem.

CHARLES: Yeah, it's a really great illustration of the problem in one line of JavaScript. So, the question is, does anyone know why it took ten seconds? And the answer... you know, spoiler alert, or this is the reveal, is that the setTimeout is actually a resource that Node is holding on to, saying, I cannot exit until this timeout is resolved. But there is no computation in the text that depends on that ten-second timeout. You know, we already have our answer, one second. One second won the race, and yet we have to sit around waiting for nine more seconds in order to finally bail out. So, what's interesting about this is that it's a memory leak that has a lifetime of ten seconds. So, it's almost more pernicious than a normal memory leak, because these things where you're waiting 500 milliseconds too long, 5 seconds too long, they only reveal themselves at scale.
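A reconstruction of that teaser (not the exact file from the linked repo) looks something like this:

```js
// How long does this take to exit? The race settles after one second, but
// the process cannot exit until the ten-second timer, which nothing needs
// anymore, finally fires.
function sleep(seconds) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(seconds), seconds * 1000),
  );
}

const winner = await Promise.race([sleep(1), sleep(10)]);
console.log(`winner finished in ${winner} second(s)`);
// Prints after ~1 second; the process still takes ~10 seconds to exit.
```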

JASON: Right.

CHARLES: When you have thousands of requests, or every request is just moving just a little bit over the threshold of acceptability, so then all of a sudden things start really to have problems all at once.

JASON: Yeah. And I think this illustrates one of the things that is really tricky when you start thinking in asynchronous patterns. Like when you move things to work side by side, when you have work that maybe is going to run but you don't actually need, as you said, it's really only noticeable at scale, but it's also that death by a thousand cuts thing that happens over time to a lot of big apps. You won't notice 200 milliseconds, but you'll notice if 200 milliseconds happens 15 times, and over a long period of time, as different people work on it, as more layers of complexity get added. I remember I used to work at IBM, and one of the apps that I inherited just over time sort of accumulated these geological layers of new code. And they were all blocking each other, because they had set them up to sort of render, and then load the next thing and start rendering it again. I'm sure nobody was doing this on purpose, but it added up over time to this page taking 60 seconds to load. Each of the individual components is only a second or two, but the net outcome is really rough for the user. And this, I think, is such a good example of one of the teeny tiny ways that you wouldn't even notice you're adding that sort of stuff to your code.

CHARLES: Uh huh, yeah, absolutely, absolutely.

JASON: So, you told me to start here.

CHARLES: Yeah, I would start here.

JASON: Okay.

CHARLES: While you're cloning that, I would say what we're going to be doing: we've all used nodemon, or maybe we haven't. Or you're using Vite or the create-react-app dev server; as you edit code, it will restart and reload. And, so, what we're going to be doing is write a file watcher. It may seem like a Herculean task, but it's actually really easy.

JASON: Okay.

CHARLES: Here's what we've got that's something similar. So, here's a main function. I've written this use server, then it suspends. Let's see if we can get that running. And what it's going to do is print hello world over the browser. Yeah, run that start program. NPM install.

JASON: I need to install my dependencies. Okay, we're running.

CHARLES: Yep, start that. You should see...

JASON: There's our hello world.

CHARLES: There's our hello world, fantastic. And, so, as we're, you know, hacking on this thing, we could make it say hello planet or hello universe, but we're going to end up with stale state. Where, you know, if you say hello chat and then you go and reload that page, it's still going to say hello world. Which is not something that we want. So, what we're going to do is implement a watch function, or watch capability, which will actually look at this source code and then restart the server every single time it sees a change. So, I've actually started the watch thing, but it's left kind of as an exercise for the reader. So, if you look at that file right there, you can see it's going to have this use command. Now, if you want to browse into use command really quickly, just to kind of see it, it's really simple. It uses what's called an Effection resource. So, this is going to create the child process, and you might recognize the kind of try suspend finally pattern here. But what it's doing here is instead of suspending, it's actually going to be returning a value back to the computation that called it. So, we want to use a command, and, so, we're going to implement it saying use command, but what we're saying is once we're done with this command, we're exiting from it. So, we need the finally to say proc.kill. So, we're going to kill that command. So, whenever we use a command, we can guarantee it always gets killed when we don't need it anymore.

JASON: Okay.

CHARLES: So, it's simple, but again, profound. It's like saying, I've got this resource, it provides a child process, and finally, kill the child process when the resource is released, when I don't need the resource anymore.
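Reconstructed from that description (the repo's exact code, spawn options, and kill signal may differ), the resource looks roughly like this:

```js
import { spawn } from "node:child_process";
import { resource } from "effection";

// An Effection resource wrapping a child process: callers get the process
// for as long as their scope lives, and the finally block guarantees it is
// killed the moment that scope exits.
export function useCommand(command, args) {
  return resource(function* (provide) {
    const proc = spawn(command, args, { stdio: "inherit" });
    try {
      // Hand the running process back to whoever yielded to this resource.
      yield* provide(proc);
    } finally {
      proc.kill("SIGINT");
    }
  });
}
```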

JASON: Got it, okay. So, I'm going to repeat back what's happening here to make sure that I understand. So, when I pass in this use command, I put in a command and that command could be node. And then I have args, which could be start or node index or whatever it is.

CHARLES: Right.

JASON: It then returns a resource, which is from whoops which is from Effection.

CHARLES: Right.

JASON: And that resource uses... where did

CHARLES: There's a resource function, which gets passed a provide operation. It's also worth looking at online; you can see a resource is just an operation that returns a child process. An operation is like a promise. Operation is Effection parlance for promise.

JASON: Got it. What we're doing is spawning a child process that runs that command. And then we provide that to our application, presumably. And then if the application terminates this process or decides it doesn't need it anymore, this finally makes sure that it actually gets removed and isn't just left hanging around in memory.

CHARLES: Right, right, exactly.

JASON: Okay.

CHARLES: So, it's saying you can't not do this. You can't not kill the process after you're done with it. You might need that process for a really long time. You might need it for ten minutes, you might need it for ten seconds, but whenever it's no longer needed, it will receive the SIGINT signal. So, that means that you can freely use it. So, here's our watch script that doesn't actually work yet, but we're going to use this command and then we're going to suspend. So, if you look inside, you'll see we're passing the arguments there. So, there you see the dev command, we're calling watch, and we're invoking the tsx command with main.ts.

JASON: Here's our command. Here are the args.

CHARLES: Yep, there's the args.

JASON: That would mean then I need to get the command and args, or does it know to pick this up when you run it like this?

CHARLES: Right, so, main actually passes you the args.

JASON: Nice, so I don't even have to do that work. I'm used to if you use yargs or something like that, you have to actually retrieve those out of the process.

CHARLES: Right, right. Yeah, it normalizes them. Whether you're using Node or Deno, it's going to normalize those for you. This script, all it does is straight delegation. Use command, then it immediately suspends. But this has the advantage that if we control C, it's going to actually SIGINT that process.

JASON: Got it. So, now if I stop this

CHARLES: Yep. Run your

JASON: Run dev now.

CHARLES: Yes. It's going to do the same thing.

JASON: We've got a different port. It actually SIGINTs when you get to the end here. Okay. Good. Good.

CHARLES: Yep.

JASON: So then

CHARLES: You can use that command twice: if you, on line 5, you know, duplicate that line, for example... you would never do that. But

JASON: Why not? We're here for the chaos. So, we're running twice now.

CHARLES: Right. And when you hit control C, we don't need either of those commands, they both receive the sigint signal.

JASON: What is it, lsof... wait, is that right?

CHARLES: I think so. 50633.

JASON: They are gone.

CHARLES: They are gone. If you've ever done process programming in Node, it's hell, but this wrangles it by putting strict structure around when that command is existing and when it's not. Once we exit that main function, the command goes away.

JASON: This is pretty slick.

CHARLES: Uh huh. So, let's start: we're going to make a watch function. So, basically, every time that we receive a file system change, we want to restart this command. Now, the way we're going to do this at first, instead of watching, let's just restart it every five seconds or every three seconds.

JASON: Okay.

CHARLES: What we're going to do is put a while loop around this. So, say while

JASON: Inside?

CHARLES: Yeah, while true. Around the whole thing, yeah. While true. Then we can move those commands up. And instead of suspending, let's say sleep 3000. Now, there is one thing we're going to have to do here. We're going to have to use call. Because we need to establish a scope around the body of this while statement. So, inside the while loop

JASON: Inside the while loop.

CHARLES: Inside the while loop, yeah.

JASON: Okay.

CHARLES: You want to say call. And then that takes a function star. Takes a generator.

JASON: Got it.

CHARLES: I like to call them an operation function. Yep. And move those two things up into there.

JASON: Oh, and I need to actually do one of those. Okay. Now, so, effectively, what we're doing is we are just continuously calling the loop to start the command, and actually... how does this cancel the command? Wouldn't this just start a new one, because it would keep getting called?

CHARLES: Well, that's what's interesting. Once we finish our sleep, we exit the call, the command is no longer needed, and it just goes away.

JASON: Oh, so what you're saying is this call creates a scope, so that scope is continuously opened and then destroyed, so it I get it, I get it, I'm with you.

CHARLES: This is actually something to note as a side bar: when you're using structured concurrency, you hardly ever explicitly say halt, cancel, tear down, shut down, or this or that. It just happens when things pass out of scope. The same way you don't deallocate variables, you let the garbage collector take care of it.

JASON: I think I broke something, because we're not getting our port.

CHARLES: Let's see... okay. Oh, oh, right. You need to yield. Call constructs the operation, but you need to yield star to it, the same way you need to await a promise. This is also a key difference in Effection: operations are stateless. They don't do anything until you actually await them or yield to them.

JASON: Okay, so, here they go. Continuously doing their thing.
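At this point the watch script, assembled from the steps so far, looks roughly like this; the import path for the use command resource is a guess:

```js
import { main, call, sleep } from "effection";
import { useCommand } from "./use-command.js"; // hypothetical path to the resource above

// Intermediate version: restart the command every three seconds. call()
// opens a scope around each iteration, so when its body returns, the child
// process passes out of scope and is killed automatically.
await main(function* (args) {
  const [command, ...commandArgs] = args;
  while (true) {
    yield* call(function* () {
      yield* useCommand(command, commandArgs);
      yield* sleep(3000);
    });
  }
});
```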

CHARLES: If you click on some of those old ones, you should see those old links, like, don't exist anymore.

JASON: Got it, got it. So, we should probably set an actual port here, so that we get that update. So, our use server, then the port was... let's go with one that I'm pretty sure is free.

CHARLES: Okay.

JASON: Now it should always go to these. There we go. Okay. Now if I come back out here, and I go to 4444, we get that. And if I come in here and I say hello updates, it won't change anything yet, but when I refresh... did I do this wrong? There it is. There it is.

CHARLES: Right. So, we're only doing every three seconds.

JASON: I was early, yeah.

CHARLES: Right. So, what we're going to do now is we'll replace this sleep with something that's a little bit smarter.

JASON: Okay. So, let's go to our watch again. And, so, this then needs to be something that says if let's see, what do we get, the file itself whenever it gets changed is going to have a modified date, right. So, we should be able to just see if it's been modified.

CHARLES: So, what we actually want is to be proactively notified when this file changes. So, Node actually has a file system watcher. Tell you what, let me go ahead and link to that. I probably should have already. So, what we're going to do is use this guy right here, and I'll post it in chat. Here. Here you go. So, we're going to use this, and we'll consume it in the same way that you would use an async iterable here. Tell you what, let's just go ahead and copy that code from the Node example and comment it out. We can use that, too, yeah.

JASON: Just like that, right.

CHARLES: Just like that. Uh huh.

JASON: Okay.

CHARLES: And then what we're going to want to do is create a subscription. So, the way that you would consume this normally in Node is with an async iterable. And if we look back at our Rosetta Stone, we have the equivalent concept of an async iterable, called a stream. And the async iterator is a subscription. We want to take the async iterable from this watch thing and convert it into a stream and then a subscription. So, we could use that code. I would just copy that const watcher equals watch line there.

JASON: Okay.

CHARLES: And let's I guess we could just do it right in here, yeah. After line seven, we create our watcher.

JASON: After line it's inside the call.

CHARLES: Okay, that thing... oh. Right. We need to create it. So, here, this is actually perfect. So, before line seven you say const signal equals yield star use abort signal, from Effection.

JASON: Equals yield star.

CHARLES: Then it's use abort signal.

JASON: Okay.

CHARLES: Now, you'll notice this is different than the way you would use this normally in JavaScript: we're only working with the signal. We're not working with the controller. So, we've already cut the API in half, because we don't need to care about calling abort ourselves. When this scope is exited, this abort signal will fire. The use abort signal resource does the same thing: create a controller, provide the abort signal, and finally call controller.abort. And, so, what it lets you do is say, now I've got a signal tied to a scope. So, when the scope dies, the signal dies.

JASON: Right, okay, okay. So, this is making more sense to me, and I had just connected a dot, because we're not going to be randomly running this anymore. We're waiting until something is changed at which point you would need to recreate the watcher, which is why we're creating it inside the scope. I'm with you, I'm back. We're on the same page again.

CHARLES: Okay, awesome. Now, is that watcher there? What type is that, an async iterable or an async iterator? Okay. So, what we want to do is we want the equivalent; if we look at the Rosetta Stone over there, for an async iterable it's a stream. Okay. So, you can just say const watcher, and then I would just wrap this call in stream, or let's just do it inline, so you can say const watcher equals, and there's a function called stream, lower case stream. And that should be in Effection, right. Do that, and now watcher should be a stream.

JASON: It is.

CHARLES: Okay, now what we can do is, a stream is something that, just in the way you use an async iterable to get an async iterator, you use a stream to get a subscription. We want to say const subscription equals yield star watcher.

JASON: Hardest part about generators is spelling yield.

CHARLES: Yeah, okay. So now you should have a subscription there. It's not a function.

JASON: Just like that?

CHARLES: Just like that. Now subscription should be a subscription.

JASON: And it is.

CHARLES: Okay. We can say yield star subscription.next, just like we would with an async iterator. Let's change line 9 to the src directory; __filename is only going to respond to changes to that one file. Yeah, there you go. So, now just to review here, what we did is we created a watcher, used the watcher to create a stream, and we created an abort signal, which is bound to the current scope and which is going to kill the watcher when it's not needed anymore. And instead of using a sleep to resume our function, we are just waiting for the next change event. And then our function, our call, is going to exit and then our command is going to get torn down.

JASON: So, what we should see here then is: we're calling the watch function, and we're passing in tsx and the name of the file, and then we're saying pay attention to the source directory. And whenever the source directory changes, because that's what this Node watcher does, we have an async iterable that's converted to a stream, the stream goes into a subscription, and we get a .next, and the .next only fires when something happens inside of that stream. So, effectively, it will ignore everything except changes to the source directory, great. We are like five layers deep into new stuff for me, and this is very exciting. What should happen in practice here, then, what we'll see when we start using this, is: right now, if you notice, we're running and it hasn't retriggered again. We haven't done anything that would trigger the subscription. When I update main, when I hit save here, we should see this generate a new log. And it did. And if I come back out here... it's updated. That's really cool that we were able to build that in so few lines of code.
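Putting the whole thing together, the finished watch script is roughly the following sketch; the ./src path, the recursive option, and the import path for the use command resource are assumptions rather than the repo's exact code:

```js
import { watch } from "node:fs/promises";
import { main, call, stream, useAbortSignal } from "effection";
import { useCommand } from "./use-command.js"; // hypothetical path

await main(function* (args) {
  const [command, ...commandArgs] = args;
  while (true) {
    yield* call(function* () {
      // The signal is bound to this scope: when the scope exits, the signal
      // fires and Node's fs watcher shuts itself down.
      const signal = yield* useAbortSignal();

      // fs/promises watch() returns an async iterable of change events;
      // stream() adapts it into an Effection stream, and yielding to the
      // stream gives us a subscription.
      const watcher = stream(watch("./src", { recursive: true, signal }));
      const subscription = yield* watcher;

      // Start (or restart) the command, e.g. `tsx main.ts`.
      yield* useCommand(command, commandArgs);

      // Park until the next change event, then exit this scope, which tears
      // the command down so the loop can start a fresh one.
      yield* subscription.next();
    });
  }
});
```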

CHARLES: Yeah, I mean, those extra lines of code where you deal with cancellation, and halting, and all that stuff, they just add up. You know, the noise actually matters. Just shaving off 10%, 20%, it adds up. What it's really about is the composability, because when I use that command resource on line 12, what's interesting there is that I don't have to worry about how it's going to get torn down. None of that teardown logic has to leak into this. It's not my concern.

JASON: Yeah, I can definitely see how this has compounding benefits as you continue to build, because one of the things that's hardest for me, and I think hardest for teams in general, is that as you start to build things, the implementation details start to sort of leak outside of where they are actually contained. So, as you said, if you've got a cool thing that's async, but in order to use it you have to wrap it with a try catch, now you don't just get to tell somebody, use my thing. You have to say, use my thing, but you have to use it like this, or else you're going to have problems. You have to understand what some of the underlying problems are. Again, it sort of starts that slow ballooning of things being hard to work with, hard to manage, nobody really remembers why it's written that way, so we just append at the bottom of the file instead of actually editing the code. Those sorts of things are really, really challenging. If you know that this command that's being run, when you look in use command, is all self contained, so whatever is happening in here is fine, because it's going to kill itself once it gets out of scope, and we know that anything that's done in here is in a contained scope, suddenly all that cleanup... obviously, you still have to teach the team how to do this, and getting a whole team to switch over to generators is going to be a challenge. People are intimidated by these, and rightly so. There's a lot of mental overhead; these are new concepts. I think generators are one of the more challenging concepts, because with a loop you can sort of picture what it looks like with a physical analog. It is very difficult to picture a generator with any physical analog. As far as I know, there isn't one.

CHARLES: Well, true, although what's the difference between this and async/await? If you were to replace all those yield stars with awaits and you were to replace function star with async, what would be the difference in terms of readability? I think what the generators let you do is precisely the thing we were talking about at the top of the conversation: you're taking callbacks, and you're able to stitch them together in a linear function. Because at the end of the day, starting on line 6, we're using a while loop. The only difference is, it's an infinite loop, which is also kind of crazy. But it's still, what you're saying is, I have a loop, and these things happen in sequence in this loop. And inside that call, those things have to happen in sequence, even though they are totally happening concurrently.

JASON: Yeah. Yeah. Yes. Like, 100%. Question from the chat.

CHARLES: So, here's what I would say with the generators. One, there really is no difference between learning generators and learning async/await. Everyone forgets how hard it was for them to make that leap, but once you make the leap, it's like, oh, it's the wheels on the suitcase. You know. It's the knife with the handle. Obviously, we want to express things in sequence. Same thing when you go to generators. It's just a continuation down that path. Second thing is, the way you get the team to adopt it is give them a taste of something like this, where you're like, I couldn't even imagine how to do that. I would have reached for some, you know, I would have done npm install hairball to get some off the shelf solution that's 1,000 lines of who knows what, versus, oh, my goodness, we did this in 15 lines of JavaScript. Once people get a taste, you know, the taste creates the appetite all by itself.

JASON: Sure. Yeah. I think that's a good point. I think this, and some other stuff that feels intimidating to start, state machines are another one, where once you use a state machine, you go, oh, yeah, I should be using these all the time, but the initial setup is a little challenging, because it's a new way of thinking. It's very different from the, you know, more kind of imperative, I see all of this in a line, and if I didn't write it, it's not happening, right. So, it's kind of nice to get that explicitness; you get the explicitness here, and in something like a state machine, and if we can push through that initial learning curve, then there's a lot of benefit to be had in working this way. So, Linda had a question that I think is a good one. If we're creating and tearing down all these scopes, is there a good way to debug how many scopes I have created, is there a way to see what scopes aren't closing down, or did I miss a step somewhere?

CHARLES: So, that is a really fantastic question. The answer is not currently, in v3 of Effection. There was a way to do it in v2. We would like to bring that back. If we step back to v2 Effection, and really just Effection in general, the key insight is that just like you have a stack that consists of a bunch of stack frames, Effection maintains a tree of stack frames, effectively. So, it takes that linear array, and instead each frame can have children, so you get a tree. And those children in the tree represent the concurrent processes. Yes. Internally, inside Effection, there is an explicit data structure that represents the tree of all the scopes. Now, in v2, we had developed an inspector that was basically a React app that lets you connect to any Effection process and visualize that tree. In v3 we have plans to, we just haven't done it. I think we released it in December, and, you know, this being open source, the priorities of the day carry you where they will. But absolutely, yes, we want to develop the inspector for v3. We have a lot of ideas for how it could go. We want to get it right. We've had a couple different spikes. Some of the people in the community have had a few stabs at it. We haven't settled on one particular approach, but it is very much a priority, because it really makes a huge difference. You know, with promises, a promise is just a glop of code that has a bunch of people that registered callbacks, whereas an Effection operation is always part of a tree. So, you know exactly where it stands. And that's huge information.

JASON: Got it. Yeah. Yeah. I can see why, you know, why you were so excited about this, and I know that Jacob has told me a lot about this, and is also very excited about it. I can see how this sort of, you know, can create a little bit of... oh, wait, I can write good software? Because I think a lot of my at least my experience with software is usually what I'm trying to do is solve a problem, and I know that I'm creating problems as I go, and I just trust myself to come back and fix them. There are very few places where I'm writing software where I'm like I know all of these edge cases, I know I've got this covered 100%, we're going to be fine. Having something like this kind of makes me feel more confident that I can, for the cases that this is for, feel pretty confident that I covered all my bases. That's nice.

CHARLES: Yeah, yeah. No, it is, you know, just to that end, we haven't talked about this explicitly, because what's interesting is I'm actually kind of surprised myself at how few hiccups we've had so far. So, you know, I think that's kind of a small validation right there. The other thing I would add, though, is that because of that, we didn't get to see one of the key benefits, which is that Effection, and structured concurrency in general, is really good at propagating errors. If you've ever seen an unhandled promise rejection

JASON: Let's break it.

CHARLES: Okay.

JASON: We got a few minutes. Let's break it. How do I break this?

CHARLES: Okay, so, what you can do is let's... let's spawn an asynchronous process inside here that if you don't make a change within five seconds, it's going to blow up. Let's say do like yield star and then spawn. So, this is the way you create an asynchronous process in Effection. So, now spawn is like call, where you pass a generator function.

JASON: Do I want spawn from child process or

CHARLES: No, you want spawn from Effection.

JASON: Got it, okay. Sorry, now what do I do?

CHARLES: It's like call, except call is sequential. And this is parallel. So, it's a generator function.

JASON: Generator, right. There's no shorthand for generator functions, is there?

CHARLES: There isn't. I think there should be. There's a TC39 proposal for it. And again, another separate thing, but once you use generators, you don't need async iterables. There is a, what's the word I'm looking for, chasm, a schism, in the JavaScript community, or in the JavaScript runtime, between synchronous and asynchronous. But the thing about generators is that they are unconstrained by the semantics of promise. In other words, remember, who was it, Yoda Bite said, I never thought about comparing async with generators, and that is really apt, because generators are a superset of async/await. Async functions are like generators, but the only thing they can work with is promises. A generator is just like an async function, except instead of awaiting a promise, you can yield to anything.

JASON: Got it.

CHARLES: You can yield to anything. So, the domain of values that you can work with is unlimited. All of that is to say that operations, unlike promises, can resolve synchronously, in the same tick of the run loop, or asynchronously. So, there is no schism to hold in your mind: am I working with async code or sync code, is this an iterable or an async iterable, is this an async-dispose resource, you know, for those living on the cutting edge with explicit resource management, is this an async resource or a sync resource. All of that kind of goes away. So, working with generators really has a lot of practical advantages, too. Anyway, point being, spawn is the concurrent equivalent of call. So, what we're going to do is sleep for three seconds. Again, we have to yield, which is the equivalent of await.

JASON: Okay.

CHARLES: And then let's just throw a new error. Yeah, maybe something like that.

JASON: Okay.

CHARLES: What's interesting

JASON: Theoretically, this should fail any second now.
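For anyone following along, here is a rough sketch of the "time bomb" being described, assuming Effection's main, spawn, and sleep; the file watcher built earlier in the session is elided and only hinted at in the comments.

```ts
import { main, sleep, spawn } from "effection";

await main(function* () {
  // The "time bomb": a concurrent task that blows up after three seconds.
  // Note that spawn comes from Effection, not from child_process.
  yield* spawn(function* () {
    yield* sleep(3000);
    throw new Error("boom");
  });

  // ...the rest of the program (the file watcher built earlier in the
  // session) would continue here; when this scope exits, the spawned
  // task is torn down along with it...
  yield* sleep(10_000); // stand-in for that long-running work
});
```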

CHARLES: It should. No, wait, because the watcher function is not

JASON: Right, it's not watch.

CHARLES: Not being watched itself.

JASON: Who watches the watcher?

CHARLES: That's right.

JASON: Okay, three seconds have passed. And there it died. Here it is.

CHARLES: Yep. And the whole thing... so, what's interesting here is errors in Effection and structured concurrency are always catastrophic. They will maximally crash everything. Which... you think, is that really what I want? And the answer is yes. You want that error to be in your face, so that you either deal with it at a specific point, to say I explicitly catch this error, or... you know what, it crashes your entire process. So, it forces you to deal with errors, which is actually what you want. Because, in Effection, it's impossible for an error to go uncaught, whereas in plain JavaScript you can open up your inspector console and see, oh, I've got all these unhandled promises, or this thing failed or that thing failed.

JASON: Very cool, okay. Is there a specific way, like if I want to handle an error, do I do something special, or kind of a

CHARLES: Put a try/catch on that call. Not there. I would put it around line 7.

JASON: Around line 7, oh, whole thing.

CHARLES: Yeah, put it right there, the whole thing. What's interesting is when that error happens

JASON: Here to here, okay.

CHARLES: Now, there's... yep. Before you do that, let me demonstrate one other thing, because I think this is actually important. If you could unwind that just briefly. What's interesting here is that, let's say you touch that file before three seconds and it restarts... it's actually not going to crash.

JASON: Oh, right, yeah. Because if I... let's see... I will open up my main. And then I'll npm run dev, and I'll just save it.

CHARLES: Right.

JASON: If I do that, I reset the timer.

CHARLES: Yes, but you didn't have to cancel the timer.

JASON: When I stop

CHARLES: Stop for three seconds, it blows up.

JASON: Okay, all right, that's cool. I'm not... we're not manually managing any of that.

CHARLES: Right. Just that spawn, that kind of time bomb that you've put in there, so to speak, is bound to that scope. So, when that scope goes away, so does the time bomb.

JASON: I like it.

CHARLES: Yeah, so, you can catch it. You can catch it by putting a try/catch around that call, and then, you know, you can explicitly deal with it, and it's caught just like it would normally be in JavaScript. The only difference is... what's interesting there is that you have an asynchronous, concurrent process, but you're not actually awaiting it. You're not actually awaiting that. And yet that's not the behavior you get in plain JavaScript. So, in JavaScript you would get an unhandled promise rejection. You'd get that logged out to the console.
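As a sketch of the catching version, assuming, as in the session, that the body of the program runs inside Effection's call so the try/catch can wrap that call; the error from the spawned time bomb then surfaces as an ordinary caught exception rather than crashing the process.

```ts
import { call, main, sleep, spawn } from "effection";

await main(function* () {
  try {
    // As in the session, the work (the watcher plus the time bomb) runs
    // inside a call(), and the try/catch wraps that call.
    yield* call(function* () {
      yield* spawn(function* () {
        yield* sleep(3000);
        throw new Error("boom");
      });
      yield* sleep(10_000); // stand-in for the watcher's long-running work
    });
  } catch (error) {
    // The spawned task's failure lands here as a normally caught error
    // instead of becoming an unhandled promise rejection.
    console.log("caught:", error);
  }
});
```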

JASON: So, in this case, what we should see then is just a log. And I did...

CHARLES: You'd get a log message, yeah, uh huh.

JASON: Let's try that again. Oh, actually, it didn't even crash last time. That's cool.

CHARLES: Yeah, won't crash.

JASON: Look at that. It fails and just reboots itself. Hey, let's try again.

CHARLES: Right.

JASON: Cool! So, this is actually huge, because this means you can make things resilient... you can, without doing anything wild here. Like, a try/catch, I think, is pretty standard.

CHARLES: Right.

JASON: We have a resilient process.

CHARLES: Uh huh. Yeah.

JASON: This isn't ideal, where it continually dies, but we have the opportunity to catch an error, change something, and try again until we don't get the error, or have a maximum number of retries before we... yeah. This is very cool. This is very, very cool.

CHARLES: But again, what's nice is you're able to just use the simple primitives of JavaScript, if statements, while loops, functions, to express your program. Including all of the error recovery.
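To illustrate that point, here is a hedged sketch of error recovery written with nothing but a while loop, try/catch, and a retry cap; flakyTask is invented for the example and is not part of the session's code.

```ts
import { main, sleep, type Operation } from "effection";

// Hypothetical operation that fails at random; not from the session.
function* flakyTask(): Operation<string> {
  yield* sleep(500); // pretend to do some work
  if (Math.random() < 0.5) throw new Error("flaky task failed");
  return "done";
}

await main(function* () {
  let attempts = 0;
  while (true) {
    try {
      let result = yield* flakyTask();
      console.log("succeeded:", result);
      break; // success: stop retrying
    } catch (error) {
      attempts += 1;
      if (attempts >= 3) throw error; // hit the maximum number of retries
      yield* sleep(1000); // back off briefly, then try again
    }
  }
});
```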

JASON: Really cool stuff.

CHARLES: So, in this you start to get into the space of supervisors, where you can have a single process that's managing ten different servers. And if one of them crashes, I want to restart them all, you know, stuff like that. But I want to make sure that they finish handling any requests that they were doing. And, so, these are the building blocks that all of these incredibly complex behaviors can be built on. And by allowing you to reason at this higher level, it just makes what's possible for you that much larger.
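As one possible shape for that, here is a rough, illustrative sketch of a tiny supervisor in the spirit Charles describes; runServer and the port numbers are made up, and it leans on call() as a scope boundary so that when one worker fails, its siblings are torn down with that scope before the error is caught and the group is restarted.

```ts
import { call, main, sleep, spawn, suspend, type Operation } from "effection";

// Hypothetical worker standing in for "run one server"; not from the session.
function* runServer(port: number): Operation<void> {
  console.log(`server listening on ${port}`);
  yield* sleep(1000 + Math.random() * 4000);
  throw new Error(`server on ${port} fell over`); // simulate a crash
}

await main(function* () {
  // Supervisor: if any one server crashes, restart the whole group.
  while (true) {
    try {
      // Each pass of the loop gets its own scope via call(); every worker
      // is spawned into that scope.
      yield* call(function* () {
        for (let port of [3001, 3002, 3003]) {
          yield* spawn(() => runServer(port));
        }
        yield* suspend(); // supervise: wait here until a worker fails
      });
    } catch (error) {
      // When one worker fails, its siblings go down with that scope, and
      // the loop starts a fresh set.
      console.log("a server crashed, restarting all of them:", error);
      yield* sleep(1000); // brief pause before restarting
    }
  }
});
```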

JASON: Very, very cool. Yeah. This is awesome. Okay. So, we are coming up on... we're just about out of time. So, I imagine we probably don't... yeah, we don't have time to actually build anything else.

CHARLES: Okay.

JASON: So, let's instead... for anybody who wants to take this further, what should they do next? Let me start by throwing in another link to the Effection library. Where else should someone go?

CHARLES: Where should we go? Okay, I would say, certainly, read the piece on thinking in Effection, because that will give you the tools... those two pieces that are really up at the top, thinking in Effection and the async Rosetta Stone, those are the most important concepts to have, so that you can then apply them to any problem: how do I solve this in an Effection-y way? Because I know, for me, I definitely came from this explicitly cancelling, explicitly killing, you know, this imperative mode of having to do everything to tear stuff down, or worry about managing asynchronous processes. So, I had been working with Effection for a long time before I realized, oh, you know what, the best way to get rid of a resource is just to make sure that you return from the scope it was living in. And we didn't even cover this: when you spawn a task in Effection, it does have a method called halt, so you can use it to explicitly halt something. But if you'll notice, in all of these examples, we didn't use halt once. We just leaned on the structure of our program. What was in scope, what was between the curlies at any given time. You know, it's alive if it's in there, dead if it's not.
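For reference, the kind of mapping the async Rosetta Stone guide covers looks roughly like this; readJson is a made-up example, and the general idea is that await becomes yield*, an async function becomes a generator function returning an Operation, and a promise-producing call is lifted with Effection's call.

```ts
import { call, type Operation } from "effection";

// async/await version
async function readJson(url: string): Promise<unknown> {
  let response = await fetch(url);
  return await response.json();
}

// Effection version: `yield*` plays the role of `await`, and call() lifts a
// promise-returning function into an operation. Unlike a promise, the
// resulting operation lives and dies with the scope that runs it.
function* readJsonOp(url: string): Operation<unknown> {
  let response = yield* call(() => fetch(url));
  return yield* call(() => response.json());
}
```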

JASON: Got it.

CHARLES: So, those guides, we spent a lot of time on, okay, what are the really core practical concepts and how do I map them onto JavaScript. So, I would start there. And then the next thing is, I would jump into the Discord channel. You know, we have people who have been using it in React, people who have been using it in Node, Deno, people using it for a bunch of different things, people using it for testing... you know, setup and teardown is critical to testing. There's a lot of different uses. Right now we're very much in the stage of: we have the library, the library is very solid, very minimal, like having JavaScript itself. But the ecosystem is very nascent. So, there's like three different ways to use it in React. There are already some frameworks for doing state management and stuff in React. So, you know, I can give a shout out to Starfx, which is basically very similar to Redux, except using... yeah, Starfx. Yep, all of those, that first one is it, sure, right there. So, whatever your use case is, I guarantee you will be able to use structured concurrency, because it is a general advance in programming. Like, it is a new state of the art. And I've mentioned Java and Swift a couple times, you know, as kind of being the advance guard of doing this. This concept is now baked into Java. This concept is baked into Swift. I almost feel like JavaScript was a victim of its own success and its own foresight, in the sense that async/await really landed in like the 2013, 2014 time frame, and this stuff became generally known in like 2018. And, so, by being so ahead of the curve, I feel like we actually lost out. But the point is, this is here, this is ready, and this will absolutely make your programs better. So, jump in, whatever your project is; there's a lot of people working on it, in a bunch of different contexts, and you will connect with someone who is operating in the same context.

JASON: Very, very cool. All right. So, Charles, this was a lot of fun. I feel like I have a lot of new information like jammed into my brain right now, and I need to go write some code to get any of it to stick. Any parting words for everyone before I tear us down here?

CHARLES: Yes. I would absolutely say that the key to structured concurrency is bringing the code in line with your intuition. When Dijkstra said that program execution needs to follow program text, what he's really telling you there is that if you're confused, if what you're seeing is not matching your intuition, then there might be something wrong with the actual programming language runtime, not with your head. And, so, what we need to do is really ask ourselves, always, always, always, you know, what does my intuition tell me? Is this program executing in line with what I can intuit about it by reading the program text? Structured programming, and by extension structured concurrency, is really trying to bring program execution in line with your intuition. So, you don't need a college degree or, you know, two decades of studying computer science; you can really lean on your own intuition. And that's what I love so much about Effection and structured concurrency in general: it lets me lean on my intuition.

JASON: I love that. I am going to be thinking about that as I continue writing code. That's a good one. All right. Well, Charles, I'm going to send you backstage, and I'm going to get everybody off to their next adventure here. This show, like every show, has been live captioned. We've had Ashly here from White Coat Captioning doing all of that for us. Thank you so much, Ashly, for being here. And that has been provided through the support of our sponsors, Nx and Netlify, who are kicking in to make this show more accessible to more people. Thank you very much to them. Click on their websites and tell them I sent you. Make sure you are checking out the schedule while you're checking things out on the Internet, because there is a lot of fun stuff coming up. Next week we are going to learn about Val Town, which is serverless and a whole bunch of fun stuff that I only marginally understand, and I'm very excited to learn about it. And then the week after that, I'm going to be in London, so if you are in London, I'm going to CityJS. There won't be a show that week, so, if you have the chance, either tune into CityJS or come say hi. All right, y'all, this has been a ton of fun. I'm very excited to go and learn... or go and play with all these new things that we've learned. We will see you all next time.

Closed captioning and more are made possible by our sponsors: