
Control Apps with Your Thoughts

We were promised mind-controlled apps in the future — and with Charlie Gerard the future is now! In this episode, she teaches us about neurotech by building a thought-controlled app.


Full Transcript


Captions provided by White Coat Captioning. Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

JASON LENGSTORF: Hello, everyone. And, welcome to another episode of Learn With Jason. Today, we’re bringing back to the show Charlie Gerard. Thank you so much for coming back.

CHARLIE GERARD: I’m good. Thank you so much for bringing me back.

JASON LENGSTORF: I am so ready. I cannot wait. Like, this is something that I’ve been looking forward to so much, because you have consistently been one of my favorite devs to watch, because you just come up with the most off-the-wall creative stuff.
I remember seeing you do the on-stage Street Fighter demo, right? And even before then, like, way back in the day, I feel like I saw you do a bunch of facial recognition stuff where you would make faces to control the app, and just so many amazing things that I’ve seen you create that are mind-boggling to me. This is a space that I never go into. So I cannot wait.
So, for those of us who aren’t familiar with your work and haven’t been Twitter-stalking you like I have:

I’d love to hear your background and how did you get into doing these more what feel to me like science experiments, right?

CHARLIE GERARD: Kind of. So, I studied code in a boot camp, so I don’t have a CS degree. My background is in marketing and advertising, so I think this is where I got interested in the more creative side of the digital world, because when you’re in advertising, the more creative your idea is, the better your campaign is going to work.

But then, on the job, I didn’t find it super fulfilling, so I switched to a boot camp to learn to code, and then I actually started mixing that creative side with being able to build the ideas that I had. It’s frustrating if you have an idea but you don’t have the coding skills to build it, so you have to wait for somebody else to build it, or if they don’t believe in your idea, they’re not going to build it. I love the power it gives you to come up with an idea and then build it and share it.

If people don’t like it or whatever, it doesn’t matter; you still got to build it yourself and experiment. As I discovered how powerful it was to be able to bring ideas to life, I started looking into more human-computer interaction, and instead of just using JavaScript for websites, I realized that JavaScript can be on more platforms. You don’t have to use things the way people thought you’d use them. You don’t have to use a keyboard to navigate a website, and if you buy a device in a store and it has an API, you don’t have to use it the way it was defined. With your skills as a dev, you can build your own way of interacting with it, and I really got into that, and I still do now. It’s been, like, six years that I’ve been building useless things.

It’s fun. I find them useful. But, yeah, that’s kind of, like, the journey. Boot camp and then everything.

JASON LENGSTORF: I think that’s something I’ve always found inspiring about your work. You have a quote that I’ve used a bunch of times and that I’ve seen shared around: “useless is not worthless.” And I feel like that’s such an important thing. One of the things that I always encourage people to do, when they’re getting into code or they’re trying to figure out what to do, is to build something that makes them smile or makes them laugh, and do that as a way of exploring. I’m much more likely to finish something if I’m having fun while I’m doing it.

CHARLIE GERARD: Exactly. You keep your passion. I think sometimes in tech there’s a lot of pressure, or, you know, you always have comments like, oh, what do you do? Move boxes around, or things like that? Sometimes that can kind of wear you down. And if you use this same code to actually do something fun, something that’s going to excite you... I don’t know, if I wasn’t doing this, I wouldn’t be as passionate. I love my job and coding. If you get to explore beyond what you do on the job, it’s super exciting. The amount you can explore and learn, you can mix so many things with code. You don’t have to only build an app or a website; there’s code in research, and you can look into AR and VR. You can explore spaces that sometimes feel reserved for only certain people.

JASON LENGSTORF: What’s really exciting about it, too, is even if you build something that is functionally useless, something that doesn’t really serve a purpose, that’s not going to make you money or make your job better, you’re still getting practice making concepts fit together. You’re still getting practice thinking about how to make code work, and that is a skill that will apply to your day job or to your next job or anything like that, and I think that’s such a cool thing.

CHARLIE GERARD: You never know when you’re going to need certain skills or certain knowledge. There were many times when I was building something and it looked useless to some people at the time, but it taught me some things about AR and VR, and at work, when it came time that we needed somebody with AR and VR experience, I knew enough of the basics to say, oh no, we’re not going to be able to build that. I don’t really believe that you only have to learn what you’re going to use now. You have no idea what you’re going to do in five years. I think it’s interesting. I just like learning, and for that, I just build stuff.

JASON LENGSTORF: And, like I said, it’s so inspirational to watch the string of demos that you’ve put together and to see how many creative and just amazing things you can do by thinking laterally about the way that computers work. I love that idea of taking something and not using it for its intended purpose, and just seeing what you can build by asking: what if I turn this thing sideways? What does it do?

And I love that. It’s so much fun to watch.
Speaking of building things, let’s maybe move over and start playing around with some code today. Before we do that, a quick shout-out to our sponsors. We have live captioning for the show, so if you want to follow along and read along, you can do so... we’ve got the show... uh oh. Well, the embed isn’t working, but the live captions are, so you can see the live captions here. These are from White Coat Captioning. They are amazing. A huge thank you to Netlify, Fauna, and Auth0.
We’re looking at Charlie’s computer and I am on VS Code Live Share so I am still able to type on Charlie’s screen. But we had some issues where I was going to write code and we were going to move it to Charlie’s computer. Today, we’re doing literal mind control. This is, like, a dream come true for me. Always wanted to control things with my thoughts. But one of the things that’s required for that is a piece of hardware and only Charlie has that piece of hardware and it’s very expensive and I didn’t want to buy it for one episode.

So we’re going to work on your computer today.

CHARLIE GERARD: Yes. I shared the whole thing, so all my secrets are out. It should be fine. I made sure that, you know, no shame is shown. But, yeah, the plan that I had for today: I probably should start by talking a bit about the headset itself and what it can do. Then I want to talk a little bit about the different APIs, like the data you can use coming from the headset, and there are a few demos that I can kind of drive and let you write some code, and we’re going to be able to build a few different experiments and see how it goes.


CHARLIE GERARD: Just as a reminder, this is, like, hardware and this is live coding, so there’s a chance that nothing will work. But in this case, we’ll just have to chat. That’s fine, as well.
All right. Let’s try. Let’s see what happens.

JASON LENGSTORF: Let’s do it. Holy crap, Chris has just subscribed for 12 months. I think that might be the first year subscription. Dang! Thank you so much. That is very exciting.
All right. So, if you haven’t already, go follow Charlie on Twitter and what is the name of this hardware, so I can pull it up while you talk about it?

CHARLIE GERARD: It’s the Notion. Do you want me to spell that? The one that you can pre-order is the new version, and the one I have is the older one. They’re constantly upgrading certain things. There are software updates quite regularly because the headset runs its own OS. In the hardware, from what I know... actually, I didn’t re-ask them what was new in the second one. There’s a battery indicator that I actually don’t have in mine. There might be a battery that lasts a lot longer. Otherwise, I haven’t really checked what’s new.
So, the headset, it looks like that. The design is much better than the other headsets I’ve tried before. When I wear it, I don’t look as silly as I have on stage sometimes.

It has 10 different electrodes around the head, and it runs its own OS. So, instead of streaming the data to the computer, where you could lose some data from your brain waves, it actually does the computation on the headset and only sends to the computer when there’s an event you subscribed to. If you want to subscribe to your state of calm or focus, or a thought you trained, you don’t stream the raw data directly to the computer. The computation is done on the device, and what it sends to the computer is the probability of how calm or focused you are. Or, if you subscribed to a thought, like the one I did, thinking about tapping your right foot, the whole data coming from the electrodes goes through an algorithm on the headset, and what’s sent to the computer is only the probability of me thinking about my right foot.
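The event model Charlie describes boils down to a simple contract: you subscribe to a metric, and the headset emits probability payloads. Here is a minimal sketch of that shape, using a mock in place of the real headset client (the actual SDK exposes `notion.calm().subscribe(...)`, as used later in the episode; the mock and its single emitted sample are illustrative):

```javascript
// Mock stand-in for the headset client: the device does the signal
// processing on its own OS and only emits probability events to subscribers.
function createMockNotion() {
  return {
    calm() {
      return {
        subscribe(callback) {
          // The real device streams samples continuously; here we emit one
          // with the same shape: a probability between 0 and 1.
          callback({ probability: 0.42 });
          return { unsubscribe() {} };
        },
      };
    },
  };
}

const notion = createMockNotion();
let lastProbability = null;
const sub = notion.calm().subscribe((calm) => {
  lastProbability = calm.probability; // 0 = not calm at all, 1 = fully calm
});
sub.unsubscribe();
```

The key point is that the raw electrode data never reaches the subscriber; only the computed probability does.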

JASON LENGSTORF: So, I have a couple, like, baseline questions, because this is all very new to me. When you say you are training a thought, there’s, like, a setting, I assume, where you’re in training mode and you just think a thought. You can think about anything, and when you successfully think that thought, you’re like, that was the thought I meant to think, and then the band, like, continually...

CHARLIE GERARD: It’s a little bit different. When you buy the headset, you have to create an account on the platform, and when you go into training mode, you have a list of thoughts that you can train, that they know are easier to... how am I going to say that? Easier to detect. It’s easier to detect the pattern of a thought when it’s about a movement of the body, an action. A few of the thoughts that are available are thinking about pinching the fingers on your right hand or your left hand, tapping your right foot or your left foot, or thinking about pushing something forward. It’s all about movement, because it’s easier to detect given where the electrodes are placed around the skull, over the area of the brain that is responsible for movement.

JASON LENGSTORF: And when you’re thinking about it, if it’s like, I’m going to pinch my fingers here I’m trying to move into the camera and it’s not working. Are you actually thinking and doing the movement at the same time?

CHARLIE GERARD: I’m not doing the movement, I’m thinking about doing the movement. I feel like if you do the movement, you aren’t really thinking about doing the movement.

JASON LENGSTORF: It’s such a meta conversation.

CHARLIE GERARD: But when I trained, I tried to sit still, and I tried to be in an environment that doesn’t disturb the way I think. That’s really hard, because you try to focus on only that thought. It records the data from all of the electrodes around the piece of hardware, and then it’s trying to find a pattern that’s repeated when you think about that thought. So when you go through the training, you go through steps. You start thinking about your right foot for a few seconds, and then there’s a pause and the screen tells you to think about nothing, and you do that, like, 30 times, I think it’s 30. So you have 30 rounds of repeating the thought of your right foot. And you have feedback as you go through the training. You have, kind of, like, a bar, and as you repeat that thought, you see if you can fill the full bar. It’s hard, because if you’re in an environment that is noisy, it’s really hard for your brain to focus, right? So you can train it a few times, and sometimes there are thoughts that are easier to train than others.
For example, I found that thinking about tapping my right foot was easier than thinking about pushing something in space. Because my right foot, like, I know where my foot is. I know the sensation of tapping it on the floor, so I can think of it as if I was trying to move my foot, but without doing it.

JASON LENGSTORF: I’m following you. I get this.

CHARLIE GERARD: Yeah. Whereas pushing, it’s like, I don’t know. I would want to have more feedback. It’s weird. You can try different things, but I think there’s a list of different thoughts, and if you don’t like the thoughts that are on the list, you have access to raw data to train your own algorithms and your own stuff, but then you have to do the whole calculation yourself. You have to take the data and pass it through filters. The data is noisy when you get it raw, and you have to know about digital signal processing. I was looking into that over the weekend, and I was so frustrated with myself, because there’s something I’m building where I need more knowledge about digital signal processing to get there, and I was like, why don’t I know this already? Yeah, it’s interesting. So, if you don’t like the thoughts that are pre-defined as the ones that are easier to train, you can, if you want, access raw data and do the whole calculation and machine learning yourself. Which is cool, because other devices that I’ve tried before didn’t give you access to raw data unless you paid, like, a monthly subscription. I’m not a researcher, so I only use it when I have the time; it’s not worth it for me to actually pay for this, so I couldn’t play with the raw data. It was kind of limited.

JASON LENGSTORF: Yeah. Yeah. So, this just sounds, like, amazing. I’m so excited about so many possibilities here. You mentioned, at one point, you can look for a state of calm. And I’m just imagining that I hook up my computer to where I have to put on this device and it will only let me open up Twitter when I’m calm.

CHARLIE GERARD: If it detects that you’re focused enough, it switches off your notifications so you don’t get disturbed. I think there is a VS Code extension. They are looking into what you can do with the states of calm and focus. For calm and focus, you’re looking at the brain waves, and there’s probably more... I don’t know. The activity of the brain waves, in general, is probably more spiky if you’re focused, and if you’re calm, it’s probably quieter or something. You can get that without doing the training, whereas a thought is more personal.
So, yeah. With the state of calm or focus, I think they were doing stuff around picking music based on your state of mind. If you’re calm, then you can, you know, play some music that’s more ambient, and... I mean, you can really do what you want. Once you get access to the data, you can build whatever you want with it, and in JavaScript, yay!

JASON LENGSTORF: I’m ready. I want to see this in action. This is so cool and I would love to like, let’s do something, right?

Where should we start? What should I do first?

CHARLIE GERARD: So, I just put together a basic app to start with, because to be able to use the headset and have access to the data, you need to log in. For privacy reasons, they only want people to have access to the data if they’re logged into their account. Other headsets that I’ve used transmitted the data via Bluetooth to the computer; if you listened to the Bluetooth devices around you, you could listen to my brain data. It’s not very private. I set up the login form and stuff like that so we won’t have to go through that.
That’s why, on the page on the right, you see “boop.” I checked that I was logged in. The founders of the company wanted to make a headset where the focus was on developers being able to build things, so the main API is available in JavaScript, and you can use it on the web with React, or in Node as well. As JavaScript is everywhere now, you could do websites, or stuff with Arduino and things like that. So, to start, I thought that we could look at how to hook up the calm API to do something on the screen with the state of calm.
So, you can access the files as well, or you can only type on the...

JASON LENGSTORF: Looks like I can get anywhere. I opened up “app.js.”

CHARLIE GERARD: So, here, if you scroll down a little bit... the main state of the app... actually, I have to scroll down. We’re just doing the loading. I’m just going to talk through the code a little bit. There are a few useEffect hooks. One checks if there’s a device ID; that’s the whole login part. Here, there’s a user. If I’m logged in, then we navigate to the “/” route. That way, we’re sure that on every page we visit, we have access to the user and the Notion, and have access to the data. So, that’s basic setup stuff.
Here, we have the few routes that we have. We have the login that I already did, and in case something goes wrong, I have logout. Because I think that’s what happened last time, when we tried to set up: I couldn’t log out, because I didn’t have the logout route. That was my fault.
So, here we have the calm component, and at the moment, that’s just “boop,” and we’re going to go implement it now. We pass in the Notion and the user. So, if we go to “pages” and enter the “calm” file, this is where we’re going to write some code.
So, let me look at what I have I cheated a little bit. On my other computer, I have the code that I want you to write.

So, the demo that I wanted to build for this was using react-three-fiber. I don’t know if you’ve used it before. It’s using three.js, so it’s doing some 3D. The result I want to get to at the end is that there are going to be some bubbles moving on the screen, and when I wear the headset, I want the motion to slow down if I manage to calm down.
So, at the top of the file, I required a few things that I need for that. The first thing that we can do, before the return, is use the useState hook for the state of calm. You can have “calm” and “setCalm,” and useState(10). We’re just going to use a random value to start with. And then after that, you can have a useEffect hook where we’re going to start our subscription to the calm event. So, inside there, you can define a variable that’s... actually, oohhh. I didn’t pass calm inside the useEffect.

JASON LENGSTORF: I can leave it out.

CHARLIE GERARD: What I passed was the user and Notion.

JASON LENGSTORF: Oohh, yeah.

CHARLIE GERARD: You need to pass in Notion mainly.
So, inside there, we can declare a variable. I called it “subscription.” And, here, you have to write “notion.calm” with the brackets, and then “.subscribe,” and then you’ll have a callback with “calm.”

JASON LENGSTORF: Wait. I have to do something. Because this is going to get weird, right?

CHARLIE GERARD: Is it going to get weird?

JASON LENGSTORF: We’ve got the variables in different scopes.

CHARLIE GERARD: Yes, you’re right.

JASON LENGSTORF: We’ll call it “sub” and then we’ll do something with it in here.

CHARLIE GERARD: So, here, you have to access the probability. You can do “sub.probability,” and usually that comes back as a number between zero and 1 for the probability of you being calm. I multiply it by 1,000 because I want to pass it later into my animation, and if it’s 0.1, we won’t see anything happening. So, you can store that in a variable.

JASON LENGSTORF: Do you want me to call it something?

CHARLIE GERARD: I called it “calmScore.” And then I set that to the state.
And, inside the return, what we’re going to do is... I cheated as well. If you go to the “test.js” file, could you copy just the first part inside the brackets? The other parts are for later. That’s the animation for... yeah. So, if you just copy... yep, into here, and you put it into the “calm.js” file.
It’s the classic example of react-three-fiber: we have a canvas where it sets a few properties, and we have an ambient light. If you want to have a look at it, it’s in the “components” folder, “swarm.js.” I did not build that myself; it’s part of the examples of react-three-fiber. And we pass into it the data that we get back from the headset.
So, I think I’m just looking at the stream to make sure that you see. In the data here, all I’m doing is, the speed property, like, the speed variable here, I’m dividing it by “data,” because before, the normal animation was divided by 200.
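The arithmetic here is simple enough to sketch: the 0-to-1 probability is scaled up by 1,000 so it works as a divisor, and the swarm’s speed becomes the base value divided by that score, so the calmer you are, the slower the bubbles. (The function names below are illustrative, not from the actual component.)

```javascript
// Scale the 0–1 calm probability so it registers in the animation;
// a raw 0.1 would barely be visible as a divisor.
function toCalmScore(probability) {
  return probability * 1000;
}

// The swarm originally divided its speed by a constant 200; swapping the
// constant for the calm score means higher calm → slower motion.
function bubbleSpeed(base, calmScore) {
  return base / calmScore;
}

console.log(toCalmScore(0.5));      // 500
console.log(bubbleSpeed(100, 500)); // 0.2
```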


CHARLIE GERARD: You could do whatever. You could apply it to the “X” factor. X Factor, ha ha.

You could apply it to whatever you want. If you wanted, you could create more bubbles when you’re calm or whatever.

JASON LENGSTORF: So, just to kind of talk through the way I understand this code. What’s going to happen here is the calmer you are, the slower everything’s going to move on screen.

CHARLIE GERARD: That’s the goal. Now, is it going to happen? I don’t know.

So, yeah, if we just go back... I mean, if you’re interested, I can push the code later if you want to have more of a look. I will definitely not explain all of the things that are happening here.

JASON LENGSTORF: When we get into math functions, I’m immediately, like, I trust you.

CHARLIE GERARD: We got more work to do.

If we go back to “calm.js,” when we render the swarm component, the data we pass is “calm,” from the state before. So, I just turned on the device. I might have to wait a little bit, because it has to... let me just... you probably will see that, which is fine. Okay. So, here is... can you see that on the screen? Yeah, it’s coming. Here is the console that you have access to when you have the device, and you can see, on the left here, that it’s starting the OS. You have to wait a little bit for it to start, and then it will give me a battery percentage and stuff like that.
If we do the training, for example... I trained a few different things, but my right foot is the best. Okay, maybe I have to wait. You have access to a few different things you can do in the console if you don’t want to build. You have an activity log, which here looks empty. You have access to the signal quality, and if you go to the brain waves, you can actually have a live visualization of the brain waves, or the frequency. But I forgot what that is, so let’s not talk about it.

JASON LENGSTORF: So, a couple quick things while we’re waiting for this to start up. First of all, thank you very much, [Indiscernible], for the subscription. Enjoy the new emotes. We have a rubber Corgi in there, which I’m just absolutely pleased about. We’ve also got the Stream Blitz logo, which triggers the sound effects that aren’t working... oh, they’re working.

CHARLIE GERARD: I see the video, but I put the sound down.

JASON LENGSTORF: Sure. Sure. Sure. There we go. They’re back. I don’t know why it wasn’t working.
Also, there was a question: What are we doing here? We’re doing literal mind control.

What Charlie just put on is a hardware device that has, what was it, electrodes? They are now at different points around her skull, reading her thoughts. And, did it just finish firing up?

CHARLIE GERARD: Yes. I have it on.


CHARLIE GERARD: I’m warming it up.

JASON LENGSTORF: So now we get the exciting part where we literally get to look inside Charlie’s head.

CHARLIE GERARD: Well, I don’t know if there’s much happening right now.

All right. I’m going to say what I hope is going to happen, and then we’ll see. When I run “yarn start,” it’s going to update the page; we should not see “boop” anymore, we should see bubbles. What I want is to try to calm down and see the bubbles slow down. If they don’t, then... well, we’ll see. Oh, actually, if you go to line 11, you have to call “calm,” so you have to...




CHARLIE GERARD: Okay. So, I said that. And now, if I just do “yarn start” and have my... oh, not this one. So, that should update that. I can hide this. If there’s no error, I should see bubbles. And...

JASON LENGSTORF: Oh, no. Okay. I broke it. All right.

CHARLIE GERARD: So, I think we don’t have the Notion.

JASON LENGSTORF: Okay. So, Notion should have gone into “calm.” App.js… there’s Notion.

CHARLIE GERARD: You have Notion, because it says “cannot read property calm.” The one we’re trying to access is on the Notion. I know why: we’re in a useEffect. The first time you load, there’s no user. What we have to do is, if there’s no user or no Notion, just return. Yes. I’m going to save that. Let’s see what happens.
Okay. So, I have bubbles. And... I don’t have bubbles anymore. I think there’s an issue with the probability, so I don’t know if I calculated it right. What I did before was multiply by 1,000. I think the issue is...
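The guard-clause fix can be sketched outside of React: on the first render, the user and the Notion client don’t exist yet, so the effect bails out; once they do, it subscribes and stores the scaled score. (calmEffect, setCalm, and the mock client below are illustrative stand-ins, not the actual component code.)

```javascript
// The body of the effect: bail out until both user and notion exist.
function calmEffect(user, notion, setCalm) {
  if (!user || !notion) return; // first render: nothing to subscribe to yet
  const subscription = notion.calm().subscribe((calm) => {
    setCalm(calm.probability * 1000); // store the scaled calm score
  });
  return () => subscription.unsubscribe(); // cleanup for unmount
}

// Mock client, so the guard can be exercised without the headset:
const mockNotion = {
  calm: () => ({
    subscribe(cb) {
      cb({ probability: 0.25 });
      return { unsubscribe() {} };
    },
  }),
};

let calmState = null;
calmEffect(null, null, (v) => { calmState = v; });            // guard trips: stays null
calmEffect({ id: 1 }, mockNotion, (v) => { calmState = v; }); // sets 250
```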

JASON LENGSTORF: Should we console log it or something?

CHARLIE GERARD: Do you mind logging the calm score?

JASON LENGSTORF: Do you want it in the use effect?

CHARLIE GERARD: Yeah. I want to see if I get it because this morning I tried, and I had an issue with it returning zero and…whoa! Okay. So, it loads and then it’s zero.


CHARLIE GERARD: Why... maybe because I’m not calm at all.

So, it’s like, zero calm.
I had an issue this morning as well. If it doesn’t work, we can move on to the rest. But what we can do: can you console log “sub” instead of “calmScore”? Because “sub” is our object with everything related to... okay. So, let’s have a look quickly. That’s what I thought; this morning, I had the same issue. And can you change “calm” to “focus”? Instead of calling “calm,” can you call “focus”? I want to see... so, the problem is the same.
All right. So… let me think a second. A way that I could check is to check here. Yeah, I get nothing. So...

JASON LENGSTORF: Does that mean it’s not reading from the hardware?

CHARLIE GERARD: It means that for the “calm,” it’s not getting it. But what we can do is move on to the thought training... not training, but the thought prediction, because I did it this morning and that worked better. So, usually what would happen is that you would get... wow, that’s really small. What you would get...

JASON LENGSTORF: We can hard-code this, right? If I take a 0.5 and just always return... oh, wait. Actually, we’d just set it to 500, because that would be times 1,000.

CHARLIE GERARD: So if you wanted to do that, it would slow it down. That’s what I wanted at the end. Yay! Look! It does it.

Yeah. So, usually it comes back, and maybe it’s the subscription that I didn’t do properly. But I had issues with that particular subscription this morning as well, so the calm API had some problems. But the thought one worked better, so let’s try that, and if it doesn’t work, then there’s a full-on problem.

Let’s move on. You should get a probability that comes back, and the higher it is, the more calm you are. If it comes back as 0.2, you’re not calm at all.

JASON LENGSTORF: I got it. Thank you for the raid, Visual Studio, not expected to show up. Glad you’re here. Thank you for the bits, [Indiscernible]. Appreciate it.
So, if I want to do the thoughts

CHARLIE GERARD: So, we’re going to create another file for that, in the “pages” folder, just to be able to, like, keep samples if people want them after... you probably can’t create...


CHARLIE GERARD: I’m going to remove the headset and make sure it’s still charged. So, what they call it is Kinesis. You can call the file however you want, but the API is going to be called Kinesis. I’m just trying to get my cable [Away from mic].
So, yay, you created it. Inside the Kinesis file that we have here, what we’re going to do is going to be very similar. We’re going to have a state. I called it “thought” and “setThought.” You can call it however you want. Brain set.

JASON LENGSTORF: And useEffect, as well?

CHARLIE GERARD: Yes. It’s going to be very, very similar.


CHARLIE GERARD: And useState(0).

JASON LENGSTORF: useEffect. Am I going to pass in “notion” and “user” again?

CHARLIE GERARD: Inside there. In the useEffect, we need the same “if” statement that checks if there’s a user or a Notion; we’ll return if there’s not. Then, we have another subscription. You can call it however you want. Actually, you know...

JASON LENGSTORF: I think if we were going to ship this to production, we would need to return a function that would unsubscribe.

CHARLIE GERARD: I have it in my code samples, but we don’t actually need to unsubscribe. Because we’re going to hook that component to just another route, and I don’t know if it unsubscribes you by default. Yes, probably, because it’s not... let’s add the unsubscribe, just in case.
You can create a variable, and we’re going to write “notion.kinesis.” And inside, we’re going to pass a string that says “rightFoot,” with a capital “F.” Yay!

JASON LENGSTORF: Sorry, I’m getting ahead of myself here.
And then for this, would it be, like, “subscription.unsubscribe”?

CHARLIE GERARD: Yes. And inside the subscription of the Kinesis, we have an “intent.” So, yeah. And I just want to console log “intent” for now, because there are two ways: you can use Kinesis or Predictions. Yes, console log “intent” for now. We need to return something.
So, if you go to the “test.js” file, you can return that, if you want.
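Putting this exchange together: the kinesis subscription follows the same shape as calm, but takes the trained thought’s label (“rightFoot”) and emits an intent when the headset detects that thought, and the cleanup Jason mentioned is the returned unsubscribe. A sketch against a mock client (the real call is `notion.kinesis("rightFoot")` on the headset client; the mock and its single emitted intent are illustrative):

```javascript
// Mock stand-in for the headset client's kinesis API: the device emits an
// "intent" event when it detects the trained thought.
const mockNotion = {
  kinesis(label) {
    return {
      subscribe(callback) {
        callback({ label, probability: 0.9 }); // simulated detection
        return { unsubscribe() {} };
      },
    };
  },
};

let lastIntent = null;
const subscription = mockNotion.kinesis("rightFoot").subscribe((intent) => {
  lastIntent = intent; // in the demo, this is what pushes the cube in space
});

// The cleanup you would return from the useEffect:
subscription.unsubscribe();
```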

JASON LENGSTORF: I figured since we were console logging, do you want me to switch it out?

CHARLIE GERARD: First, let’s go to app.js, because we’re going to add that component to a route, the same way we did “calm.” We want to do the same... where are you?

JASON LENGSTORF: I’m at the top. I’ll head down to

CHARLIE GERARD: Below “calm.” And then path “kinesis.” At the top, on line 18, change that. If we save, that should automatically...

JASON LENGSTORF: I broke it. I forgot to save Kinesis, first, so you’ll just have to refresh the page.

CHARLIE GERARD: Well, it should oohh.

JASON LENGSTORF: Oh... oohhh. Oh, I screwed it up because you’re doing a named import, so let me just fix that.


JASON LENGSTORF: Okay. That should work now.

CHARLIE GERARD: But what we want... so, the first thing here... what was I doing? Oh, yeah. So, if you go to test.js, there’s the function “Box” and then the thing that you have to render. What I wanted to do for that demo was have just a normal cube, and then if I think about my right foot, it should be pushed in space. It should, like, go further. So, we want to render the canvas here, with the box and the lights, and the box is actually the function you’re copying here.

JASON LENGSTORF: So I’ll get this function in. And then

CHARLIE GERARD: I think I put it outside, but we’re see.

JASON LENGSTORF: Why aren’t you uncommenting? There it goes. Okay. And then I’m going to render.



CHARLIE GERARD: Instead of yeah, okay. Yeah, there’s just a few, little

JASON LENGSTORF: All right. Let auto formatting do its thing. Did it console log for us?

CHARLIE GERARD: Oh, we need useRef, useFrame and Canvas from react-three-fiber. Let me put the headset back on.

JASON LENGSTORF: So, Dan is asking about live collab in VS Code. This is VS Code Live Share. She sent me a link, let me authenticate, and now that I’m authenticated, I have a VS Code editor open that is pulling her files, so I’m editing her local files in this VS Code share. It’ll share terminals. I can pull up the localhost on my machine. It’s really, really fancy stuff. It’s a very helpful feature for remote teams and, just being able to pair…Leslie’s saying in the chat, it’s awesome for pairing.

CHARLIE GERARD: Leslie! Hello!
So, we need to fix a few different things and also, if anybody has more questions, feel free because the device turned off because it was a bit hot so I need to wait a few minutes for it to cool down. That’s one of the


JASON LENGSTORF: I’m missing pieces. So, I need useRef?

CHARLIE GERARD: Yes. And “useFrame.” Where does that come from? Let me see…“useFrame” is from react-three-fiber, and Canvas, as well.



JASON LENGSTORF: Do I need to bring in Three.js?

CHARLIE GERARD: I think it should be all in. All right. So, the device is still a bit hot, so what I was thinking…we have the cube, well, now there’s no thought coming because the device is turned off. As I wanted to do two demos with the thought, I thought we could go and build the second one, and I think it’s a bit less hot now. By the time we build the second one, we’ll be able to test both at the same time.


CHARLIE GERARD: Because for that experiment, I just want to show that if you subscribe to one thought, you can create whatever experiment you want. So, in this case, I want to be able to push the cube in space and, by the way, at the moment we’re just console logging “intent,” but we’re not using that to impact the position of the box, so we also need to actually do that. And I think what you can access on “intent” is “.probability.” I’ll have to see once we’re actually doing it. If it is…huh. “Intent.probability.” We can set that to the state.

JASON LENGSTORF: Is it not oh, that’s right, because we don’t have it turned on right now. So, do we need to multiply that by anything?

CHARLIE GERARD: Either…no, because, I don’t know if you ever did Three.js before. I think when you do rotation, you don’t want it to be 500 because it would be super fast. Usually in the rotations, you do 0.5. If the probability’s over 0.9, set the state.

JASON LENGSTORF: Oh, okay. So so, I should

CHARLIE GERARD: Actually…wait. This is where I’m going to get…this is where I might be wrong, because there’s two things. Either you can call the Kinesis API, which I think might only be triggered by itself if it’s over 0.9. So, either we can call “setThought” directly here when we get an event, because it means that the headset has detected that we really thought about my right foot, so it just triggers once. Or we have access to the probability, but that might be coming from the Predictions API. So, it might not make sense here because…let’s just…I don’t think you’ll have access to the probability property on “intent” because I think Kinesis will only trigger if it detected something. We can call “set” here directly. Yeah, we set thought to whatever state we declared at the top…what am I saying? I don’t actually know anymore.

I’m confused in my own code sample. How did I do it?

JASON LENGSTORF: We can go, step by step, and log stuff out, too.

CHARLIE GERARD: We can say “useState(false).” And then we set thought to “true.” And inside the box, we can say if the thought is true, then change the z position. Does that make sense?

JASON LENGSTORF: Uh. Okay. Help me get through this subscription part. I get back an “intent.” Is the “intent” going to be a true/false?

CHARLIE GERARD: It will not, but I don’t think, at that moment, we care about the properties inside the “intent” object. What we care about is that it got triggered, and if it got triggered, we can set thought to “true.” And then in our box function, we can say that…actually, this is where I had it, inside “useFrame.” I had “if (active),” but it should be “thought.”
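The state logic being described here can be sketched as pure helpers, separate from the React wiring on screen. Names and the step size are illustrative, not the exact code from the stream:

```javascript
// Pure sketch of the logic above: any Kinesis intent flips `thought`
// to true, and while it is true, each animation frame pushes the box
// further away on the z axis.
function onIntent(state) {
  return { ...state, thought: true };
}

function onFrame(state, step = 0.1) {
  if (!state.thought) return state;
  // Minusing the z position pushes the cube further in space.
  return { ...state, z: state.z - step };
}

let state = { thought: false, z: 0 };
state = onIntent(state); // headset detected "rightFoot"
state = onFrame(state);  // next animation frame
state = onFrame(state);  // cube keeps receding while thought is true
```

In the real component, `onIntent` corresponds to calling `setThought(true)` in the subscription callback, and `onFrame` to the body of react-three-fiber’s `useFrame` hook.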

JASON LENGSTORF: Oh, wait. I broke something. Which line am I going to?


JASON LENGSTORF: “If active” so I would set that to “thought”?

CHARLIE GERARD: Yes. If we detected that this is true, then the z position…I’m minusing it, so I’m pushing it further in space.

JASON LENGSTORF: So basically what we should see is when you think about your right foot, it will make the box smaller?

CHARLIE GERARD: Yes. It’s weird to say. “Because I think about my right foot all the time.”

Let’s just…I’m going to turn this back on because it’s a bit less warm. So, I think in my version of the headset, I don’t get a state when the battery gets too hot, so sometimes it turns off. I can feel it on the headset, but I don’t actually get a notification, and I think in their new version they’re working on giving you some kind of notification if it gets too hot, so you can know in advance if you need to, like, pause for a bit.


CHARLIE GERARD: I think that might take a minute. So, we don’t have any error. There is a cube. Just looking at the code quickly. If ever Kinesis is not the right API, we can change it to “.predictions.” I forgot which one is better. But I think Kinesis gives you an event when the headset thinks that it’s over 0.9. Whereas the Predictions API sends you back notifications all the time, but you can have a probability of 0.2. If you want to set your own threshold, you can use Predictions. Otherwise, Kinesis is the good one.
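The distinction Charlie draws can be sketched like this: Predictions streams every reading with a probability attached, so you apply your own threshold, while Kinesis only fires when the device itself is confident. This is a sketch of that filtering idea, not the SDK’s internals:

```javascript
// Predictions fires continuously with a probability; you decide the
// threshold. (Kinesis, by contrast, only emits once the device itself
// decides the thought was detected.)
function makePredictionHandler(threshold, onDetected) {
  return (prediction) => {
    if (prediction.probability >= threshold) onDetected(prediction);
  };
}

const detected = [];
const handler = makePredictionHandler(0.9, (p) => detected.push(p));

// Simulated stream of prediction events:
[0.2, 0.5, 0.95, 0.4].forEach((probability) =>
  handler({ label: "rightFoot", probability })
);
// Only the 0.95 reading crosses the 0.9 threshold.
```

So Predictions gives you control at the cost of noise; Kinesis trades that control for higher confidence per event.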
Okay. So, it’s starting the OS. The second thing I wanted to show with the thought was to scroll down on a page when I think about my right foot. That is more useful than having a cube in space, you know?

JASON LENGSTORF: And I think that’s the part that actually takes this from being “hey, this is, like, fun” to being super practical. If you think of somebody who’s quadriplegic, if we can get these types of devices to be stable and get these hooked into web APIs, that opens up the entire internet to somebody who doesn’t even have the use of their hands and feet, which is really fascinating.

CHARLIE GERARD: Yeah, it’s really cool. And at the moment, one of the things is that…I don’t know about the latest OS updates, but it can only see the difference between your normal state and one trained thought. But in the future, they want to be able to have better detection of multiple thoughts that you trained. So, you could have one that scrolls down and one that scrolls up. Whereas at the moment, what I could do is mostly one: if it doesn’t detect that I’m thinking, it would do nothing, but it would scroll down if I was thinking about my trained thought.
Okay. So…there’s an issue here where I should be getting…I don’t have any error, right? So…can we log something inside the “subscribe” so we know if we get it? Please?

JASON LENGSTORF: Okay. Let’s yep. We’ll subscribe.

CHARLIE GERARD: Just “hello,” “boop,” whatever.

JASON LENGSTORF: Let’s just log whatever it gives us because maybe


JASON LENGSTORF: Oh, “intent.”

CHARLIE GERARD: Because I wonder I don’t think that oohhh.

JASON LENGSTORF: Oh, yeah, it’s not even getting to the…it says, “subscription was assigned a value, but not used.”

CHARLIE GERARD: I think it’s me because I’m not thinking about my right foot. It’s going to be quiet and very awkward…okay. No.

So, can we, instead, use so, if you create a new variable, that we set to whatever you want

JASON LENGSTORF: In the subscription?

CHARLIE GERARD: Outside. We’re going to subscribe to the…so, we can comment out the Kinesis one. Yeah. And we’re going to subscribe to Predictions. So, yeah. “notion.predictions,” and pass in “rightFoot,” as well. We call “subscribe.”

JASON LENGSTORF: Is it an intent again?

CHARLIE GERARD: It’s whatever you want to call it. Then we can re log that. If that doesn’t give me anything, we missed something somewhere else, which is weird.

JASON LENGSTORF: So, let’s give it a shot.



CHARLIE GERARD: That is, like I think

JASON LENGSTORF: Let’s make sure that it’s getting to the okay. So, it is getting into that “use effect.” But the predictions aren’t firing.

CHARLIE GERARD: Like, it’s the same oh! We did something. Okay. Cool.

JASON LENGSTORF: Look at it go!

CHARLIE GERARD: Look at the cube! It’s gone! Bye bye!

So, we were in the wrong…so, that’s the thing. What I said: if you use the Predictions API, you’re going to get events all the time and you have to decide the probability you want. If you use Kinesis, it’ll send you the events when you thought hard enough. So, you have more confidence when you use Kinesis because you don’t just get events fired all the time.
But, there was just a bit of a delay. Can we try to comment that out, the Predictions, and put back in the Kinesis? Maybe it’s just that I didn’t focus enough, so we can try again.

JASON LENGSTORF: It’s interesting, too

CHARLIE GERARD: Look! Look! I did it! I did it! Did you see? Did you see? Did you see?

JASON LENGSTORF: That is amazing.

CHARLIE GERARD: I’ve done enough thinking for today.

JASON LENGSTORF: That’s so cool.

CHARLIE GERARD: Okay. So, what we can do I’m going to turn it off just so it doesn’t get too warm. Are you interested in trying to create the scrolling one to see?

JASON LENGSTORF: I’m interested in everything. That was amazing.

CHARLIE GERARD: Woo! It’s recorded, right? It’s on the internet forever, right?

JASON LENGSTORF: It’s permanent. It’s in the internet records.

CHARLIE GERARD: As we’re using the same Kinesis API, we can copy the same code and create a new component that we can call “scroll” or whatever. I want to make sure we create different pages. You don’t have to, but it’s for later, if I push that code. So yes.


CHARLIE GERARD: I don’t even need to guide you anymore. You’re a neurotech specialist now.


JASON LENGSTORF: Okay. So, I have this, but…that looks like what we would want, right?

CHARLIE GERARD: Instead of using the whole react-three-fiber, we can use whatever Lorem ipsum you want to use, “window.scroll” and things like that. We can put whatever content you want in there.

JASON LENGSTORF: Chat, what’s your favorite Lorem ipsum?

CHARLIE GERARD: The chat does not like it.

JASON LENGSTORF: They got nothing.

CHARLIE GERARD: Did you think too hard, as well?

JASON LENGSTORF: Goldblum, that’s cool.



CHARLIE GERARD: I love the gradient. Yeah.

JASON LENGSTORF: All right. Let’s let’s do this.

CHARLIE GERARD: We’ll have to copy it quite a few times.

JASON LENGSTORF: Oh, I just need more paragraphs then. Let’s do 25 paragraphs of that. Oh, get out of here. There we go. That is a lot of Jeff Goldblum. So, back in here, I’m going to take all of this code and oops paste it. Okay. Then, see how fast I can do all of this.

CHARLIE GERARD: You know the quote about the scientist from Jurassic Park? Anyway.

JASON LENGSTORF: I think now we’ve got all of those paragraphs and now I can choose all of the empty ones and get rid of them. And I believe that should work?

CHARLIE GERARD: Yeah, it looks like you have a lot. So, now in our subscription, instead of…I mean, we could keep “setThought,” but we need…you know, “window.scrollBy.”

JASON LENGSTORF: Yeah. Yeah. Yeah. Yeah.

CHARLIE GERARD: Yeah. We can put it in there. And, the top…and what I tried…oh! Can you do it on that? It’s “scrollBy.” And then you open the bracket and then you have a squiggly bracket and then

JASON LENGSTORF: I always call it a curly brace, but a squiggly bracket I’ve also seen them referred to as “curly boys.”

CHARLIE GERARD: Whatever works.

And then you have to pass the top. Write “top.” And write “200,” so 200 pixels.


CHARLIE GERARD: And then “left, zero.” And then “behavior, smooth.” It doesn’t have to be smooth, but it’s better when it’s smooth.
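Put together, the scroll handler they are building looks roughly like this. `window` is stubbed below so the snippet runs outside a browser; in the real component, the call to the browser’s `window.scrollBy` sits inside the Kinesis subscription callback:

```javascript
// Stub of the browser's window so this sketch runs anywhere;
// in the app, the real window.scrollBy does the scrolling.
const window = {
  calls: [],
  scrollBy(options) {
    this.calls.push(options);
  },
};

// What the subscription callback does on each detected thought:
function onIntent() {
  window.scrollBy({ top: 200, left: 0, behavior: "smooth" });
}

onIntent(); // one detected thought scrolls the page down 200 pixels
```

Each detected thought nudges the page down 200 pixels with a smooth animation, which is why thinking repeatedly walks you down the article.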

JASON LENGSTORF: We don’t need to keep track of “state” anymore.

CHARLIE GERARD: Remove all the hooks. Not the useEffect…

JASON LENGSTORF: I don’t need react-three-fiber or useState or useRef.

CHARLIE GERARD: And we don’t need the console log anymore. Hopefully it’s going to work again. I should be able to scroll if I think about my right foot.

JASON LENGSTORF: All right. So, then I’m going to

CHARLIE GERARD: Yes. Up. Do you remember?

JASON LENGSTORF: Make this one into scroll and we’ll bring the scroller in.

CHARLIE GERARD: You’re a pro!

JASON LENGSTORF: I believe this is going to work. First try. One Take Dave. Dammit! I was so close!

CHARLIE GERARD: All right. Okay.

JASON LENGSTORF: It’s going. It already did it.

CHARLIE GERARD: Wait. I’m trying again. Did you see?!

JASON LENGSTORF: It’s amazing. So good.

CHARLIE GERARD: Woooo! I’m exhausted!

All right. So okay, so we had an issue with the “calm,” not quite sure what happened there. It might be that I wasn’t calm so it didn’t trigger it

JASON LENGSTORF: Do you want to go back and try it?


JASON LENGSTORF: I don’t want to put you on the spot.

CHARLIE GERARD: Well, the file is there.

JASON LENGSTORF: I change the “navigate” back, right?

CHARLIE GERARD: The thing is, we need to move the…we hard coded it here, on line 19.


CHARLIE GERARD: It might be if my calm is over oh, yeah, there’s a weird thing here. I don’t think I’ll have the time to duplicate.
What I wanted to talk about is, are you okay. There’s two things because you’re supposed to finish in, like, 20 minutes, right?


CHARLIE GERARD: So I’m going to give you a choice. Either we can talk, quickly, about how to get raw data I’m not going to do anything with it, but we can write the code for it and see it in the console or we can go in Node Land and write the thought thing in Node and I have something that I built, that I have not shared and I want you to, like, write the code and then I’ll show what it’s supposed to do. I just, like, tried to convince you.

JASON LENGSTORF: I can’t imagine that we’re not going to do that one.

CHARLIE GERARD: Do you want the cool one?

All right. So, in my folder, at the top, I have actually…dammit [Away from mic]. I forgot to comment it out. If you go into the Node folder, do not show…actually, it’s not you. I should not show my .env file because this is where my password is.
So, here, this is what I have. I’m going to go through the code…let me check my…so, if you want to do something in Node, the code is going to look very similar. You have to require the [Indiscernible] Notion package and, as you log in, you can put your email, password and device ID into a file that you can then require. And what we’re going to do, here, is we’re going to use RobotJS. Have you used it before?

JASON LENGSTORF: I feel like I tried something on it.

CHARLIE GERARD: It allows you to control some of your computer stuff with your JavaScript…your mouse, your keyboard…so you can type letters or move your mouse in Node.js.


CHARLIE GERARD: So, here, I wrote a comment that you have to use Node v10 because I was getting a weird error that said “segmentation fault.” I actually don’t know why.

JASON LENGSTORF: Yeah, I feel like that’s what…seg faults, I read about them and thought, “oh, that’ll never apply to me.”

CHARLIE GERARD: I don’t want it to keep pulling data in case it’s getting hot. So, we’re back in the code. So, here we just…we instantiate a device. You have to run the login and you can get an error if you’re not logging in properly. I have a log that says “logged in.” Here, it’s the Kinesis. You pass the thought you trained and subscribe to it, and you get an intent when it thinks you thought about it enough. I did some weird thing here, because I don’t think you need to stringify and parse; I had issues with accessing certain properties. I think you can do “intent.confidence” or “.probability.” There’s this amazing, magical number, that amount of confidence: 0.9999997. Usually, I wouldn’t want to do that, but I just wanted to trigger something when there’s an intent. You can see, here, that I was experimenting with weird stuff.
And, here, if you want more stuff, you can have the same with Predictions. But what I want to do: when I focus on right foot, I want to press the spacebar. And the reason why I want to do this is because I’m just going to run my…actually
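The Node setup Charlie walks through can be sketched like this. The real script requires the Notion package and robotjs; both are stubbed below so the logic runs anywhere, and the threshold value and names are illustrative:

```javascript
// Sketch of the Node script described above. robotjs is stubbed here;
// in the real script, robot.keyTap("space") presses the actual spacebar.
const pressed = [];
const robot = { keyTap: (key) => pressed.push(key) }; // stands in for robotjs

// She tuned this threshold by logging real confidence values on stream.
const THRESHOLD = 0.999;

function handleIntent(intent) {
  if (intent.confidence > THRESHOLD) {
    robot.keyTap("space"); // makes the Dino jump in Chrome's offline game
  }
}

// Simulated intents: only the confident one triggers a jump.
handleIntent({ label: "rightFoot", confidence: 0.9999 });
handleIntent({ label: "rightFoot", confidence: 0.5 });
```

The headset subscription feeds `handleIntent`; RobotJS turns a confident-enough thought into a keyboard press, which is how the Dino game gets played with a brain.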


CHARLIE GERARD: So, I want to go offline. And, I want to play the Dino game with my brain. So


CHARLIE GERARD: Are you going to work? Wait. I might…wait. Before we do, I want to log what I get back because that was this morning and now I’m tired, so maybe that’s too high.

Let’s just remove…[Away from mic] okay. So, I just want to see…do I get something, even? Wait. Sorry. That’s a bit small.

JASON LENGSTORF: There you go.

CHARLIE GERARD: So, 0.999995. Fine. Not 0.9999997. So, that was too high. So, what I can do…I’m going to say…this is terrible code, as you can see; I was not prepared for this. I have my confidence and I check that it’s over a certain threshold.


CHARLIE GERARD: I launch. I go.



JASON LENGSTORF: I think you had one extra “9” in there, so your probability is really high.

CHARLIE GERARD: Let’s try again. Usually, you shouldn’t have to do that. I should just train it more. It shouldn’t be a magic number. It shouldn’t be like that. Okay. Let’s try again. Okay. I’m thinking too much because I want to jump, jump, jump. Wait. Wait. Wait.

Oh, no!

Okay. Well, I got one.

JASON LENGSTORF: That is amazing. This is amazing. This is so cool.

CHARLIE GERARD: I’m going to try to get two. Fuck!

No! All right. I’m terrible at this game.

JASON LENGSTORF: I love it. This is so much fun.

CHARLIE GERARD: Ah, I did it better this morning. I got three, I think.

JASON LENGSTORF: You’re definitely playing this on “hard” mode.

CHARLIE GERARD: Okay. One more time. Okay. I want to say “one more time” all the time. All right.

JASON LENGSTORF: It’s so close, though. It’s so good.

CHARLIE GERARD: The thing is…okay. You can stop. So, I think…so, I’m going to turn off the headset because I think it’s going to get really hot. But the thing is, I only trained that thought…I did the training session only, like, once. And I didn’t train it today; I trained it a few days ago and it, like, saves the training session. The thing, as well, is that it’s really trying to find the pattern that you saved when you did your training session. So, of course, it’s not going to be 100% accurate all the time. Because also, I mean, it’s the brain, and it gets data from electrodes you put on top of your head, so you can’t expect it to know exactly everything that you’re going to think or whatever.
With what we have, with what it can do, you know, you build experiences around that. That’s why, at the moment, I could give the headset to someone else and the calm and focus should work out of the box. The thoughts, though, have to be trained per person. Here, you could see that, you know, I had to have, like, a weird number and stuff. I know that when I trained it, I didn’t train it to the point where it was accurate all the time, so I would expect it to not work all the time. But you can still have that kind of experiment…the scroll was pretty accurate.
Of course, it means that you have to focus quite hard on a thought to have an action. But that kind of…I mean…I don’t know. I think it’s, like, really cool. You know, what we built here was, like, small experiments. One of the things that I’m looking into, at the moment, is using raw data and machine learning to build an experiment or a project…I’m going to try to move it to show it. The P300 Speller. It’s a grid like that. So, it’s really to be able to spell words by thinking about the letter. So, what happens is that…when you train, you record…so, you have training sessions where, for example, you could have this grid, here, and it tells you, okay, think about the letter “p,” and the columns and rows flash, and what it records is that usually, about 300 milliseconds after the flash, something happens: you have a spike in brain waves, and you can record that.
It tells you, okay, think about the letter “c,” you count it in your head, and you record the data, and it is able to match up the spike that it finds in the brain wave with that column or letter. And if you run enough training sessions, you’re then able to not be told what letter to look at. Like, you just think, I want to write the letter “a,” so I look at how many times it is highlighted and, by looking at the live data and the matrix of letters, it’s able to find out where my spikes happened and map that to a letter. So you’re actually able to write by thinking about the letter, by counting how many times it’s on and off.
Of course, you can maybe write one or two words per minute, but it’s much better than nothing. You don’t have to touch anything. You’re looking at the letter, you’re counting how many times it flashes, and you’re able to write something. I was trying to build that over the weekend. It worked once. It worked the first time and I was like, no, that never happens.

Then it failed. But, it was quite cool. I know that, at the moment, my issue is more about filtering the raw data the right way to find patterns in it, because I have the code to record the brain waves. I wrote the model in TensorFlow.js. The problem is that I have to find the right filters to pass the data through so that I give it a chance to find an actual pattern instead of having, like, noise everywhere.
But, that’s my next…that’s the thing I’m trying to do now. And, you know, that’s why it starts by pushing a cube or scrolling or playing a Dino game. Once you get used to that kind of tech and you’re able to know when it works or when it doesn’t, know the limitations and things like that, then you can go and build and experiment a lot more and do with what you have. At the moment, people are like, it’s not 100% accurate so I won’t use it. Nothing is really 100% accurate.

JASON LENGSTORF: And the other thing that’s really exciting about this: if you just look at this as a point in time, the same way that, say, the Commodore 64 was a point in time…the distance between Pong and, like, The Last of Us Part II, it doesn’t even seem like the same universe, and it’s not that long in timelines. What happens in a few decades, and what’s possible with this kind of neurotech? We could really be looking at stuff that sounds like science fiction right now: I’m able to put on a headset and think my way through whatever interface and it’ll just work, the same way as when I pick up my PlayStation controller. 3D environments…I saw a demo of the new Unreal Engine. It feels impossible that they’re able to do stuff like that. But, you know, it’s just stacking work on work on work. This is such an exciting place to be because you’re contributing to this foundation that’s going to be, like, what happens in 50 years. Like, this is the kind of stuff that I can see being deeply tied into our experience of the world in half a century.

CHARLIE GERARD: I find that so amazing. I don’t know. I don’t even think my grandparents knew that I’m playing with this. Can you imagine if I told them I was playing with this, or demoing it in front of them, and I’m thinking about something and it’s happening? From their time with technology…there weren’t laptops. Even if it can’t detect things 100% accurately, you can do machine learning and train on data and get the stuff that you need. So, the software’s going to get better, the hardware’s going to get better, and I don’t think we have to wait that long to be able to build stuff because, as I said, it’s also about doing with what you have now. I don’t want to wait for things to get better when you can build applications now that could help people in a certain way.
You know, by doing stuff at home…when I was thinking about the P300 Speller, if I manage to actually get to a level where it’s quite accurate, it means that I would have been able to build something, maybe for a friend or a family member that might have to use it one day. And it means that I won’t have to wait for one of the tech companies to release a microcontroller or a board that is more performant, or I won’t have to wait for a team of researchers at Apple or Google to release the device. I can play with that at home and I can build something that might not work all the time, but for what I need it to do, it will be enough.
Like, I don’t know…I said it in my talks a couple of times when I talked about brain sensors. When my grandfather passed away, years ago, he lost all control of his body in, like, six months. Basically, I saw him before and he was fine. I came back from Sydney and he just…he was in his mind, but his body was not responsive at all. And I’m thinking, well, if it’s something that runs through my family…I don’t know, but it could be my dad or me or my sisters. I could build something at home that means that before one of my family members passes away, they can say something or they can communicate, and they can be, like, at least, you know, I’m dying but I said what I needed to say. Even if it’s a word per minute, I can wait a few hours if it means you can talk to me. And I can do that at home and I can do it in JavaScript with a device. I know the price can look a bit expensive, but I think that things are going to get cheaper.
I just don’t want people to wait and be like, it’s not figuring out all of my thoughts all the time so I’m not going to try it. I just think it’s amazing. To me, thinking that maybe I’ll be able to help a family member or a friend if somebody has an accident or something…I’ll be able to be like, hey, maybe your body can’t move, but I worked on something and we can still chat. I think it’s just, like, crazy. I don’t know. I don’t know what else you can ask for.

JASON LENGSTORF: And I love…you know, I love that this stuff is making it accessible to kind of build the future, right? I think we fixate a little bit, in tech, on building things that are cool. And, like, this is undoubtedly cool. But it also…it really feels like it’s starting to scratch at the next thing. And I think that, you know, there are whole degrees you can do on human-computer interfaces and what happens next for the way people interact with the world, especially as the world becomes more digital.
Glasses that do heads-up displays, or different ways of doing authentication and privacy…a lot of them are going to come from this type of neurotech because it’s, you know, it’s really interesting. What kind of interaction and communication could we do if we really dig into this and figure out how it works? I’m really excited to see the stability coming up and the price point coming down. It is really, really cool.

CHARLIE GERARD: And the fact that it’s developer focused…the company is dev focused. They’re not trying to close the doors and not give you access to the data or whatever. It’s really, like, we believe that we can build the future and ideas can come from anywhere, so they’re trying to make it as open as possible, and I’d love to see more tech companies being that way. Give the power to the developers to be able to build what they think could be helpful, or do some research and then share it, because these things will not grow and will not evolve if we don’t kind of, like, contribute, all of us. Even with an idea. If you are interested in helping but you don’t have the headset, the documentation…they just started…you can contribute in open source.
I was even thinking I wanted to try to build an emulator. If you don’t have the headset, I’m sure we can create some kind of, like, mock API or whatever so that if you’re saving up to buy the device, you can still go and play with fake data and still learn how to do stuff. I haven’t worked on that, but I thought that would be interesting.

JASON LENGSTORF: Yeah. That’s super cool stuff. Unfortunately, we are out of time. Charlie, where should people go if they want to follow up with you or these projects?

CHARLIE GERARD: The best place is Twitter…I mean, it’s my central place; I often share stuff on Twitter. From there, I usually share blog posts or the GitHub repos.

JASON LENGSTORF: You can see some cool…here’s a demo, right here, on the screen, of another…this looks like a different device, doing the push-away-with-your-thoughts thing. It’s all

CHARLIE GERARD: Oh, on Twitter. Yes, yes.

JASON LENGSTORF: So, go over there.
And then, anywhere else you want people to check out?

CHARLIE GERARD: Um, I mean, I was going to say, I’m rebuilding my portfolio, but I’ve been rebuilding my portfolio for, like, three years.

But, I mean, I would say my GitHub page, because I try to make what I do open source so that people can see that it’s JavaScript. You would be able to read it and, you know, take it if you want and build other things. But as it’s JavaScript and you’re a web developer, you’ll be able to follow along. It’s sometimes shitty code because I don’t have the time.

JASON LENGSTORF: Yeah. Yeah. I get it. That’s my whole existence in the world. Here’s something that sort of works. I hope it’s helpful.

CHARLIE GERARD: Yes. Usually I have a GIF. I know it works.

JASON LENGSTORF: With that being said, make sure that you go check out the upcoming schedule. We have so much amazing stuff coming up. Later this week, we have Christian, also known as Code Beast, to talk about serverless GraphQL. Later on, we’ve got Joel from Egghead. We’re going to build a silly app for Secret Sandwich, which is a game we play to try to make better sandwiches.
I have a fellow streamer coming on to teach us about Twitch bots and overlays. And the list just goes on and on. Check out the schedule. If you want to get notified automatically, you can add the schedule and see what’s coming.
As always, we had White Coat Captioning doing live captions for us so we are able to make this show more accessible. Thank you to Netlify, Fauna, and Auth0.
Thank you, Charlie.

CHARLIE GERARD: It was a lot of fun. Thank you.

JASON LENGSTORF: Chat, stay tuned. We’re going to raid and we’ll see you next time.

Closed captioning and more are made possible by our sponsors: