
GraphQL + Jamstack for Enterprise Apps

“Can the Jamstack handle enterprise apps?” It can! In this episode, Shruti Kapoor & Jason will explore approaches for enterprise GraphQL + Jamstack apps.

Full Transcript


Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

JASON LENGSTORF: Hello, everybody. And welcome to another episode of Learn With Jason. Today, we have Shruti Kapoor.

SHRUTI KAPOOR: Thank you for having me, Jason.

JASON LENGSTORF: It's great to see you. I'm very, very happy to see you. Looks like the chat is very excited, as well. They're "booping" the hell out of us.

SHRUTI KAPOOR: I love the boops!

JASON LENGSTORF: We both spoke at React India together and it was a lot of fun. And I feel like, you know, what I've always appreciated about you is you've got a great sense of humor. You run, like, the dev jokes hashtag that makes me smile. So, for those of us that aren't familiar with you and your work, can you give us a background and maybe a joke?

SHRUTI KAPOOR: Sure. I'm Shruti Kapoor. I'm a senior software engineer at PayPal. I've lived in the Bay Area for about four years now. Some things about me: I love GraphQL, JavaScript, React, but more than that, I love dev jokes, so I'm going to give you my first dev joke.
So, you guys can answer in the chat, as well, and I'll be reading the chat. So, the first question is: Why don't fish like React?

JASON LENGSTORF: What do you think, chat? Oh, no ideas from chat. Because it has hooks. Did the chat get it?

SHRUTI KAPOOR: That is right. You are right.

JASON LENGSTORF: I love it. That's excellent. I see (Indiscernible) is in the chat. Sam, what's up, everybody? Thank you so much, y'all.
All right. So, Shruti, I'm really excited about this because today, we're going to do something a little bit different than the usual Learn With Jason episode. What I thought would be really interesting is you're working at passing an array and everybody's using passing an array. We've seen the scale of passing an array. We know it's a huge enterprise app. And I thought it would be really interesting to talk about architecture, right? And to dig into kind of how all of these things that, you know, someone like me, who I mostly work on my personal website. I mostly work on projects that a little smaller. I have all this idea. I love GraphQL. I love Jamstack. I wanted to talk about it in the context of something bigger and how do they apply to the largest possible scale.
You have so much experience in that and so, like, what what is your what are you working on now? What are you excited about, in your work?

SHRUTI KAPOOR: I'm working in the PayPal Payouts team. I see Gary, from my team. Hi, Gary. It is the part that helps PayPal disburse money onto accounts. The stack that we're working on is NodeJS on the frontend with React, and on the backend, we have Java. In PayPal, typically we have a microservice architecture, with Java in the backend and GraphQL APIs, which get consumed by an orchestration layer. On the frontend side, that data is rendered via React components through a Node app.

JASON LENGSTORF: Okay. Yeah. And so, when you're... I guess, what is your vision for this? Like, as it's growing? It sounds like you're starting to explore some alternatives?

SHRUTI KAPOOR: Yeah. So, I think the main challenge that we're exploring is how we can reduce the time it takes to render our first page or first component, or can we reduce that latency that the user sees? And that's where I've been exploring how Jamstack can help us, especially Jamstack with GraphQL. GraphQL reduces the number of bytes they have to download. With Jamstack, you can reduce the time that it takes to actually load that first component, and that's where the Jamstack and GraphQL combination kind of comes in.

JASON LENGSTORF: Very cool. Okay. So, let's do a couple things here. I'm going to switch over to a collaborative view. We're going to use Mural today so we can draw and do architecture and notes and stuff. I'm going to do a quick shoutout to our sponsors. We have live captioning today, as always, by White Coat Captioning. Huge shoutout to them for making this possible. I think my overlay's taking its sweet time loading. Here's Shruti's Twitter. You should definitely go follow her for lots of information and also, lots of good jokes.
If you want to get live captioning, it is... let me refresh the page here. Live captioning is happening right now, by White Coat Captioning, made possible by Netlify, Fauna, Sanity and Auth0. So, thank you, very, very much to both White Coat Captioning and our sponsors for doing that.
And, with that, we have a mural that we've set up, that we're going to use. If you want to follow along... let's see. Visitor link, we don't want. It's not that I don't trust you, Chad. It's that I don't trust you at all.

SHRUTI KAPOOR: Don't trust (Indiscernible).

JASON LENGSTORF: That was way deeper than the usual... trust them, but don't trust the devils inside of them. So, what we are doing today is we're going to talk a little bit about architecture. So, maybe we should start by framing what a typical architecture looks like. So, if we look at... I think I can do some shapes here. Let's do some shapes. Let's start with, like, some big boxes. So, we'll start with a big, blue box and we'll call this the frontend, if I can spell it. And so, like, the frontend of a site is going to sit up here, somewhere, and then you're going to have your... oohh, boy, where am I going? I haven't used this tool in a while so I have to get my bearings again. Then, you've got the backend of your site. And these are kind of, like, sort of... I don't know. How would you kind of define these, if you were going to break them up?
Like, what goes into each thing? Are they even separate things? Am I getting presumptuous?

SHRUTI KAPOOR: Kind of getting into the box of backend: with a NodeJS architecture, Java is kind of the very backend, which may have, like, REST API or GraphQL endpoints. We have the layer in between that serves those frontend pages, using React and NodeJS, and then we have the system that serves that data, which could be your Java layer or Python, and then we have the very backend, like, the databases themselves.

JASON LENGSTORF: Okay. So, you're saying, like, we've got the so, like, maybe we could "big bucket" this into business logic as, like, the backend? And then we have I'm making these a little smaller so we can fit them all on the same page. We have our actual, like, data I guess, databases?

SHRUTI KAPOOR: Yeah. Databases and the layer that serves as the interface between database and business logic.

JASON LENGSTORF: Anything the user receives is frontend. Okay. I gotcha. I think that's correct. So, yeah I guess we're trying to think of the big things, right? So, when you've got you've got the stuff that goes to the client. You've got the way that the client gets information. So, like an API layer or data or

SHRUTI KAPOOR: However that data is serviced.

JASON LENGSTORF: And then under the hood, let's determine whether or not someone is allowed to see this data. Let's do preprocessing. Let's aggregate data from, like, our different things to show them on an analytics dashboard. And under the hood, you've got that actual, raw data. So, your user tables, your log dumps, all your usage stats and payment information and all those things that kind of... they're not super useful if you just look at them in aggregate. You need something to pull that out and kind of collate it.
And so, this is I feel like we're probably oversimplifying a little bit here. For the purposes of, you know, having a discussion, I feel like these are, like, big buckets that we can agree, kind of that seems like that seems about right.
Anyone disagree, chat? Are we missing anything big here?

SHRUTI KAPOOR: A sandwich with bread in the middle called cake.

JASON LENGSTORF: Asking the hard questions, I see. I mean, a sandwich with bread in the middle is just a club sandwich, right?
So, I think, like, if we look at this from a classical, kind of monolithic standpoint, you might actually just draw a big square around this whole thing and say, like, you know, this is the monolith. And when you do look at it... helping me with my monolith. I wanted to draw it, instead of typing it, because it makes me laugh.
So, this is like our monolith, right? We've got this big, kind of, everything-lives-in-one-place thing. And this, I think, was a pretty standard approach for a very long time. You're going to talk to your database and prep your database. And so, is that... that's not what you're doing right now, right? You're doing something different?

SHRUTI KAPOOR: Yeah. So, we don't have a monolith. We have more like microservices, so we can think of this as breaking down the frontend and the API layer and the business logic. Take this big monolith, break it down into smaller components. They all talk to databases. They may have different tables. They may be talking to the same tables. So we can think of smaller architecture components within it.

JASON LENGSTORF: So, this would almost look like... if you take your business logic and you do something like this, where you've got multiple services that all kind of have the same stack, and your frontend... even the frontend could be part of the microservices. Is that how you're doing it now?

SHRUTI KAPOOR: Yeah, the frontend is a microservice. It looks like a single frontend. But inside, it's all smaller components, serving their own frontend.

JASON LENGSTORF: This could be, like

SHRUTI KAPOOR: They could each be a completely different stack, the frontends themselves.

JASON LENGSTORF: Then you end up with something like: you've got your users and your dashboard and your billing, or however you want to do it. Where each of these is a whole different thing and, like, maybe, like you said, a different stack. So, maybe this one is... we've got a frontend and it is React. And then, with this one, maybe it's

SHRUTI KAPOOR: Backbone. Third one could be, like, Angular.

JASON LENGSTORF: Totally. This one is going to be REST API and this one is going to be direct MySQL connection. Maybe this one is, just, like, out to make you sad. Right? Something like that. You end up with lots of different ways you can do this and this one's maybe built in, something like Rust. And this one is PHP. This one's Java. Right? So, you've got all these kind of interesting ways you could approach these things. The teams don't necessarily need to talk. This is what makes microservices really exciting. Instead of saying everyone's working in the same codebase, you've split things up into concerns. This is the users team and as long as they make sure the user route looks right and works, they don't have to care about tech choices.

SHRUTI KAPOOR: Separation of concerns.

JASON LENGSTORF: Yeah. So, like when you look at something like this, are we done? What are the issues that come up with something like this, do you think?

SHRUTI KAPOOR: In the API box, how much data are we serving to the frontend, right? Let's say we have information we want to serve up all the way to the front. How much information are we pulling from the database and how much of that are we serving up? That is determined by the API layer or, let's say, the business logic. Both of them combined. So, that's one place where we can optimize and reduce the data that we send.
And then on the frontend, itself, like, how much data are we shipping to the users the first time they load the page? Is that data all necessary? We can bring in performance and reduce the time it takes to load the first paint.

JASON LENGSTORF: Yeah. And so I think, like, this is the other issue that we ran into, like, when I was at IBM, is what we've drawn, here, is really clean. We're saying that, like, each team has a very clear stack, right? But what I found, when I was at IBM, is it's never actually like that. It's more like, if we take this over here I'm going to simplify this a little bit. Let's say these orange ones are microservices. We had a users microservice. We had a billing microservice. We had an apps microservice and these all lived down here.
And then on our frontend... I'm going to assume that each one of these has an API on it. We would have a dashboard. And now "dashboard" didn't just use users, it also used... why can't I... I need this. Get out of here. Oh, boy! I haven't used this in so long. Sorry. So, we end up with something like this, where "dashboard" has to touch all of these services and we have to, you know, we have to make this work. So, we have a different REST endpoint for users and billing and apps, and before we know it, we are suddenly building, like, a proxy API, and this proxy API is how we're able to talk to those services. And, now we've got a huge problem. And, like, have you run into this? Do you want to talk to this a little bit?

SHRUTI KAPOOR: This happens all the time. Dashboard may want to have access to users, so would billing and so would all the other components, right? Especially in an app like PayPal, most things are authenticated, so what ends up happening is everybody starts calling users, and then, like, billing gets called a lot, and maybe there's an authentication service that gets called a lot. Everybody starts calling every other service, or one service gets called multiple times, so this happens quite often.

JASON LENGSTORF: And what we found, too, on my team, was the dashboard... so, like, the dashboard would have a different, like, a different need. So, for billing, it would only pull a few things. Right? But it came from a few different places, so there was, like, a REST endpoint for billing that was specifically for the dashboard. They built an endpoint. There was a dashboard microservice that actually showed the dashboard and then there was, like, a billing microservice, and one team was in China and one team was in Toronto. For those teams to work together, they had to write a spec and send it back and forth and ship their REST endpoint. The billing team, they needed a bunch of information from billing so they had to do the same thing. They wrote up a spec, and now billing had 30, 40, 50 endpoints that each service needed to use and there was no, like... there was no way to validate whether any of them were still in use. We couldn't sunset things. It was growing out of control. We were like, I don't know, does stuff break? Let's turn it off in staging and see if anyone calls us.

SHRUTI KAPOOR: That's why we have versions in API. Instead of removing this, let's create another version. So we have V1, V2, V3. We're in versioning hell.

JASON LENGSTORF: It is the worst. The backend teams are frustrated because they feel like the frontend teams just ask for whatever they need, so it feels very much like getting yanked around. Frontend is frustrated because they're like, we know this data exists, but we don't want to have to call 50 REST APIs. You end up with proxy APIs where you're like, I'm going to call a bunch of APIs and assemble this into data I can use, and my frontend is going to call my changed data, and so now you've got these links in the chain: your frontend's calling a custom API, your custom API is calling a bunch of custom APIs, and those are all hooked up to databases where the teams don't really know what you're doing or how you're using it, so they might accidentally break it. When it breaks, you have no idea how. Right? There's so many hops. How do you debug that?

SHRUTI KAPOOR: Sometimes you have these multi-layers of APIs. You have users that's been called, and that service calls five other downstream services. You have layers of APIs being called, multiple downstreams.

JASON LENGSTORF: Uh-huh. Yeah. And I think this is... so this is, I think, something, if you've worked at a big company, you may be familiar with. So, we're not here to just talk crap. We're here to show, like, maybe what's better. So, if you were going to try to sort this out, how do you untangle this? How do you do it?

SHRUTI KAPOOR: Yeah. So, the first thing I would start looking into is, when we call this proxy API, which service are we calling, right? Are we calling users three times? Billing? Apps. Billing is also calling bills and apps, so it looks like that one common service is being called everywhere. So, there is someplace we can optimize that. Also, if we have, like, users being called by a dashboard and billing, we need to make sure that the data we serve in dashboard and the data we serve in billing works for both of them. We shouldn't be creating multiple endpoints every time we have a new customer, which is typically what we have to do in REST. We have a bunch of different endpoints and some of them are not getting used anymore.

JASON LENGSTORF: Uh-huh. Yeah. And so, if we want to fix that, like, if we just kind of imagine a perfect world, right? Let's imagine a way to make this all better. So, I'm going to take these arrows out and I'm going to take this proxy API out because these are the things that make us sad. I'm going to take this, bring it over here... no, I don't want... come on, Mural. Oh, my God! Stop! Why! Hello? Unified data layer. Right? And so then, what we get with a unified data layer is, we have now something in between, where, when we take this, like, our communication now becomes completely... wait. You're supposed to be copying right now. There you go. It becomes, like, very clear-cut. We've got this... this is what we're doing with our... this is going to be the... we should have used (Indiscernible), I think. I saw that come up in chat. The only reason I didn't is because I don't think (Indiscernible) is multiplayer and I wanted Shruti to be able to make changes in here.
If we look at this, now each of these... the dashboard is going to call the unified data layer and say, "give me everything I need about users, billing and apps." Then the unified data layer knows, in the back, how to get to users, billing and apps. And so, this creates... there are several things that are going to happen here and this is why it's important. Now, the Apps team, they don't have to worry about the 13, 14, 15 frontend services that call their layer. They only have to worry about this one. This is what they test. This is what they validate. This is the team they have to notify when anything's going to change. These teams, they don't have to hunt down which microservice provides what they want because they know that all of the data is here, and so this is, like, extremely exciting, in my opinion.
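To make that a little more concrete, here is a rough sketch of the single call a dashboard might make to a unified data layer. The endpoint URL and the field names (me, billing, apps) are made up for illustration; they are not the real schema being discussed here.

```js
// Hypothetical query a dashboard could send to a unified GraphQL data layer.
// Field names and the endpoint are illustrative only.
const DASHBOARD_QUERY = /* GraphQL */ `
  query DashboardData {
    me {
      id
      name
      billing {
        plan
        nextInvoiceDate
      }
      apps {
        id
        name
        status
      }
    }
  }
`;

// One round trip to the data layer instead of separate calls to the
// users, billing, and apps microservices.
async function loadDashboard() {
  const res = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: DASHBOARD_QUERY }),
  });
  const { data } = await res.json();
  return data.me;
}
```

The frontend only knows about the data layer's schema; which microservices sit behind each field is the data layer's problem.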

SHRUTI KAPOOR: It's like one place where everybody can talk. They don't need to worry about anything that goes downstream.

JASON LENGSTORF: Hello to the Corgis. Thank you for the sub.
Shruti, you were saying?

SHRUTI KAPOOR: What was I saying?

JASON LENGSTORF: We were talking about the unified data layers.

SHRUTI KAPOOR: The Corgis. It's like a black box: you, as a frontend developer, or the team, don't need to worry about it. The unified data layer, the team that owns that component, can take care of that, call billing and users and make one call instead of doing multiple calls to those APIs. So you're kind of now separating the concern of performance on the backend and performance on the frontend.

JASON LENGSTORF: Uh-huh. And the other thing that's exciting about it, too, that's so powerful, is, like, when we're talking about any API design, monoliths are very exciting, you know, when you're trying to just build something, because everything's in one place. I'm trying to limit the cognitive overhead and limit the amount of things I have to keep in my head. You can't track a million lines of code. You can't keep that context in your head. So, you start to, cognitively, you know, shift. You break it into pieces that are small enough that you can keep in your head, so you go to microservices. You have less code per service, but you don't eliminate that context. You still need to know how the microservice interacts with every other microservice. You've created a strong tie between microservices, and that means you haven't actually eliminated the context, you've spread the code out. If you go to this, you've eliminated context. You no longer need to know how everything else uses the data because you're not the person, you're not the service, providing that data. You've created a very clear boundary between your service and everything else. As long as you honor that contract between your service and the unified data layer, that's all the context you need, and that is so powerful.

SHRUTI KAPOOR: Exactly.

JASON LENGSTORF: But, yeah, so, if we want to make this unified data layer, how would you do it?

SHRUTI KAPOOR: So, this is a perfect place for GraphQL. GraphQL is a unified data layer. We establish a contract with the frontend folks and backend folks and establish, up front, what data needs to be served. As you go through the project, if you have new data that you need to add, you can add it to the schema. And one thing that GraphQL is really good at is providing you the ability to see which fields are being used, so you can know which field is being used and not just which method is being used. That helps. Let's say you have users and you have five different fields not being used: you can mark them as deprecated and remove them in the next phase. Any client that was using a deprecated field would know that this field is going to go away. You know this field is not being used. You have data on it that you can see, you can monitor it, and you can remove that field, if need be, instead of blindly versioning or blindly removing that field.
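As a small sketch of what that deprecation looks like in a schema (the type and field names here are illustrative, not the schema from the episode):

```js
// Schema-level deprecation: clients that introspect the schema see the notice
// long before the field is actually removed, instead of being pushed onto a
// brand-new /v2 endpoint. Field names are hypothetical.
const typeDefs = /* GraphQL */ `
  type User {
    id: ID!
    name: String!
    email: String!
    # Old field kept around while clients migrate; usage tooling can report
    # whether anyone still selects it before it is deleted for real.
    legacyAccountCode: String @deprecated(reason: "Use accountNumber instead.")
    accountNumber: String
  }
`;
```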

JASON LENGSTORF: It provides a fuzzy upgrade instead of the hard "here's V2, we're shutting off the old version on this date." Everything's going to keep working. This field is just going to start returning null. Your code will still work, you'll just be missing your data until you use the updated field.

SHRUTI KAPOOR: It's backward-compatible if you've already interfaced with and started working with the APIs.

JASON LENGSTORF: I love it. And this was something that we noticed when I was trying to roll it out on my team and it sounds like you've been finding the same. By putting this layer in, we're eliminating just a tremendous amount of cognitive complexity. When a team is using something... I think it's worth, maybe at this point, looking at some of the things that make GraphQL exciting. Let's go to "How to GraphQL." Do they have a playground in here that we can look at? I feel like, at one point, they did. Oh, no!

SHRUTI KAPOOR: Doesn't GraphQL, itself, have it? Let me see.

JASON LENGSTORF: Actually, let's... here's one that I know exists. Let's go to the Rick and Morty API. There's a GraphQL Rick and Morty API that we can look at. Okay. So, I want to show you something really cool, like, right out of the gate. So, check this out. We've got the REST API, the regular API. This is what you get when you go to a REST API. We have the characters, locations and episodes. It tells us where to go. But it's not super helpful. I have to go here and click in here. Look what happens when I pull up the GraphQL API. I'm looking at a UI. This is pretty computery, I don't know if this is super helpful, but let's look at the docs. Okay, this is interesting. I can see a character. I can see that the character's going to give me an ID and a name and things like this. I can say I want "character." They've already built a query for us so we can look at what's going on and I can just play.
Remember, when we were here, we were looking at characters. So, here's a character. Let's just pull one of these up. So, here is one character and, let's see, his URL is... I think if we just go to Character 1... yeah, here's one. Then what we get back is an episode, and the episode includes, like, a link. It's not actually information. Then, if we go down to the URL, okay, that's kind of useful. But what about, like, the location? Okay, the location is "earth," but there's just a URL to it. Whereas in here, I'm able to get, like: here's the location. Here's the name. Here's what dimension it's in. Here's episodes. I'm going to look at episodes, and inside of this, I can see the name and what date it aired. Instead of having to make chained REST API calls, I can dig through this and I get all of that data back and I didn't need to go read the docs for that. I was able to look at these embedded API docs to figure out there was stuff I wanted here.
Here's the episode and I want this information. And I can test these queries and if I want to use this in my app, I can copy/paste it. This, as a frontend developer, changed everything for me.
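For reference, the kind of nested query being built here against the public Rick and Morty GraphQL API looks roughly like this (field names taken from that API's published schema, so check the live docs if it has changed since):

```js
// One nested query against https://rickandmortyapi.com/graphql replaces the
// chained REST calls to /character/1, its location URL, and each episode URL.
const query = /* GraphQL */ `
  {
    character(id: 1) {
      name
      location {
        name
        dimension
      }
      episode {
        name
        air_date
      }
    }
  }
`;

fetch("https://rickandmortyapi.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
})
  .then((res) => res.json())
  .then(({ data }) => console.log(data.character));
```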

SHRUTI KAPOOR: Yeah. And what's cool is, like, they didn't have to build this documentation themselves. This automatically gets generated for you. It's something that happens in the background. I don't have to think about writing documentation that's up-to-date.

JASON LENGSTORF: It's really, really cool. And then if we go and look... let's look at "How to GraphQL" again so we can dig into some of the basics. I think we can see here's a GraphQL type. So, I'm able to just say, like, I want my data to have people in it and I want a person to have a name and an age, and I know that that's required, which is why I've got this exclamation point. This one's going to be a string. This one is going to be an integer. This is, I mean, it's not exactly human-readable, but it's more human-readable to me than a lot of code, where I can sort of deduce what things are. Like, I know what a type is. I know what a person is. As I kind of look around, I can mostly figure out what this stuff means, and then you can see that we start linking things together. We've got a person and they write posts and when you search posts, you get an array. We can start to see how this all links together. This is the definition I would write to make a GraphQL API, and that definition gets read in here. Here are the actual types for the characters, episodes and all those things, so when I look at a character, I can see I'll get an array of episodes back and I can see it'll have this data. Here's my episode and here's what comes back with it. It's very logical. And that allows you to do things like autocomplete. I'm hitting "Ctrl, space." Compare this to using something like REST. You can do this with REST, you absolutely can. With REST, there's a bunch of extra tools you have to have.

SHRUTI KAPOOR: And you get it for free, so you don't have to put in extra effort for that.

JASON LENGSTORF: It's truly... this, to me, is extremely exciting and I think it just opens a lot of doors. And so, now we're talking about it from the frontend. From the frontend side, this is kind of a dream. I, as a dashboard developer, I'm going to go to mycompany.com/graphql. We can do authentication and I can explore the whole database of the entire company and see all of the data from all of the microservices and build the custom query I need to pull just what I want to use and build my UI with it. That's amazing.

SHRUTI KAPOOR: What's cool about that, you need to pass that authentication token once. If you have five services, you'll have to pass that token multiple times. Because you have one place you're talking to, you only have to pass your token once.

JASON LENGSTORF: Just so much complexity disappears, right? Actually, I have a very good example of this. Every once in a while, Sean Grove is on the call. I'm not sure if he is now, but I'm going to show off this product for a minute. So, this is... let me see... yeah. This is, like, a product called OneGraph, which is a nice GraphQL interface. But it does this really interesting thing where... look at all these services, right? So, this is Airtable, Box, Egghead, Dribbble, Mailchimp. As a user, I want to pull some stuff together. Let's pull up some information about me. Let's pull up my Twitch... Twitch is going to make me do some weird stuff. Let's get GitHub. I'm going to get my GitHub and I can pull in, like, my... I want repository. Let's pull up any repository. Okay. They pulled one up for me.
Okay. Well, look, I need to log into GitHub. So, I just click this button. I'm now logged into GitHub. I can run this call. And I'm pulling in information about GraphQL. And then if I go to this, like, "me" tab, I can do some... just, like... and I'm not showing this to show off OneGraph, I'm showing GraphQL. It is all a single GraphQL layer. I'm pulling up my GitHub information. I can get my avatar URL, my bio and let's pull up my...

SHRUTI KAPOOR: Profile picture.

JASON LENGSTORF: So, I got that. And then, if I want to pull up my Netlify, I can pull up my full name, right? And I can probably pull some extra stuff, but I won't. And so now I need to log into Netlify. Okay. I'm logged into Netlify. And now I can pull this up and check this out! I'm now pulling data from GitHub and Netlify in a single API call, and these are completely different companies. This isn't like microservices and a team. This is completely different entities and we get unified API services. I can click on a thing and decide what I want and click on it and get it.

SHRUTI KAPOOR: They may not be GraphQL APIs, they could be Unified APIs.

JASON LENGSTORF: It's just... it's unbelievably cool how powerful this stuff is because GraphQL is just an abstraction over data that makes it easier for frontends to get to that data. We're logged in... it looks like I'm logged in as my Twitch bot and not me. I had to provide the tokens and they are in here; you can see which ones you're logged into. When you look at, you know, an enterprise service, this is the most enterprise thing I can imagine. How do we string together completely different companies, as opposed to teams? Because you're not going to get GitHub to communicate with OneGraph. This removes the need for it. I can look at this one thing and I will never speak to the people who work on the teams behind this data, but I can use it. I can still access it. We can take that idea and we can roll that into our enterprise apps, and now we've created... not, like, a separation, like frontend and backend should never mix, but we eliminate the noise of "I need a bespoke endpoint so I can make my frontend different." All of that back and forth goes away and instead, you can talk about what would make our product better. What can we build and how could you use it, as opposed to "please fix our problems."

SHRUTI KAPOOR: Exactly.

JASON LENGSTORF: Okay. So, I feel like the GraphQL side of things is super, super exciting and it opens up a huge amount of doors. What are the tricky parts? What's the other half of this, right? So, let's go back in here and let's think about how we can make this better, if we wanted to take it further.

SHRUTI KAPOOR: So, let's see where we are. We have figured out our unified data layer, so we're going to make a GraphQL layer, so we're going to fetch less data. Now, the question is, we still have these separate apps. So, let's maybe dive into the architecture of these apps, see what they are. So, typically... let's see, we had this at the top. We had a Node layer, I believe... oh, we haven't... we didn't look into the frontend layer, so let's do that.

JASON LENGSTORF: Do you want me to do that?

SHRUTI KAPOOR: You can do it. You're the pro.

JASON LENGSTORF: Yeah, I'm a Mural pro, I've been using it for a whole 18 minutes. I think I've had coffee in my Mustache for this whole show. That is gross.

SHRUTI KAPOOR: I didn't notice until you said it.

JASON LENGSTORF: Okay. So, inside of this, we're going to need, like... we've got our actual... what goes into the frontend code. So, the HTML, CSS and client-side JS. This is going to be something like maybe React, Vue, Angular, whatever. You know, any frontend that you want to use is going to be part of this. And then, you also have your, like, data-loading, right? So, this would be... maybe it's... well, okay. So, we should probably talk about this in a couple ways, right? We can do this in a few ways. There's going to be something that serves

SHRUTI KAPOOR: Yeah, your web app.

JASON LENGSTORF: Right. So, we've got a server and then we also have a data connection. And then, like, the thing that calls the API.

SHRUTI KAPOOR: Yeah. And this is what connects to the GraphQL unified data layer.

JASON LENGSTORF: To the unified data layer. Okay. So, all of that lives inside of this, like, billing bubble. Right? And so, I'm going to put these side-by-side, so they don't overlap. Basically, this is one frontend and so we need to answer these questions, and we can do these a bunch of ways. So, the typical way that you would end up doing this is through some kind of, like, Node, PHP, Java, whatever. Like, you're going to have some kind of running server that, when a request comes in, is going to determine how to handle that and return the appropriate frontend files.
So, that's good, in a lot of ways. We can do things like: if billing comes in with parameters, we've got, you know, a user project or something like that, where we can pull in this URL and say, okay, when somebody hits the billing route, we want to send it to this server, here's the user ID, send me back the data I need, and it builds the CSS/JavaScript. That's good. Like, it does the job. So, what are the challenges here?
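As a bare-bones sketch of that long-running-server model, here is what a billing route could look like, with Express standing in for whatever Node layer a team actually uses; the route shape and the data helper are hypothetical.

```js
// Minimal sketch of the "running server" approach: every request triggers a
// data fetch and an HTML render on demand. Express and getBillingData are
// stand-ins, not the stack discussed in the episode.
const express = require("express");
const app = express();

// Hypothetical data call; a real app would hit the API / data layer here.
async function getBillingData(userId) {
  return { userName: `user-${userId}`, nextInvoiceDate: "2021-01-01" };
}

app.get("/billing/:userId", async (req, res) => {
  // Work that happens on every single request, for every single user:
  const data = await getBillingData(req.params.userId);
  res.send(`
    <html>
      <body>
        <h1>Billing for ${data.userName}</h1>
        <p>Next invoice: ${data.nextInvoiceDate}</p>
      </body>
    </html>
  `);
});

app.listen(3000);
```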

SHRUTI KAPOOR: Because you have access and control over the server, let's say if you want to show multiple like, for example, you want to do A/B testing, you know what user it is. You have control, you can do A/B testing. For User A, you show this and User B, you show this.

JASON LENGSTORF: And you would end up doing that through, what? Cookies or sessions or something? You can do it a bunch of ways.

SHRUTI KAPOOR: Yeah, exactly.

JASON LENGSTORF: So, the challenges to this?

SHRUTI KAPOOR: The challenge is, let's say we fix the data that we're pulling. We're always pulling a constant amount of data. We fixed our backend so our backend isn't a bottleneck anymore. We have React components, and we're going to build HTML on-demand, really. That slows down the time it takes to completely render the whole page and make the web app usable.

JASON LENGSTORF: Um, yeah. So, we're basically... instead of someone being able to say, "I would like this page," and us being like, "here's your page," it's: here's the database...

SHRUTI KAPOOR: Let me finish building this, let me finish rendering this, and now I'm ready to give it to you. Let's say we have this billing app, right? Most times, the user is requesting the same page... most users are requesting the same page over and over again. Maybe one or two fields may change, but usually, it's the same skeleton. We're building the same kind of page multiple times, for multiple users.

JASON LENGSTORF: Yeah. And that, I think, is extremely common, where you find that kind of 90% of the app is always going to be the same: the header's going to look the same, you're just going to swap out the avatar and their name. The billing pages, all of the UI is going to look the same. You're just swapping out the data in the chart and the email on the "hello, you're logged in as..." so we can

SHRUTI KAPOOR: There's maybe one personalized piece; most of it is the same content for every user.

JASON LENGSTORF: Yeah. And so the way that we can think about this... I'm going to draw this really poorly. If somebody's on their computer and they're saying, like, please give me a web page, you're going to go to a server, right? And that server is going to say, "all right. I need to get this page." So, it goes and it gets the source code. That's maybe the best thing I've ever done. And then, it's going to go get the data, right? That looks like a database. And then, once it has the data, it's going to come back to the code and it's going to actually compile that, and then that all goes back to the server, which finally goes back to the user. And so, that's a lot of work that has to be done, and we're leaving things out. We're leaving out authentication. If there is a caching layer, if there is, you know, some kind of chain of events, A/B testing, all those are additional hops that are going to happen in this chain of command. This is all just time, right?
So, if we want to fix that

SHRUTI KAPOOR: For every page, right? Even for the same user. Even if they browse a different URL, you're doing this every single time.

JASON LENGSTORF: Exactly. One way to improve this is if we start thinking about it as: what work can we do ahead of time? What work can we do once versus what work do we have to do every time? Then it starts creating a different story and that's when we start looking at the Jamstack. The Jamstack is pretty exciting in this sense. I think I've already drawn this graphic before, so rather than trying to come up with it on-the-spot, I'm going to just show this graphic. Let's see, it's down here. So, this... let me zoom in a little. This is what I was trying to draw. You've got the browser, the browser has to go to the server, the database, all that has to go to a template and turns into static assets.
When we go to the Jamstack model, we have a build step where we know the pages that are going to exist so we go to server and then from that server, the server goes to the database and API and gets the templates and then that generates a big folder of static assets. Because we know those pages don't change between requests, the browser says "I would like a page" and it immediately gets that static asset back. We know the account page, like, skeleton is always the same. We know that the marketing page is always the same, no matter who's looking at it. We know that, like, the you know, the dashboard page or marketplace pages are largely the same, no matter who's looking at them and we can generate those ahead of time.
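To illustrate the "do the work once at build time" idea, here is a toy build script, assuming a hypothetical data endpoint; a real Jamstack site would normally lean on a framework like Eleventy or Gatsby rather than hand-rolling this.

```js
// Toy build step: fetch the data once, render HTML once, write static files.
// The CDN then serves these files as-is; no per-request work remains.
// The data URL and page shape are hypothetical.
const fs = require("fs");
const fetch = require("node-fetch"); // node-fetch@2 style require

async function build() {
  const res = await fetch("https://example.com/api/docs-pages"); // hypothetical
  const pages = await res.json(); // e.g. [{ slug: "intro", title: "...", body: "..." }]

  fs.mkdirSync("dist", { recursive: true });
  for (const page of pages) {
    const html = `<html><body><h1>${page.title}</h1>${page.body}</body></html>`;
    fs.writeFileSync(`dist/${page.slug}.html`, html);
  }
}

build();
```

Everything in `dist` is a plain file, so the browser's request never waits on a template or a database.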

SHRUTI KAPOOR: Another example is documentation pages. They never change.

JASON LENGSTORF: Yes. Yes.

SHRUTI KAPOOR: They should change. They never change.

JASON LENGSTORF: Yeah. Yes. Absolutely. But, so, what I think is... oh, actually, there's a good question in chat. Let me read this out loud. So, in a way, this is shifting some of the cost of computation from the company servers to the user devices? I actually think I would disagree with that. I think that this other way, here, where we've got the whole app as a single-page app, where everything is living in the client, that can, but what we're trying to get away from is doing duplicate work at all. Like, if you use the Jamstack approach, what we're trying to get to is, like, here is some HTML. It's not work. You're not going to, like, download this and then do a bunch of processing on your device. You're just going to download it and look at it, and that's what makes it so fast. That's what makes the average Eleventy site really fast. If we look at the Eleventy site, this is not being built by the browser, this is just HTML. If we look at the source, we can see, like, this, that's here, this is what was generated and this is what was stored on the browser. So, we don't need to, like... this is... well, it's all compressed so that's not going to be useful. This code is what was generated and this code is what we see. We're not building this at request time. We're not building it on the client side. This is just a static asset we're downloading. It's the same as if I open up a text editor and type in an H1 "Hello World."

SHRUTI KAPOOR: You're getting this asset from somebody who's already done this computation ahead of time on the server so you're just getting that prebuilt page for you.

JASON LENGSTORF: Yeah, exactly that. And so, you know, when you look at this sort of thing... a follow-up question: do the benefits of improved caching outweigh that? Sort of. I mean, caching is an extremely hard problem. At IBM, when we were trying to solve this problem originally, we were trying to solve it by putting a Varnish cache in front of a Redis cache and it would work. Then it wouldn't. Like, it would work and then something would get out of sync and then we had no idea why, and so we're invalidating the cache, and it led to so many headaches and tangles that I don't think it was giving us the benefits we were hoping for. Whereas with the Jamstack, we're eliminating pieces and eliminating moving parts so that you're taking a rendered asset and putting it on a content delivery network, and if you're putting it on, you know... Netlify has a good CDN that's multi-cloud; it has AWS in it. All of those are very specially designed to heavily cache with very good invalidation so you don't end up with things disconnected. For that reason, this approach is going to be... I hesitate to say "superior." I will say less prone to keep you up all night because you can't figure out what's wrong.
And to me, like, for what I'm optimizing for, in my life, that is a superior approach. But, yeah, so I guess Shruti, what do you think about this?

SHRUTI KAPOOR: At PayPal scale, there are custom things we're doing in the Node server for the web app that is here. And, what works well is that when a user navigates to a page, we can deliver that page to them. We can deliver the shell and, with the help of JavaScript, we can make that page or shell dynamic, so we can give this dynamic app experience to the user. When it starts breaking is when we have this custom logic that we have for the purpose of logging, authentication, A/B testing. One of the things I found really hard, and I still don't have a solution for, is how are we going to do A/B testing in this approach? We do that in enterprise quite often, so that's a common use case.

JASON LENGSTORF: With something like A/B testing, there are ways to do that on the client-side, with Optimizely. In some cases, it can be awesome or a performance drag. Phil has great content on this. Here's one. Some really good content on how to do this, where you're basically able to... Netlify's kind of built for this. You can just do it by setting up two branches and it will handle the cookies and sessions so people see the same split, based on what they're doing. So, there are ways... like, you can use Netlify for that specifically, but you can also do it through your CDN by setting some sessions and doing some routing that way, where you can basically look at somebody coming in, you can tag them and use your routing layer to proxy over /account, check for the cookie. If it has one, you would route them to the version associated with that, like Account A or Account B. And then you would assign them one or the other. It's very similar to the way you'd do it on a server.
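A generic sketch of that cookie-and-routing idea follows. This is not Netlify's actual implementation, just the pattern described here: tag a visitor once, then keep sending them to the same deployed branch.

```js
// Sticky, cookie-based split testing at the routing layer (illustrative only).
const http = require("http");

function getBucket(cookieHeader) {
  const match = /ab_bucket=(branch-[ab])/.exec(cookieHeader || "");
  return match ? match[1] : null;
}

http
  .createServer((req, res) => {
    let bucket = getBucket(req.headers.cookie);

    if (!bucket) {
      // First visit: assign a bucket and make the split sticky for 30 days.
      bucket = Math.random() < 0.5 ? "branch-a" : "branch-b";
      res.setHeader("Set-Cookie", `ab_bucket=${bucket}; Path=/; Max-Age=2592000`);
    }

    // A real edge/CDN layer would proxy to the matching branch deploy here;
    // this sketch just reports which variant the visitor would get.
    res.end(`You are in ${bucket}`);
  })
  .listen(8080);
```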

SHRUTI KAPOOR: This is in Netlify?

JASON LENGSTORF: Yeah, we have split-testing built into Netlify. You get two branches and you're like, I would like to see what these two branches look like, side-by-side, and test performance. That part is really, really nice. Yeah, if you want to use it... it's not a strictly-Netlify thing. It is a thing you can build in by working on your routing, basically.

SHRUTI KAPOOR: So it's probably picking a cookie from the browser and as part of your API, you're sending that cookie over to whichever layer does the actual A/B testing for you? Is that right?

JASON LENGSTORF: I can't remember exactly how it works but we send something in the headers, for the request, to sort people properly.

SHRUTI KAPOOR: So, I think for our purpose, what we could do, we could pick that cookie from the HTML, from the browser, and send it over as part of our API and then we have this inbuilt custom tooling that we use for A/B test. If that can pick up that cookie, then basically we're done. The middleware or server can take care of the rest of the logic for us. One thing to note here is we'll have to still render some sort of shell for the user even before we have our A/B experience, something that is uniform.

JASON LENGSTORF: You could go towards a serverless functions approach. It would be effectively the same thing. You're going to get a request in and those headers will come in and you can determine who somebody is and send off a call to a serverless function and include that header. You can serve them an empty shell with a skeleton state that says "we're loading" and based on what information goes back, you can return whichever side of the split test they're supposed to get and render appropriately.
There are interesting client-side libraries I've seen where you can conditionally load React components, where you're basically saying, I don't want to load all of the code for all of my A/B tests. I want to check for feature flags and I'll conditionally load this particular component and that is the split test, basically.
So, this is one of those things where I think there are a lot of ways to solve that problem and it's very much going to depend on exactly what you're trying to do. But I've seen it done at multiple scales. We've got clients doing it, you know, in the completely client-side realm. People are doing it solely based off deployed branches. There's ways to solve it, for sure.
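Here is a small sketch of that client-side, conditionally-loaded-component approach, using React.lazy with a stand-in feature-flag check; the flag name and component paths are hypothetical.

```jsx
// Only download the code for the variant this visitor is actually in.
import React, { Suspense, lazy } from "react";

// Stand-in flag check; a real app would ask its experiments/flags service.
function getFeatureFlag(name) {
  return document.cookie.includes(`${name}=on`);
}

const BillingPanel = lazy(() =>
  getFeatureFlag("new-billing-ui")
    ? import("./BillingPanelNew") // variant B bundle, loaded only if flagged in
    : import("./BillingPanelOld") // variant A bundle
);

export function Billing() {
  return (
    <Suspense fallback={<p>Loading...</p>}>
      <BillingPanel />
    </Suspense>
  );
}
```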

SHRUTI KAPOOR: Cool. Awesome. And then another challenge I was thinking of was, at the enterprise level, we have all these microservices, so how do we pick which microservice goes first, which should pick up Jamstack first? Jamstack is not really one specific technology, right? It's this way of thinking, this set of best practices, which is so important. Right? Often we think Jamstack means we need to introduce Gatsby or Netlify. It's a set of best practices we can do.

JASON LENGSTORF: I would equate Jamstack closer to microservices than a particular tech stack. It's a way of thinking about organizing things, and to me, the way GraphQL creates a clean separation between business logic and frontend data, the Jamstack provides a similar, clean separation between presentation and everything else, and you can kind of make your mind up about how you want to manage that "everything else," because you could set it up as Node or serverless functions and ad hoc things. It really doesn't matter.
Or you can set it all up to be client-side and it makes a call to your GraphQL. But the thing that's important is you're eliminating the need for a long-running server, and the reason for that is we want to get away from the ability for someone to DDoS our website. And, the power of this really is... do you remember when Popeyes launched that chicken sandwich and people lost their minds? They built their site as a Jamstack site and even when the whole internet was trying to go to their site, they never had a server outage. Hundreds of thousands, or millions, of people are trying to look at this website at the same time. The best bad news you could get is your site got so popular, it got taken down. We're so popular, you can't use our server right now. The Jamstack eliminates that problem. Because everything is served from a CDN, you can't overload a server and take it down. I'm sure you can DDoS it, but it is hard. It would take a really serious effort to take down a CDN just by capacity. So I think that that architecture, that's why you do it. And the "how" you do it, what stack you're using, those things don't matter as much. The important thing is the architectural part, and the infrastructure won't fall over because you've generated everything to a static asset and there's no work to be done.

SHRUTI KAPOOR: Yeah, that's powerful, right?

JASON LENGSTORF: Yeah. So, I mean digging into that a little bit, because when you start talking about static assets, that potentially opens up this other thing which is, okay, if it's static, how do we do dynamic stuff? How do we get that user data? How do we make those extra calls?

SHRUTI KAPOOR: And I think Jamstack actually addresses that question really well on their main website, which is: I need to do dynamic stuff, how do I do it? I think it's on Jamstack's website, itself.

JASON LENGSTORF: Yeah. Where is it? I can't remember where this is. Yeah, there are some really good examples here, on this website, which I just dropped in the chat. I can't remember where that exact thing is. It's somewhere.

SHRUTI KAPOOR: Was it on Gatsby?

JASON LENGSTORF: They've got a lot of good information, gatsbyjs.org. We talked about the serverless functions thing. One of the reasons I like serverless functions is because, like, you're going to need a server for dynamic stuff. I'm going to need to send off a request that knows who the user is, and it has to be dynamic, and I need to send an auth token that says, hi, I'm a user, I have permission to see the thing that I'm asking for. And we can't statically generate that. That would be a security nightmare, to generate millions of pages with peoples' information. We shouldn't do that. Instead, we could stand up, like, a Node server and that Node server doesn't serve any files. It's just an API. When I make requests to it, I'm going to say, kind of what we were talking about before with microservices: I want the user information. Let me have that, please. Here's my authentication token.
The risk with that is you're letting someone know where the database is, and one of the big benefits of the Jamstack is it's really secure, by default, because there is no server. You can't hack the server of a Jamstack site, you can only deface the generated asset, which can be fixed. You just upload it again. Okay. You're a jerk, but we changed that password and we re-uploaded the site. Calling an API directly means you are showing someone where that data is. We have data, and if you hammer on that, you might be able to get to the user data.
If you use a serverless function, it is like "mysite.com/api/givemedata" and under the hood, it goes off to your data source, but that's never exposed to somebody. They can't find it in the network trace because the serverless functions live somewhere else, so you're able to make a secret call. It can still have authentication and it keeps it out of client-side code. If you need an API key, that API key lives in the serverless function, and then your site has a user that sends an authorization token, and you can use those two things to say, all right, I have a logged in user, and this is an authorized use of this API, so I'm going to send both of those things in. Someone couldn't take their JSON token out. There's a little bit of security that way.

SHRUTI KAPOOR: Yeah, that's exactly what I was going to ask: Where does the serverless function live?

JASON LENGSTORF: Thank you, Julie.
A serverless function is going to live in a number of places, right? So, like, Netlify has it kind of built in. Netlify Functions are... you just create a folder called "functions" in your site and Netlify will automatically deploy them and then they'll kind of set up the proxy and all that stuff. This is a Netlify function. And it's... oh, that's kind of cool. You can do a split test kind of thing. You export a handler and that handler returns a status code, and in between, it's a Node app. You can send off calls to other APIs, you can process other things. So that all just lives in this "functions" folder and we'll automatically deploy it and it lives at yoursite.com/.netlify/functions/your-function. But you can proxy that to be "/api" and that can all be done in, like, one line of code. What's really nice about this is it's effectively an abstraction over AWS Lambda. You do the same thing. You write a function like this. And it's a handler. And then you would deploy it to AWS Lambda and set up the routing for it. I think you'd need Route 53 and an API Gateway, and then that's kind of your deployment pipeline.
The Serverless Framework is good for that: you write a little bit of YAML and it automates the deployment. Azure has functions. I think Google Cloud has functions. There are independent players; Netlify is an abstraction over AWS Lambda. In general, like, all of them are going to be a collection of JavaScript functions, or you can write them in Go. I think you can write them in almost any language; Netlify supports JavaScript and Go. So, you write your collection of functions. You figure out where you're going to deploy them and you basically just have this collection of endpoints. It's kind of like standing up a single REST endpoint at a time, effectively. You can, if you want to, use GraphQL with it, you can use REST, you can use direct database connections. You can do whatever you want.
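For reference, the handler shape being described looks like this, as a minimal example; the file name and query parameter are just placeholders.

```js
// functions/hello.js
// Drop this file in the configured functions folder; Netlify deploys it and
// serves it at /.netlify/functions/hello (which can be proxied to /api/hello
// with a redirect rule).
exports.handler = async (event) => {
  const name = (event.queryStringParameters || {}).name || "world";

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```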

SHRUTI KAPOOR: So the way this flow would work is that from your HTML page, you would call the serverless function and the serverless function then figures out which API to call and has access to the database through an API layer?

JASON LENGSTORF: Yeah. So, let me show really quickly... I'm actually... this is exciting we're talking about this because I'm doing a Frontend Masters course on this. So, this is a sneak peek of tomorrow's workshop. This is what a serverless function looks like if you want to load movies from... I'm using Hasura. I have a utility function that uses node-fetch to send off a query to Hasura. It sends off my Hasura secret key to this GraphQL endpoint. I use that, here, to say, here's my GraphQL query. I want the data about the movies. And once I get that, I make a call to the OMDb API and I pull in additional information, so I pull off the ratings, their score for Metacritic and Rotten Tomatoes. I'm able to make a request to this data. I make a fetch call to the function. I make a call to this "movies" function. When I get it back, I loop through and put this data into the container. Right?
And so, I had this pulled up... I think I had this pulled up. This is... is it? This is the workshop. It's going to be live tomorrow if you want to watch it. This is what that page actually looks like. So, it's pulling the movie data and this is the data from the OMDb API. You could load this ahead of time, if you wanted to. But that calls off to a serverless function and then it gets that data back, right?
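A rough reconstruction of the pattern described here, not the actual workshop code, might look like the following; the environment variable names, the movies table fields, and the exact OMDb parameters are assumptions.

```js
// functions/movies.js
// Query Hasura with a server-side secret, then enrich each movie with ratings
// from the OMDb API. Sketch only; env var names and fields are assumptions.
const fetch = require("node-fetch"); // node-fetch@2 style require

async function queryHasura(query) {
  const res = await fetch(process.env.HASURA_GRAPHQL_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The secret never ships to the browser; it only exists in the function.
      "x-hasura-admin-secret": process.env.HASURA_ADMIN_SECRET,
    },
    body: JSON.stringify({ query }),
  });
  return (await res.json()).data;
}

exports.handler = async () => {
  const data = await queryHasura(`{ movies { id title imdbID } }`);

  const movies = await Promise.all(
    data.movies.map(async (movie) => {
      const omdb = await fetch(
        `https://www.omdbapi.com/?apikey=${process.env.OMDB_API_KEY}&i=${movie.imdbID}`
      ).then((r) => r.json());
      return { ...movie, ratings: omdb.Ratings };
    })
  );

  return { statusCode: 200, body: JSON.stringify(movies) };
};
```

The browser only ever calls the function's URL; the Hasura and OMDb keys stay server-side.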

SHRUTI KAPOOR: (Indiscernible) booper?

JASON LENGSTORF: I'm unreasonably proud of how much time I put into this. I'm very proud of myself.

SHRUTI KAPOOR: Which one is your most popular movie?

JASON LENGSTORF: I mean, this is my favorite. But I think the most popular movie... I think this one's the best-rated "in the boop." This one's got the best ratings. So, don't make fun of my bee, I tried my best. I got close, right? So, all of that is powered by these serverless functions and you can do the same thing to, like, add movies. Part of what we'll do, in this app, is we'll have the ability to... I'm not going to log in... we'll have the ability to create new movies and it's all protected by password stuff, so we're able to pull in, like, that same Hasura utility to send off a mutation and we create a new movie, but we're able to grab that data from the client side. We pull the user and check that they're an admin and if they're not, we can bounce them back and say, like, no, you're not authorized to do this thing. If they are authorized, then we let them do the thing and send it back. This is the whole app. There's no other server running. It just consolidates it all down to: I'm just writing logic, right? It's just this folder and I tell Netlify to deploy it. That's, like, all the config that I need to get my serverless functions up and running. It eliminates overhead for me.

SHRUTI KAPOOR: You moved it out into smaller functions. That's super handy. Super cool. Super nimble, as well.

JASON LENGSTORF: Each thing is this very controlled story. So, instead of this sprawl where the logic kind of spreads out and gets its tendrils into different UIs and you're depending on it for different things, it's just one function and each function is self-contained, outside of this shared utility. You can make any architecture bad. If you put effort into it, you can make it hard to maintain. But I think the major benefit, here, is because there's so little boilerplate, it's easier to keep things contextually grouped and separate enough.

SHRUTI KAPOOR: And those are very common use cases. We'll have our frontend form. We are letting the user fill out the data and then we are collecting that data on our server and sending it over to the API. Now we have access to the API. We can use the serverless function to just deploy that piece of code. Keep it very light, as well.

JASON LENGSTORF: Exactly. Exactly. So, that, I think, is why I think it's so exciting, because when you get into the enterprise, the hardest part about building enterprise apps, the hardest part for me, is not building the software, it's keeping everybody who's working on the software aligned. Most of the time I spent wasn't on making the thing work, it was on trying to talk to everybody else to make sure we weren't accidentally breaking each other's stuff. What's exciting about this approach is it creates better boundaries, so it's less likely that I can accidentally break your code by making a change to mine. That is a powerful thing, especially when you're looking at scale.
If I'm working with a team in China, we have an hour of overlap in the day. That's not going to work if we have to collaborate heavily. We need to be able to hand off, and talk to each other, but not necessarily be like, I'm going to flip this switch and you flip that switch. We don't have time for that. We'll never get anything done.

SHRUTI KAPOOR: Because these are separate, I can't break your app if I deploy something.

JASON LENGSTORF: Uh-huh. Exactly. The other thing that's nice about this, too, the Jamstack approach... let's pull up, like, a site here. The thing that's really exciting about this: if we go in here, I can look at my deploys, and if this one is broken, I can roll it back to the previous deploy. If we accidentally break something, it's not like we have to untangle the server code. We're like, okay, let's just put the old one back, figure out what went wrong, redeploy again, and the fix will roll out. It's very easy on Netlify, but it's not a Netlify-specific thing. It's a Jamstack thing. If you are rendering to static assets, every deployment of your site is a folder. Each deployment creates a new folder. You throw those folders up. If you keep the last, you know, whatever number of deploys on hand, if something goes wrong, you flip back to the previous deploy and you're happy.

SHRUTI KAPOOR: Yep.

JASON LENGSTORF: Um, okay. So, we've got, like, five minutes left. Were there any questions you wanted to dig into that we didn't get a chance to?

SHRUTI KAPOOR: Let's see... I have my list of handy questions here. So, we talked about doing an A/B test.

JASON LENGSTORF: Thank you, Chad.

SHRUTI KAPOOR: We talked about moving customers over to serverless. One of the things we're working on, at PayPal, as well, is to make the deployment to CDN easier. And I know Netlify does it really well, so tell me a little bit about how Netlify is moving assets to the CDN with Git deployments?

JASON LENGSTORF: When you push to Git, we listen. We have build plugins if you want to do visual diffing or integration tests. Once we get a success, we will upload to our CDN, and we have a multi-cloud CDN, so we've got a bunch of stuff in the background going on. Once we upload the files, it gets a unique URL which is based on the Git commit. Let's look at one of these commits here. I have my production branch, but then I have, like, this older branch. I can click "preview." This is the previous instance of this app that I built. If I want to go to the live version, I can go to, like, the top level... this is the actual live site. And so, each one is an independent thing, and then once we know that the upload is successful, we just update the routing to say "instead of production being this deploy, now make production this deploy." The sites are always live unless you specifically take them down and you're able to route between them.
You can put that behind a firewall. But that way, all the sites are always there and you can, you know, do really interesting comparisons. You can do, like, performance tracking over time. You can make sure your sites are actually faster and all those things, and that's handled because, you know, you're building into a folder. You're effectively uploading the folder to a CDN, the same way you used to drag a folder into the FTP client.

SHRUTI KAPOOR: I've done that. When I was just starting out, 6-7 years ago, so not that long ago, with WordPress.

JASON LENGSTORF: Yeah, I did WordPress stuff in 2010-ish. We were still using it for that. What I like about this is it feels as simple as when I was doing that sort of work, when I just had a website and my FTP folder and I'd drag the new files in and it would go live. And this is the same idea. I push changes to Git and whatever goes up to Git ends up live on the website, and, you know, you can put controls in place for that. I love the simplicity of that. It moves us back to this idea of continuous deployment. And not continuous deployment because of this really elaborate dev process, but because things have been removed.
I would love to keep talking about this all day. Shruti, do you want to give us one more dev joke and then we'll get out of here?

SHRUTI KAPOOR: I'll give you two. What did JavaScript call his son?

JASON LENGSTORF: I don't know, what did JavaScript call his son?

SHRUTI KAPOOR: JSON!
Okay. This one, I really like. What airline do developers prefer when they are in a rush?

JASON LENGSTORF: What airline do developers prefer when they're in a rush? I don't know.

SHRUTI KAPOOR: Does the chat have any guesses?

JASON LENGSTORF: What do you think, chat? I don't think they do.

SHRUTI KAPOOR: Delta. Okay. We'll do one more.

JASON LENGSTORF: Okay. All right. One more.

SHRUTI KAPOOR: How long does a loop last?

JASON LENGSTORF: How long does a loop last?

SHRUTI KAPOOR: For a while.

JASON LENGSTORF: Okay. Yep. That was excellent. Shruti, always a pleasure having you on. Thank you so much for coming on and talking to me today. Chat, if you want, you can see our notes here. And, we'll make sure that all the links that we shared today get published in the show notes. If this is your first time, thank you for coming and hanging out. Definitely go and check out the schedule. We have a whole collection of amazing episodes coming up and a bunch that I haven't had a chance to put on the schedule. Later this week, we have David East. Please, you know, add it to Google Calendar. We'll see you next time.
Shruti, thank you so, so much for coming on today.

SHRUTI KAPOOR: Thanks so much for having me. This was so much fun. I love the "boops."

JASON LENGSTORF: Thank you to White Coat Captioning and Netlify, Fauna, Sanity and Auth0.
We'll see you next time. Thanks, everybody.

SHRUTI KAPOOR: Bye.
