Catch Visual Bugs Before They Ship

There’s nothing more frustrating than learning part of your app looks funky from a user screenshot. In this episode, Chris Kalmar will teach us how holistic visual regression testing catches problems BEFORE they ship.

Full Transcript


Captions provided by White Coat Captioning. Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

JASON: Hello, everyone, and welcome to another episode of Learn With Jason. Today on the show I have Chris Kalmar. How are you doing?

CHRIS: Fine, thanks. Thanks for having me on the show.

JASON: Excited to have you here. I think today is going to be a lot of fun. Really looking forward to talking all the ways we can save ourselves from pain and embarrassment, but before we talk about that, let's talk about you. Do you want to give us a bit of a background?

CHRIS: Sure. My name's Chris. I'm a software engineer, like many of you. I've been doing this for more than two decades now. Yeah, doing everything from frontend to backend, I love full stack, doing different things. At the moment I'm working as a contractor, and, yeah, on the side I'm building the project we'll be talking about: Lost Pixel.

JASON: Uh huh, yeah. There are a million cool things we can talk about, but let's start with level setting. The whole reason that I was excited to talk to you about this is that a category of pain I've experienced many times as a frontend developer is I'll ship my site, I'll go to dinner, or shut down for the weekend, and then I find in my email or in my DMs somebody has sent a screenshot of my site and it just looks totally jacked up. Like I screwed something up, and, you know, the heading is squished to one side, or I blew out horizontal scrolling and somebody is showing my site halfway scrolled across the page. And it's always because I did something small, something silly, and it's really fast to fix, but it's embarrassing when I get told about it from someone visiting the site and not seeing it myself. We've all had this experience. So, Chris, when we're looking at this sort of pain, there's a style of testing that lets us avoid this, right, like we can catch it before we ship. So do you want to talk a little bit about what that's called and how it works?

CHRIS: I would like to take a step back in general and talk about tests and what kind of tests there are, right.

JASON: Sure, yeah.

CHRIS: Different types of tests that help us to catch things on different levels. And I think many developers, especially in the beginning, see tests as a chore. Something that's extra work, you have to write extra code and so on, and you don't get to build the feature you would like to. This is where a huge misconception is, because as a developer, if you don't have it in yourself already, you'll learn that everything you can automate, you will automate, because you don't want to spend the time doing the same things over and over again, like testing things that can be automated. There's this... there's the testing pyramid, right, and then there's the testing trophy. It has a different shape, right. And it has multiple levels. So, at the bottom you have static tests. Not tests, per se, but static type checking. If your language supports it, awesome. Then you have it for free. Sometimes it's not there. For example, in JavaScript, something that was missing was brought by TypeScript. Right, with TypeScript we have a good type system there, and it removes a huge amount of problems out of the box already, just because you're using TypeScript. For things like when you have to call a function that is, for example, from a library that you didn't write, or even if you wrote it yourself, after three months you've already forgotten the signature of the function. And it's really hard to spot those errors. Everything looks good, right. It makes sense what you would type in there, but something changed. Somebody changed it. Maybe you, maybe somebody else. It's impossible to track all of that down. And with TypeScript you have this type safety that whenever you change something, it will immediately reflect everywhere that you have to change it. It is perfect for refactoring your code, because you can immediately understand where you need to fix all those issues. 
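The class of bug Chris is describing, where a function's signature or inputs change and every call site silently goes stale, is exactly what the compiler can surface. A minimal illustrative sketch (the names here are invented for the example, not from the episode):

```typescript
// A minimal sketch of the kind of bug TypeScript catches at compile time.
// The type and function names are illustrative.

type Status = "active" | "trialing" | "cancelled";

function statusLabel(status: Status): string {
  switch (status) {
    case "active":
      return "Active";
    case "trialing":
      return "Trial";
    case "cancelled":
      return "Cancelled";
    default: {
      // If someone later adds a member to Status, this assignment stops
      // compiling, pointing at every switch that must be updated.
      const unreachable: never = status;
      throw new Error(`Unhandled status: ${unreachable}`);
    }
  }
}
```

If a fourth status is added to the union later, the `never` assignment in the default branch fails to compile until every switch like this one is updated, which is the "it will immediately reflect everywhere" effect Chris mentions.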
So, this is the first layer that you get for free if you use a type-safe language, right. Then the next layer is something that you'll have to do more actively. Unit tests. In general, you have unit tests, integration tests, and so on. And the lowest level is a unit test. A unit test is something where you can look at a specific function, right, and test it, make sure all the edge cases are covered. It is your job as a developer to understand how things could go wrong. It's not as much about having 100% code coverage, trying to test everything in detail. You need to find out what makes sense, what is really crucial for the application to work. So, for example, if a label doesn't show up on a button, that could be crucial, because it's important functionality in the application, and now the button has no label. Or is the button completely gone? It depends what you want to test. For example, an email that couldn't be sent out, well, too bad. Maybe it's not so horrible, so maybe you don't need a test for that. But it really depends on the part of the application, whether it's crucial or not. You want to have good unit tests at those levels where, if something breaks, it will be really bad, right. You have to look at it like that. So, you will have some sort of coverage there. And when developers start out and have to write their first unit tests, there are different approaches. You have test-driven development, or you write the tests afterwards. This is something that I don't want to get into, because everybody is different about that. But I think it makes sense to start writing tests in general when you have some sort of stability in mind, when you're out of the prototyping phase and you know where you're going with this. Starting with tests right away will just cause you a lot of pain, no matter if you're talking about unit tests or end-to-end tests or visual regression tests. 
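Chris's "button with no label" example can be turned into a tiny, framework-free unit test over the edge cases that would actually hurt in production. Everything below is a hypothetical sketch, not code from the show; in a real project these assertions would live in a runner like Jest or Vitest:

```typescript
// Hypothetical function under test: produce the label for a button,
// never letting an empty or missing label reach the UI.
function buttonLabel(label: string | null | undefined): string {
  const trimmed = (label ?? "").trim();
  // Falling back to a generic label beats shipping a blank button.
  return trimmed.length > 0 ? trimmed : "Submit";
}

// Edge-case table: [input, expected]
const cases: Array<[string | null | undefined, string]> = [
  ["Checkout", "Checkout"], // happy path
  ["  Checkout  ", "Checkout"], // stray whitespace
  ["", "Submit"], // empty string: the "button with no label" bug
  [null, "Submit"], // missing data from an API
  [undefined, "Submit"],
];

for (const [input, expected] of cases) {
  if (buttonLabel(input) !== expected) {
    throw new Error(`buttonLabel(${JSON.stringify(input)}) !== ${expected}`);
  }
}
```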
You need to have at least something there where you know that you won't spend more time rewriting tests than changing the code. But, again, this is something where every developer, every engineer, will tell you something different, because this is the experience that everybody has. But once we have those unit tests, which look at very small portions of your application, or a very detailed part of your application, you still need to do more, because unit tests will not cover the application as a whole piece, right. So you move forward to integration tests, where you basically test more steps in a row. For example, an API call and doing something with it... so, there's a fluid line between integration tests and end-to-end tests, right. This is really something where the definition gets a bit fluid. But in the end, you don't look anymore at small pieces, you try to look at the high-level picture of things working in general. Like you could have a test where you put something into a cart, you start the order process, you check out and so on. And this is a whole piece, the whole thing needs to work. You don't care at the moment if the text might be wrong on some of the pages, right, or the output could be wrong. But you want to see that the whole process works in general. You have unit tests to look at the more specific parts. So on the one side we have unit tests, very detailed. With integration tests, we're trying to see if a process flow works, and then we move forward to where we want an even better understanding of what the user sees and whether the application works. So, with things like Cypress, for example, you test your user interface. Either you have a mock backend or you're using an integration backend, and you want to test your user interface. Things a user would see. So, you go to the shopping list, you select an article, put it in the cart, you go through the checkout process, and so on and so on. 
For all those steps you want to verify that at the right moment, the right buttons are there, some text shows up, a warning shows up, or an error is displayed. And this is where you use things like Cypress or Playwright to automate the process a user would go through. We have tested small pieces of the functionality, whether APIs are being called the right way, and so on. And now we go to the next level, to the top of the trophy, which not many companies are doing yet, but many are starting to do: visual regression testing.
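The cart-to-checkout flow Chris describes is the shape of an integration test: several steps in a row, asserting only on the outcome. A self-contained sketch against an in-memory cart (a stand-in for illustration, since the real test would drive your actual UI or API):

```typescript
// A stand-in cart, so the multi-step flow can run without a browser.
class Cart {
  private items: { sku: string; price: number }[] = [];
  private checkedOut = false;

  add(sku: string, price: number): void {
    if (this.checkedOut) throw new Error("Cart already checked out");
    this.items.push({ sku, price });
  }

  checkout(): { total: number; itemCount: number } {
    if (this.items.length === 0) throw new Error("Cannot check out empty cart");
    this.checkedOut = true;
    const total = this.items.reduce((sum, i) => sum + i.price, 0);
    return { total, itemCount: this.items.length };
  }
}

// The "test" walks the whole flow and only checks the end result --
// it doesn't care how each step is implemented internally.
const cart = new Cart();
cart.add("coffee", 12);
cart.add("mug", 8);
const receipt = cart.checkout();
```

In a real codebase the same flow would be driven through Cypress or Playwright against the rendered page, but the assertion style is the same: one pass through the whole process, checked at the end.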

JASON: You're putting another layer on the testing trophy here.

CHRIS: Exactly. You could still put it in the end-to-end category, but it's its own category, it's a category for itself. And here you want to verify that what you think the user sees, the user actually sees, right. I saw, for example, Cypress tests where, instead of trying to find a checkout button by the text that says checkout... a lot of times you will find Cypress tests where those buttons get test IDs, right, a data attribute or test ID, so it's easier to target them. You can tell Cypress to find the checkout button by its ID and click it. But maybe the text is different, maybe there is no text, maybe the button is not even visible. How would you find out? You would have to actually implement that check in Cypress, like checking whether the button is visible or not. I don't think anybody would go that far. So this is where you can have your visual regression test, where you take screenshots of the application in a given state and make sure the next time you run it, the visual is still the same. So, a visual regression test is much like a unit test, or even an integration test in some sense. If I talk about JavaScript and TypeScript, most of the unit tests are written with Jest. And Jest has this functionality, which has been super convenient for many years, called toMatchSnapshot.
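Jest's `toMatchSnapshot` boils down to a simple mechanic: record a serialized baseline on the first run, then fail when later runs drift from it. A toy in-memory version to show the idea (real snapshot files live on disk and get committed alongside the tests):

```typescript
// In-memory stand-in for the snapshot files a real runner keeps on disk.
const snapshots = new Map<string, string>();

function matchSnapshot(name: string, value: unknown): boolean {
  const serialized = JSON.stringify(value, null, 2);
  const stored = snapshots.get(name);
  if (stored === undefined) {
    snapshots.set(name, serialized); // first run: record the baseline
    return true;
  }
  return stored === serialized; // later runs: compare against baseline
}

// First run records, an identical rerun passes, a changed value fails.
matchSnapshot("button", { label: "Checkout", visible: true });
const stillSame = matchSnapshot("button", { label: "Checkout", visible: true });
const changed = matchSnapshot("button", { label: "", visible: true });
```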

JASON: Snapshot testing.

CHRIS: We are working a lot with JSON structures, and not only JSON structures. If you look at React, React renders a DOM tree, a tree structure, basically, which you could also snapshot and see if the structure is still the same within tests. And in the same way, basically, you have the same thing with visual regression tests, because you take a picture of a given state, you change code, you run the test again, take another picture, and then you compare those two snapshots, which are not JSON, not structures, but pixels, right, and pixels are easy to compare. So, you can find out what the difference is between those two images. And this will tell you things that would be hard to find out in your integration tests. It's possible, you could achieve it with a lot of work by checking positions of elements on the page, but I don't think that would make sense.
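The pixel comparison Chris mentions can be sketched in a few lines. Real tools compare full RGBA images with tolerances (Lost Pixel and similar tools build on image-diff libraries); this toy version just counts differing values in two grayscale pixel arrays and reports the changed ratio:

```typescript
// Toy pixel diff: returns the fraction of pixels that differ.
function diffRatio(baseline: number[], candidate: number[]): number {
  if (baseline.length !== candidate.length) return 1; // size change: treat as fully different
  let changedPixels = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== candidate[i]) changedPixels++;
  }
  return changedPixels / baseline.length;
}

const base = [0, 0, 255, 255];
const shifted = [0, 255, 255, 255]; // one of four pixels changed
const ratio = diffRatio(base, shifted); // 0.25
```

A threshold check is then just `ratio <= 0.05` for a page that tolerates 5% drift, or `ratio === 0` for a pixel-perfect page.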

JASON: And it feels like the sort of thing that, you know, each of these tests serves a different purpose. When you're talking about static testing, you want to make sure that when you define a data structure, that data structure is being used in the way it was defined to be used in your app. Nobody is trying to access properties that don't exist, nobody is trying to turn strings into numbers, or whatever the thing is, right. And then when you get into an integration test, the integration test is: when I click this button, this thing should happen. Or when I, you know, submit the newsletter, it should take me to the confirmation page. Stuff where you want to make sure things work, or, like, when I call this API, I get back this type of response. And I think I just sort of bled into end-to-end tests there when I was thinking about the newsletter sign-up flow, where the user arrives at the homepage, enters their name and email, and is taken to a confirmation page. You want to make sure those steps work. With visual regression testing, it's: look, we spent a lot of time and effort getting this brand designed and building out this CSS, and making sure that we have the different viewports handled, and all that stuff. And we want to make sure that continues to be the case. Like, we didn't put all this effort into making a beautiful brand for us to ship something where we forgot to close a tag, and now the whole homepage is just janked up. So, to me, visual regression testing fills this gap of: I know I can verify the logic in my app works, but it's very, very hard with logical tests to verify that the visuals are still intact. Because everything can be logically correct, every one of those integration tests could pass, even if my site looked like absolute garbage. You could forget the style sheet altogether, and your integration tests are still going to work. 
So, it's one of those things that helps round out your testing, so you're really working on something that you feel confident in, in all the ways. I feel confident that it works, I feel confident that it looks right, I feel confident that my team is holding the code correctly. And it all leads to higher confidence. I think a lot of times what people think about testing is that it's there to add rules or to enforce things within a team. In my experience, though, when you've got a really well-tested codebase, it's not about rules, it's about confidence. I know when the tests are good that anyone on the team can ship and doesn't need a strict manual review, because the tests are going to catch the critical stuff. We'll know if something went wrong. You know, if you're using something where you have instant rollbacks, then you're really in good shape, because now your risk in trying stuff is so low. You don't need to spend a bunch of internal administrative time on playing code cop, or doing a bunch of manual testing, because you've got the automated testing to catch the most important bits, and you've got the rollbacks if something does go wrong. So you can just kind of do a one-touch review. I looked at it, it looks right to me, let's give it a shot. If something goes wrong, cool, we click the button, we roll back. But you can be so much more confident when you've got a good testing structure in place. I think that gets overlooked, because we get kind of bogged down with so many rules: I have to make so many tests, I have to do all this stuff, so much compliance. But that compliance creates confidence when it's actually useful. You have to require the right kinds of tests, but it does make a big difference in a team culture.

CHRIS: This is very important, especially in a team, to have some ground rules there. It used to be, many, many years ago, that we would have some sort of an agreement: this is how we style code, more or less, don't go crazy in every direction, so it's still readable. And then some tools started to appear, like ESLint and so on. These days you use Prettier most of the time, or other tools that will take care of that. And it's so much better, because we can finally agree on something, this is how the codebase looks, and we don't have to think about certain aspects anymore. It's automated, you save things in VS Code, it gets auto-formatted. You don't even get involved. A lot of things are done for you. I think those parts are super important to make the team happy, because with agreement it's easier to work, obviously. And then, of course, all those tests, as you said, it's about confidence. In the end, it's really about confidence. In a small codebase, and in a small team, you can still talk about ideas. I would like to change this, what do you think about it, you would have to look at this and this place of the app. This is how it works, try it, it works somehow, everybody's happy. But at some point it gets annoying, and you can't do it all the time. It gets frustrating, because you want to implement a small feature and you break half of the app in the process. Everybody's angry at you, this makes no sense. Therefore, those tests, as I mentioned in the beginning, it's not about having 100% code coverage. That would be too much; I don't know what kind of app would need that to make sense. But usually you would say... I think there was this rule that test code needs to be verbose, right. It's not about being abstract in your test code, it needs to be verbose. And usually the rule is something like test code is three times as much as regular code.

JASON: Whoa.

CHRIS: Yeah. If you have everything non-abstracted, it should be easy to read, and it shouldn't be clever code. Tests shouldn't be clever. That's important.

JASON: No, I think that's really important. I mean, yeah, I think this is the other thing, I remember early in my career when I was writing tests, I would get into the fun of testing, right. It was fun to try to get 100% code coverage and do some clever thing that would let me get to all these edge cases by mocking a thing or whatever it is. Then I couldn't remember how I had done it, because it was something really clever that immediately left my mind when I checked the code in. And then, you know, if we ever needed to change that code, I was like, oh, I don't want to touch that test ever again. So as I've matured as a developer, I would say, I have leaned more into, you know, the things that I'm really interested in are the things that are critically important. And I will more likely rewrite code to make it easier to test than try to write clever tests for the code that exists, right. And the other thing I've noticed is there are a lot of things like this. I find myself rewriting code to be compatible with static site generation. I find myself rewriting code to be compatible with serverless, to be easier to test, whatever it is. Whenever I do that, at first I felt, oh, this is so frustrating. Now I'm like, you know what, I wrote that code too complicated in the first place, because it shouldn't be this hard to run, and now my code is very simple, very, you know, I don't write a lot of code that I would look at and go that was a really clever way to write that.

CHRIS: You hit a very important point here about how tests influence our codebase. When writing the test gets too complicated for the component, the part of the application, whatever it is, you know that the solution you have there is just too complicated, and it's not reusable, not modular enough, to be able to test it properly. You also have to think about what level you want to test at, because if you have, let's say, a part of the application that does something, and there are functions and other functions below that, which each do small things to achieve the whole goal, right, you could test each of those small functions, which are implementation details, or you could just test the whole thing as one. Then it's up to you. You have to decide: does it make sense to test each little thing, or do I want to test the more complex part here? And it really depends; if the functionality can have, like, combinations of millions of different variations, then maybe you have to look at the small ones in detail, as well. But the test itself will give you an idea of whether the code is well structured and well modularized, so I can write good tests, or whether I have to go a super complex way to get to the result for a test. Then it makes no sense, there is something wrong about it, something you need to change there. Sometimes it's hard because of mocking data and having some external service and so on. That's genuinely hard, okay. But if it's only about your codebase, and complexities there make it even hard to set up your test, there's definitely something you can reduce in complexity, spread out, modularize. Because not only you and not only the tests will benefit, but every person on your team will be happy about it, because it will be easier to digest. 
I mean, we have certain rules, we have certain lint rules, and other tools that will provide you with rules that tell you, like, you shouldn't have more than five or six parameters as input to a function, right, because of... what is it called? Cognitive complexity, I think.

JASON: Right.

CHRIS: I think the human brain can't take more than six items at once and think about it.

JASON: The maximum number of things we can recall is like seven without heavy concentration. Yeah, there's so many little things like that. Another one that I know of is a linter rule that I used to love, called cyclomatic complexity, where it looked at the number of nested ifs, loops, you know, whatever. Because every time you enter into a new context, an if, now there's two contexts. You've branched your brain. Inside that if you set up a loop, you've branched again. Oh, my God, we're getting in the weeds here. You set up your linters to check, hey, are you sure you want five nested loops in this function? That seems like it might be hard to debug later.
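Both limits mentioned here map to real ESLint rules. A hedged config sketch, with the limits chosen for illustration rather than taken from the episode (in a flat config this would live in eslint.config.js or .ts):

```typescript
// These are real ESLint core rules; the specific limits are examples.
const rules = {
  // Cyclomatic complexity: flags functions with too many independent
  // paths through them (each if, loop, case, etc. adds a path).
  complexity: ["warn", { max: 6 }],
  // Caps the number of parameters a function may take.
  "max-params": ["warn", { max: 5 }],
  // Caps block nesting depth -- the "five nested loops" case.
  "max-depth": ["warn", { max: 4 }],
};

export default [{ rules }];
```

Cognitive complexity, the metric Chris names, isn't an ESLint core rule but is available via plugins such as eslint-plugin-sonarjs.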

CHRIS: The error message should be like, you will never be able to write a test for that.

JASON: So let's talk a little bit more about visual regression testing and then I want to get in and start actually building with it. So, one of the biggest concerns that I've heard and one of the biggest, I think, pieces of resistance to implementing something like visual regression testing is that almost all of the apps that we work on are going to be visually changing frequently. And if one of the things that visual regression testing is going to do is stop a build, or, you know, fail a deploy or whatever, because the design doesn't match up with what we shipped last time, how do we prevent that sort of a workflow from becoming a blocker, from becoming something that slows us down as we ship code?

CHRIS: Well, that's a good question. It really depends on the company, the team, and the idea behind what you want to achieve with visual regression testing. So, it is something that you can always have turned on as an optional thing, right. You don't have to make it a blocking process. Basically, Lost Pixel in this case, and all the other visual regression tools, integrate with GitHub, and you have a status check there, and you can set up your branch to only let you merge if you have a green check. But it's optional, it's up to you how you want to go about it. From experience, there are companies that just have automated acceptance of any visual regression, just to keep a more or less up-to-date state of how things are looking, and they manually check whether they like or don't like the results. But it's, again, important to understand what is an important test for you and what's not. So you could have a page that you want to test that changes frequently, or where small things could change. And you can define it as a percentage, right. You could define the regression test where you accept a threshold of 5% in changes. This is okay, if you have a need for that, go with it. If you have a design you want to test pixel-perfect, you set a zero-pixel threshold there. And, yeah, this is the strategy that needs to be chosen, and what makes sense. So, define the meaningful tests, because, again, you could have 100% code coverage, or screen coverage, let's call it that, and maybe it's not really meaningful to the result, because there might be pages like terms and conditions where... you care about the content, right, but you don't care if there's a small text change there, you don't want to always approve those changes. And then there are pages like your homepage, where you always want to know everything is in place, because this is the most important thing for you. 
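The per-page strategy Chris describes, strict on the homepage and tolerant on rarely-reviewed pages, might look like this in a Lost Pixel config. Field names follow the Lost Pixel page-shot config as I understand it; verify against the current documentation before copying:

```typescript
// Hedged sketch of a lostpixel.config.ts -- paths and thresholds are
// illustrative, not from the episode's repo.
import { CustomProjectConfig } from "lost-pixel";

export const config: CustomProjectConfig = {
  pageShots: {
    pages: [
      // Homepage carries the brand: demand pixel-perfect.
      { path: "/", name: "home", threshold: 0 },
      // Terms page changes often: tolerate up to 5% differing pixels.
      { path: "/terms", name: "terms", threshold: 0.05 },
    ],
    baseUrl: "http://localhost:3000",
  },
};
```

Thresholds between 0 and 1 are treated as a ratio of differing pixels, so 0.05 is the 5% case Chris mentions.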
So, you can really define and choose what you want, just like with unit tests and integration tests. It's up to you. And, yeah, I think it's like with any other tests: if you're just starting out, it might be daunting, or it might be fun. If going after the green checkmarks is what makes you happy, then, of course.

JASON: Whatever you're into, right.

CHRIS: I know what you're talking about. I sometimes get into a phase where you get those green checkmarks and more and more tests are passing, and then you want to add more, because you're just in the zone.

JASON: Yeah, yeah. That's my completionist bent. I'm like, well, I'm at 98, I should be at 100. I'm at 75, I should just get to 80. Then you get to 80. I can keep going.

CHRIS: Perfectionist speaking.

JASON: Yeah, right. Okay, I want to make sure we've got enough time to actually get through some implementation here, so I'm going to take us over into our paired programming view. And, first and foremost, let me do a quick shout out to our... if I can get this open, here we go. Quick shout out to our captioner, we've got Ashly from White Coat Captioning here today taking down everything. If you look at the web player, there should be a button down here that says CC, so you can turn on the closed captions if you want. And that is made possible through the support of our sponsors, Netlify, Nx, New Relic, and Pluralsight, all kicking in to make this show more accessible to more people, which I very, very much appreciate. And today we are talking to Chris. Chris, where should I send people? Is Twitter the right place, or are you hanging out somewhere else, Blue Sky, Mastodon?

CHRIS: Mostly Twitter, yeah.

JASON: And it's... almost got it. There it is. Go give Chris a follow. And I forgot to tell you, Chris, if you look at the composite window there, there is a hover in the context menu and you can copy the embed link and put it in a full tab, and that will give you a full screen to see what's going on. Yes, Eco, that's what I was attempting to say, if I wasn't saying that clearly. The captions are now actually in Twitch. And, as of today, I guess last night, actually, I launched a new version of the Learn With Jason site that does not currently have the embedded player on it. So, my intention is to bring that back eventually, but this had been a long-running feature branch, I think I was four or five months into it, and I was like, you know what, I've just got to ship this. So, lots of incremental improvements coming, but in the meantime, the site is now live, fully running on Astro, and I'm happy with it. And earlier we were talking about the testing trophy. This is the Kent C. Dodds definition of the testing trophy. I will throw this thing in there. Cool, yeah. All right. So if I want to get started with some visual regression testing, what should I do first?

CHRIS: First, you need an idea what you want to test, obviously. It really depends... so there you can start completely from scratch, or you might have already something that can be used.

JASON: So, we can test this site, we could test... I have a bunch, you know, we've got this one here, this is much simpler and doesn't do very much. So, this could be a good one to test, or if you want to put together something really small, we could do that, as well.

CHRIS: Well, do you have any... because it's interesting, do you have any site where you have already Cypress tests or Playwright tests maybe?

JASON: I don't think I have anything that's got Cypress test in place at the moment.

CHRIS: We can try to integrate it. It would be cool to try one of your pages. Let's try that. It will be most realistic, to see if this works.

JASON: We have multiple pages. Let me open this in a new tab. We have this page here. Then we have this page here, which kind of goes through each... you know when you do something and you realize that you've really brought a knife to a gunfight? I made this page, where I like animated how I make a smash burger, and then I was like, Sarah, you've got to make one. Then she shows up with this. You total jerk. (Laughing). So, yeah, don't challenge Sarah Drasner to an animation fight. You will lose.

CHRIS: Can you collapse it again? Are those pieces moving?

JASON: Yeah. So, mine is toggleable, she did hers as like an intro, right. So, it's this very... it's beautiful. It's so good. But, yeah, that is the game. Fortunately, we're not competing on how well we can animate. We're competing how well we can make a burger, and I'm clearly going to destroy her on that. Okay, so, I've got this. I've got the repo is here. Let me throw these into the chat for anybody who wants to take a look. And if I want to get these started... what should I do? Should I maybe grab a copy of this locally?

CHRIS: Yeah, that would be good. Is it a JavaScript or TypeScript project, or...

JASON: It is a... Nuxt site. Yeah, JavaScript.

CHRIS: Okay, interesting.

JASON: Let me see. Here. What are you doing? GitHub repo, clone. npm install. And fingers crossed, this just runs. I haven't run this site in a while, so if we hit anything weird, I can also just do a new npm create, or init Vite.

CHRIS: I have a small project if you want to, as well.

JASON: Yeah, let's use yours, because this just failed, and I don't want to burn any time debugging why that project didn't work.

CHRIS: Okay, just pasted it in the chat.

JASON: Okay, I'm grabbing this now.

CHRIS: It's a tiny project that I put together quickly today.

JASON: Great. Okay. So, we're going here. This is our little thing. Let me get out to here, and, all right. We're going to get into this one and NPM install. Open up VS Code, I'm going to drop this here.

CHRIS: This is based on T3. Are you on T3?

JASON: Yeah.

CHRIS: Cool.

JASON: Why doesn't this one... there we go, okay. All right, so, we've got a T3 app. So, it looks like it's a Next site, I see some Tailwind. And, cool, yeah, so we've got a couple pages. Should I just fire this up?

CHRIS: Exactly. Install it all right. Let's see if it works.

JASON: localhost:3000. Okay, got it. Got a homepage and a pricing page.

CHRIS: Exactly. On the pricing page, there is some small functionality where we can switch the price from monthly to yearly. And, basically, you can choose one of those buckets. We need to sign up with Lost Pixel, because we need to add the GitHub integration, and we need to, yeah, install Lost Pixel into the project. So, the first thing we can do is install Lost Pixel. So, npm install lost-pixel.

JASON: And I got it.

CHRIS: Perfect. Then let's push it somewhere, just put it on GitHub, so we can access it later.

JASON: Let me work it here. We'll go with... move it over to the Learn With Jason org. Would I like to add a remote for the fork? Yes, I would. Okay, so, then we now have an upstream, and origin points to mine, good, so I can... let's see, that's already going to be set. Let me hit add all. And then hit commit. We'll say... Lost Pixel. Okay. Then I'm going to git push origin main. And now it is up. So, if I go to GitHub, "Learn with Jason," Lost Pixel tutorial. Here. So, it's here, and I've pushed it, including the installed Lost Pixel package.

CHRIS: Perfect. So, Lost Pixel itself is free for open source, and we'll use it like that. It's a GitHub integration, which is based on two components. One is the GitHub Action that will need to run. This is where the actual pictures will be created. And then we have the GitHub app that will connect us to the platform, where we'll be able to see everything. I just sent you the link to our main page, where you get to the app.

JASON: Okay. I'm going to open that over here.

CHRIS: And in the top, you can see go to app or get started for free. So, this is already the app, the platform, yeah, you have to sign up for the first time.

JASON: Okay.

CHRIS: And it will tell you that you don't have any repositories installed yet. The first thing we need to do is install it, and we need to enable at least the one repository that you added here. You don't have to say all repositories, just the one that you just added.

JASON: Here. And, okay, so, we've got it, we're installing it for our tutorial today. Going to install. And let's see, where is all that, is that in here? It's not. Let me get my phone. Okay, hold on. There it is. We're in.

CHRIS: Okay. You can already see on the left side. Why is it loading so slow? Perfect.

JASON: Always the demos. Let me just check.

CHRIS: Works on my machine. Interesting.

JASON: Let's see.

CHRIS: Okay, couple minutes.

JASON: I'm getting a ton of errors. Oh, you know what I need to do, this is probably the ad blocker, which I need to turn off. Just turn it off.

CHRIS: Okay, can you show me the network tab? Live debugging.

JASON: Looks like everything is loading.

CHRIS: Interesting. If you click on "manage" on the top. Can't get access. That's interesting. That will be a short demo.

JASON: Tell you what, it could also be arc. Let me open up Chrome, and we can try Chrome instead. We're going to I am going to the app. Okay. I'm going to have to pull this off screen for a second while I do my stuff. Okay, we're back. I'm in Chrome. Just wants to fight today.

CHRIS: I just tested this today just to be on the safe side. Come on.

JASON: Let's see... it doesn't like... maybe, I'll tell you what, let's try this. I'm going to put it on all repositories and see if maybe it's something to do with that. Hmm. We're being haunted by the demo gods today. Okay, well...

CHRIS: I'm checking the logs in the backend. No obvious errors.

JASON: Isn't that the way it always goes?

CHRIS: Can you click again just to make sure in the app? Click again in the list on yourself. Yeah.

JASON: Oh, oh! We're not in the org. I know what the problem is. I authorized my Learn With Jason org. I didn't authorize my personal user. So that makes sense. We can go... Lost Pixel.

CHRIS: Well, is it in this org or not? You have to enable it, yeah.

JASON: It is in this org. So, here is our... we're going to set active. Okay, all right.

CHRIS: Go back into the list.

JASON: Now go back into my list. We got it. All right, we're good, we're good. We got this. Done this part.

CHRIS: Okay. So, we have this already. Now we want to... there are two things. We need to set up a configuration. So we need to configure the project, what we're going to screenshot exactly, then we need to add GitHub action for that. So, in this case, we want to page shot for now, we'll just take the Lost Pixel config as you saw it, and we'll just copy it somewhere in the root.

JASON: Okay, so I'm going to save this. Whoa, whoa, whoa. Whatever that was. I don't like it. So, I'm going to not save that. We're going to open up the side bar, and then I'm going to put it right here. Lostpixel.config.ts. Drop in the DS.

CHRIS: Not sure if it's just me, or if it's in general, but the resolution of the screen is pretty bad. It's very pixelated.

JASON: That might be just you, but let me bump it up a bit to see what's going on. Okay, doesn't like something about this.

CHRIS: This is something that we're looking into. There are a couple of additional things that need to be defined. It's the GitHub run ID, the commit hash, the branch, and so on, but these are automatically provided, so you can ignore this little error. Should be fine.

JASON: Okay. Ignoring.

CHRIS: We have here, if you open the config again... I just wanted to explain what we have there. At the bottom we have an API key that should come from a GitHub secret. This is the best way to go about it; you don't want to make it visible in your repository. So this is the one thing, and then each project, each repo, has its own project ID. This is automatically generated for you in the list that you saw before. There is an exception to that: it's not only per repo, but if you have a monorepo, right, if you have multiple projects, multiple frontends in one repo, then you'll have individual project IDs there, each with its own configuration file. But we're not going there. On top of it, you see just a basic definition of what we want to do. We want to do page shots for now, for just this one page, which is the landing page. And here you see, as well, a base URL. The IP address that you can see here is because of the way GitHub Actions runs; it's how you can access a locally running process inside a GitHub action. If you run on Jenkins or any other platform, obviously, this needs to be adapted to work. Localhost will not work in this case.
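A config along the lines Chris is describing might look roughly like this. The field names follow the Lost Pixel docs as I understand them, and the project ID, the key handling, and the IP are placeholders, so treat it as a sketch rather than the exact file from the episode:

```typescript
import { CustomProjectConfig } from "lost-pixel";

export const config: CustomProjectConfig = {
  pageShots: {
    pages: [
      // One entry per page to screenshot; `name` becomes the shot's label.
      { path: "/", name: "landing" },
    ],
    // Docker-bridge address so the GitHub action container can reach the
    // app served by the previous workflow step. On other CI systems this
    // would be different, and plain localhost will not work here.
    baseUrl: "http://172.17.0.1:3000",
  },
  // Generated per repo (or per project in a monorepo) on the platform.
  lostPixelProjectId: "YOUR_PROJECT_ID",
  // Best kept in a GitHub secret rather than committed to the repo.
  apiKey: process.env.LOST_PIXEL_API_KEY,
};
```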

JASON: Gotcha.

CHRIS: This is the main configuration file that we need, and now we need to create a GitHub action. This was in the next step.

JASON: Okay, so, here is the GitHub action. So, I will just grab this, and then I need to call this .github. Paste this down. And then I'm going to go to .github/workflows. What was it called, VRT?


JASON: Oh, for visual regression testing. So we're going to run on Ubuntu the Lost Pixel app. Get Node, build the app, start the server, and then we run Lost Pixel using the Lost Pixel API key.

CHRIS: Exactly.

JASON: I can still read YAML. I still got it, everyone.

CHRIS: YAML. Okay. Well, YAML is an interesting thing. I hate writing YAML. I always screw up the indentation. But it's less than JSON, right. With JSON it would be way more code, way more structure. It's running in the background, that's why we have the run start. We need to add the API key to the secrets, which should be displayed somewhere below. On step four, yep.
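The workflow they're pasting in does roughly the following. This is a sketch reconstructed from the description in the episode, so the action versions and script names are assumptions to adapt to your own setup:

```yaml
name: vrt
on: [push]

jobs:
  lost-pixel:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run build
      # Serve the built app in the background so Lost Pixel can reach it
      # at the Docker-bridge address from the config.
      - run: npm run start &
      - uses: lost-pixel/lost-pixel@v3
        env:
          LOST_PIXEL_API_KEY: ${{ secrets.LOST_PIXEL_API_KEY }}
```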

JASON: Okay, I'm going to copy this. And now I assume... I probably don't want this to be visible. So I'll probably need to roll this after.

CHRIS: We can generate new key. For now I wouldn't worry too much. Just go to the Lost Pixel config file and paste it in directly. The whole GitHub secrets and so on, we don't need for this. We'll just drop the key afterwards.

JASON: Okay.

CHRIS: We'll make it easier. But usually... oh, didn't copy the key, I think. But usually in your organization, you would put it in.

JASON: Am I doing the same thing here?

CHRIS: Yeah, here it won't be used anyway, so we can skip it here. Usually it would take the key from GitHub secrets and run the Lost Pixel project with this configuration file. So, there we go.

JASON: Oh, I got it. Okay.

CHRIS: So, that's it, basically. Commit and push. Let's see what happens.

JASON: Add everything. And we'll say add the Lost Pixel GitHub action. So, I'm pushing. We'll go over here, go to our actual tutorial, and I just realized I probably should have opened this up as a pull request, right, so that it would have... or is it going to run anyways?

CHRIS: It will run anyway. It really depends on how your organization runs. So it could be defined that it runs only on certain actions, only in PRs, but here it runs on any push. So it's really defined in the workflow file that we have right now.

JASON: Wait, did I screw this up? Permission to Lost Pixel... oh, git push origin main. I see my code. Now we should see in here... there's our workflows. So, that should start an action. It is running the action. And it says it's queued. Why does it run twice? Hopping on in. Here's our Lost Pixel, and it's doing the thing. Get paid, that's a good one. Actually, I can close Chrome, not using it anymore. I don't need that many browsers running.

CHRIS: You're using Arc, right?

JASON: Arc is my daily driver, and I also have Edge running so I can have different profiles going for various stream things.

CHRIS: Cool.

JASON: Hi, how are you doing? Okay, we're crushing it, got node, running NPM install, running app. As you can see, this is something I didn't realize at first when I started using GitHub actions, is that like we defined all of the names... wait, we broke it. What broke? It didn't like... failed on our TypeScript.

CHRIS: Oopsy, I forgot to ignore this. Go to the tsconfig and exclude the Lost Pixel config. Skip it like that. Or just add a ts-ignore.

JASON: Ts ignore.

CHRIS: Above the config, yeah. Because it expects the other variables like the commit hash and so on. We just want to skip it. We'll find a better way to make it more type safe and still workable. So, just ts-expect-error, yeah.

JASON: My config is fighting against ts-ignore. That's okay, we'll beat these robots into submission. So, here we go. We are going to fix, ignore TS error. And get back out here. Get another run here momentarily. Oh, God, I got to... upstream to origin main so that I stop doing that. Exactly, Jacob. No, no, really, it is okay. Okay, now we've got ourselves running. Here we go. We're running, we're in the action. And we will get things pulled. Get down to checking out our code. Origin again. Got it that time. Okay. All right. So, what this is doing is it's starting up the site on a development server in the GitHub action. So, it's kind of running the site... I guess it's building the site and then starting it like it would on any Node server. And that's what that URL was that we put in, the 172 dot something.

CHRIS: Exactly. In the next step, start server in background, it's exposing this T3 app on port 3000 at that IP address on the GitHub action runner. It's a specific thing to GitHub Actions. I think it's even because of how Docker works, because in the end it's a Docker container, right. And locally, if you run it locally, you would be able to access localhost, obviously. But this is the Lost Pixel run. So, if you look at this, you will see some details there, what it's doing. This is just something in the Docker container. It spits out the version, and the configuration that it found, and what it's going to do. It says there are no files there right now, so it will create new screenshots. And if you look...

JASON: Preparing one page, here's our screenshots.

CHRIS: Exactly. It will be uploaded, then it will contact the platform. The screenshotting happens on GitHub Actions, and the files will be uploaded to the platform, where you can actually see what they look like. Run is complete, we should be able to see something on the platform. Yep. There's our first build. Here in the build you can see it's a failing build; there was a regression, we added something new, so below you see what is involved. It's a new image. The first thing you need to do is approve it or not.

JASON: And, so, basically, to say this to make sure that I understand it, the way that a visual regression test works is that when you add something new, it's failing because it doesn't match the previous build. So in the case of this, our first build, there are no previous snapshots, so everything will fail, because we haven't said, yes, this is what our site should look like. So what we're saying here is now that we've looked at this, this is our homepage, this is what we expect it to look like, so we are approving it. And now in future builds, any time this runs, as long as this homepage hasn't changed, it's going to say, yep, this is correct, and then move on. But if we, say, you know, change the headline or something, it will say, wait, wait, wait, your headline changed, do you mean to do that? And then we'd have to approve it again to reset our baseline.

CHRIS: Yep. So, we differentiate three types of regression here. Addition, what you have right now, so we added something new. Removal, so we removed a test or screenshot, which is a regression. Or there's a difference: we changed something, and this is the interesting part where you see actually what changed. This is the one thing that we want to catch. The addition and removal are, yes, regressions, and we need to be aware of that. Anything that does not change will just not show up here. So, here in this overview, we can see what build it was, which branch it was, by whom. We can directly jump to... if you click on the commit hash, we can jump directly to the GitHub page. Where you should be able to see now in your list of checks... so, in the top.
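The three buckets Chris lists can be sketched as a tiny classifier over shot names. This is just an illustration of the idea, not how Lost Pixel is actually implemented:

```typescript
type Regressions = {
  additions: string[];   // shots that exist now but had no baseline
  removals: string[];    // baseline shots that no longer exist
  differences: string[]; // shots present in both whose pixels changed
};

// `changed` is the set of shared shot names whose images no longer match.
function classifyRegressions(
  baseline: string[],
  current: string[],
  changed: string[],
): Regressions {
  const base = new Set(baseline);
  const cur = new Set(current);
  return {
    additions: current.filter((s) => !base.has(s)),
    removals: baseline.filter((s) => !cur.has(s)),
    differences: changed.filter((s) => base.has(s) && cur.has(s)),
  };
}
```

Anything in none of the three lists simply passes, which is exactly why the very first build shows everything as an addition: there is no baseline yet.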

JASON: List of checks.

CHRIS: Yeah, next to fix... actually, go back. If you click on this, you'll see there's an entry for the Lost Pixel app and the Lost Pixel bot. If you had multiple projects running in the monorepo, you'd have multiple entries here for the Lost Pixel app, and each one of them would be named like design system, front page, internal page. And each one of them has to pass for the whole build to pass, right. So you would have some checkmarks, some crosses, and in the end if all of them are checkmarks, then the Lost Pixel bot itself will be a checkmark as well and, basically, the build passes. So, let's add maybe the pricing page now to this, so we have all pages of this big project.

JASON: Okay, that is pricing. So, I'm going to add another page, called pricing. And I'm going to call it pricing.

CHRIS: Uh huh, cool, okay.

JASON: Anything else that I need to do?

CHRIS: Nope. You just added the new page and this should become a regression, because you have a new addition.

JASON: Okay.

CHRIS: Also, what I wanted to mention is that in a linear process, right, if you're just working on one branch, it's quite easy to understand how things are working, but it gets quite a bit more complicated in a team where you have branches and PRs and PRs of PRs, and so on. So what we're doing is we're looking at the git tree, right. Looking at the whole graph and looking at the parents of the parents of the parents (sometimes, if you have a merge or a three-way merge, you have three parents and so on), to try to find out what is the most recent version of this specific shot if I consider my whole codebase, right. So, for example, you could be working on a PR... we can test it, we'll change something on the pricing page. It shouldn't affect your main branch, as long as you don't have changes from this PR on the pricing page, right. So, you'll start to have two baselines that are living at the same time. In a bigger team you have multiple of those. For every PR there's a baseline, plus merging from develop, and all of this affects those shots. Yeah. It's one of the most complex parts in the platform right now.
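The baseline lookup Chris describes is essentially a nearest-ancestor search over the commit graph. A toy version (purely illustrative; Lost Pixel's real logic is more involved) might look like:

```typescript
// parents: commit -> its parent commits (merge commits have several).
// baselines: commits at which an approved baseline exists for a shot.
// Walk the ancestors breadth-first and return the nearest one that has
// a baseline, i.e. the most recent approved version for this branch.
function nearestBaseline(
  start: string,
  parents: Map<string, string[]>,
  baselines: Set<string>,
): string | null {
  const queue: string[] = [start];
  const seen = new Set<string>([start]);
  while (queue.length > 0) {
    const commit = queue.shift()!;
    if (baselines.has(commit)) return commit;
    for (const p of parents.get(commit) ?? []) {
      if (!seen.has(p)) {
        seen.add(p);
        queue.push(p);
      }
    }
  }
  return null; // no baseline anywhere in history: everything is an addition
}
```

Because the search starts from each branch tip, two branches can legitimately resolve to two different baselines at the same time, which is the "two baselines living at the same time" situation above.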

JASON: Yeah, no kidding.

CHRIS: When I look at git, what Linus built... respect, it's crazy cool how well it works. Yep.

JASON: So, should I... while we're waiting for this to build, should I set up a pull request over here?

CHRIS: Yeah, let's set up a pull request and make some other changes, so we can run through at the same time.

JASON: Check out a new branch and we'll call this pricing changes. Okay. So, I have my pricing changes, and I'm going to come over here, and did you have a particular thing in mind?

CHRIS: What would you like to do? Would you like to change the price, or maybe change the text in one of the pricing items? It's absolutely up to you, whatever you would like to do.

JASON: Sure, why don't we do this. We'll change the pricing list, and we'll say... please buy this so I can afford my summer home in Costa Rica. That sounds lovely. Okay. So, we are just going to be... this is what I'm internally at the company calling radically transparent pricing. Please fund my expensive hobbies. (Laughing). Okay. So, I have a change. And we can check this real quick. Still got this running out here. And once this runs, here we go. We've got a change.


JASON: Good. Okay. Then I guess I can change some of these, too. So, we've got the plans. Where did our plans come from? API example, plans.

CHRIS: Data source. Data folder.

JASON: There we go. Okay. So, enterprise, that one is going to be... let's get this house funded.

CHRIS: They can afford it.

JASON: Exactly, that's the whole plan. You're here, because I don't want to work anymore. (Laughing) Okay. So, we got an enterprise plan. The plan is now $10,000 per month, or $12,000 a month if you go month to month. And we've updated with our radically transparent value prop. I think we're in pretty good shape. So, let's go back out here. I think our build should be done. It is. Okay. So, this...

CHRIS: You can jump

JASON: We can see that it didn't work. And that's because we added that new page, right.

CHRIS: If you click on the details next to this, yeah, you should land directly on the build. And, yes.

JASON: So we get a pricing page. And we can see this is our baseline, good. I'm going to approve it.

CHRIS: Oops. I see that I forgot the dark mode there to remove. Okay.

JASON: It's fine. Okay. So, now we've got... we've got it set. It's all done. We're in good shape here. And if I'm looking... wait, not here. Here? Here. Going to come out and, look, now it's passed, good. Okay, I'm going to open up a pull request, so I will git push origin. And we're going to send this pricing changes up. And we'll do a new PR. Not work? Okay, great. And we want to, yeah, push it up there. And this is going to be called radically transparent price. We'll skip a body, and we're just going to submit that. Now we should have a pull request. Failed.

CHRIS: Where is it?

JASON: Oh, it failed... oh! Why didn't it commit my stuff? Okay, we're going to git add all. We'll git commit and say radically transparent pricing. Going to have this PR again here. Good. Okay, okay, it actually worked that time. Now if we reload, we've got our PR, and if I open this up, we have it automatically running, which is always great. So we can see in this case I haven't set this repo up to require all checks to pass before merging, but like you were saying earlier, that's a choice we can make. We can say the checks are informational, we can say the checks are required. I think we can say that some of them are required and some are not. That is a really nice place to be.

CHRIS: For example, what I've seen: you can have checks on the PR that have to pass to be able to merge, and then it will auto-accept all those changes once you merge the PR, and that's it. It really depends on the way you want to run git flow and those kinds of things in your company or organization.

JASON: Yeah, I think there's a cool... where is that setting? Auto merge on green, I think is the... I saw that get put into production at Netlify right before I left, where they'd kind of figured out if all the checks clear, go ahead and ship this. And I thought that was such a cool feature, and it really speaks to what I was talking about earlier, with like the culture of trust that good tests create. If you know that all your checks clear, then the spot check you did on the logic itself, do we agree with, you know, the feature that was added and the way that this feature was implemented? If yes, as long as the checks clear, we're good to go, let's roll. I think that's a good way to set up a team culture.

CHRIS: Interesting, the company I'm working for right now has not only a lot of integration and unit tests, but also uses Lost Pixel to do regression tests. We have two code reviews; somebody else on the team has to do a code review, so at least two reviews need to be done. And, finally, a quality check needs to be done by somebody looking at it, who gives a label to the issue. Once the label is there, it will merge the PR, only then. So, there are a lot of steps there to make sure the software is good. Okay, we have our first visual regression here that is a difference. Okay, let's look at it.

JASON: Okay, so we're looking.

CHRIS: First of all, it doesn't look like much of a difference if you look at this as a human, right. If you click on the right side in the picture... then you see the difference image.

JASON: Oh, look at that.

CHRIS: We can choose between... we can go for the slider, for example, in the top, which is putting the images one over the other, and you can slide around and see what the difference is.

JASON: That helps, that's nice.

CHRIS: But the red part is also helping you really find out what the problem is.

JASON: Yeah, this is great, this difference.

CHRIS: To make it even more obvious, if you go into the side by side view again, you have this mini map. This little button here, yeah. If you click it, then it shows you in the middle exactly where the errors are. Even on very long pages (some companies are making huge pages with screenshots) it's really hard to find where the errors are, or if you have red items on your page, it's hard to spot the incidents, but this helps you figure out where it happens.

JASON: Nice.

CHRIS: You can even add a comment here now for your teammates, something like "please look at it." We don't have a notification system built in yet, so it's just here so people can look at it and see from the outside that there is something to look at, but this will come soon. So you can have Slack or email notifications, and this will help to integrate better into the whole process. And you can choose if you want to accept it or not.

JASON: Yeah, I think we're happy with this. We're going to ship it. So, I'm approving. And then when I come back out... I think I left it back here, right. Didn't I? Where was my... PR, here we go.

CHRIS: What are you looking for?

JASON: I was trying to figure out how to look at my history, and I don't know if I tried to do that in Arc before. Look, they are passing, which means we approved that change, which means we can, in fact, merge that pull request. So, off we go. And then we don't need that branch anymore. So we can head out here. And it's running. And then we can see that our checks are in progress now, so we head over to our checks, we can see these running. And then this one should run as expected with no errors, because we approved a new baseline.

CHRIS: Yep. Unless in the meantime...

JASON: Oh, right, unless in the meantime somebody were to have approved a new, new baseline.

CHRIS: Exactly. This could happen. Somebody could work on a feature in the meantime, add something, then you have a mix of both. If they are not conflicting with each other, then they will create a new baseline together. But if you're working on the same page, or same feature, then, obviously, there will be a conflict and you have to approve the newest version based on whatever makes sense, right.

JASON: Yeah. Yeah. I mean, this really does like point to, I think, the strength of a workflow like this, is what I'm always looking for is, you know, just what I would call the gut check. I want to make sure that as we're shipping, that we're at least, like, peeking to see what's going on. So, I'm excited for what you mentioned about, say, Slack or other updates, where what would be really cool, and this is me going into feature request mode, but what I would love to see is if I already work mostly with my team in GitHub and Slack, if there was a way for it to just show me the side by side in Slack, and I could say, yep, that's what we meant to do, and then it just continues on with my build and merge. Those are the sorts of things that helps keep people in their flow, right. And as I'm thinking about what makes me love or hate a process, it's typically how many times do I have to change context to get through a given process. If I can do it all where I am, I'm usually really happy about that.

CHRIS: We have a Slack integration for the build process itself, so you can get notified. You can add this already, but it will not allow you to accept pictures right away. I think it would be a bit hard to show the different images in Slack and so on. So we'll think about it. We're also thinking about how to add images back to the GitHub PR, so they show up there, so people don't even have to go to the platform... perfect, it works without issues. They can quickly look at it and approve it there if they want to. So, we're still figuring out how to make the process smoother. The important part was GitHub, because most people are using GitHub, and closing the cycle there makes the integration so much smoother, because we are using the same tools as always. So, this was important for us. And, yeah, we'll definitely look into more options. There are so many other things that I would like to show you. I don't know how much time we have.

JASON: We are, unfortunately, pretty much out of time.

CHRIS: Okay.

JASON: How about this. Do you want to maybe give folks who want to do more some additional resources, like if you've got blog posts, if you've got documentation, if you've got demo videos, anything they can go and see to learn more about how this works?

CHRIS: We have a lot of blog posts. The person working together with me, he's writing a lot of those posts, good and easy to follow. So, definitely worth checking out. There will be much, much more content coming. I was hoping we would get farther; I wanted to show how we fight flakiness in tests, like things that change all the time. You can increase thresholds to accept those changes, or you can put masks on things, like something that bit both of us in another project all the time: graphs and charts would always be somewhat flaky.
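The threshold idea can be shown numerically: compare two images pixel by pixel, compute the fraction that differ, and only flag a regression when that fraction exceeds a configured limit. A toy version over flat grayscale arrays, just to illustrate the principle rather than Lost Pixel's actual diffing:

```typescript
// Fraction of pixels whose values differ by more than `tolerance`.
function diffRatio(a: number[], b: number[], tolerance = 0): number {
  if (a.length !== b.length) throw new Error("images must be the same size");
  let differing = 0;
  for (let i = 0; i < a.length; i++) {
    if (Math.abs(a[i] - b[i]) > tolerance) differing++;
  }
  return differing / a.length;
}

// Flag a regression only when more than `threshold` of the pixels changed.
// A flaky chart that jitters a few pixels stays under a small threshold;
// a broken layout blows well past it.
function isRegression(a: number[], b: number[], threshold: number): boolean {
  return diffRatio(a, b) > threshold;
}
```

Masks are the complementary tool: instead of tolerating a percentage of change anywhere, you zero out a known-volatile region (the chart, a timestamp) before comparing at all.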

JASON: Yeah, they are always going to be... yeah.

CHRIS: Always something, right. So we only looked at the page screenshotting mode, but we also have full Storybook integration. If you're building design components in Storybook, it's there as a free option immediately, because you just have to point to the Storybook and don't have to do anything else. You basically have visual regression tests for your Storybook out of the box. And then also we have an easy integration with tools like Cypress and Playwright. If you have integration tests like this, end-to-end tests, you can basically just take a screenshot at the right moments, using what Cypress or Playwright offers to create those screenshots, put them in a specific folder, and Lost Pixel will pick it up. So, you don't have to write anything unique. Use the integration tests that you have, take those screenshots, and see the differences on the platform. On the one side you test if the functionality is there, and on our side you find out if it still looks the same. There are plenty of other options. You can even run functions directly... get control over the browser with Lost Pixel. So, you can run Playwright functions there, you could modify things before you take a screenshot.

JASON: Oh, if you wanted to... like on the pricing page, for example, if we wanted to hit that toggle and make sure we were looking at the monthly pricing as opposed to the yearly pricing, we could do that and then screenshot.

CHRIS: I think it would be easier in this case to use Cypress for that, because it just takes screenshots, but you could do that, you could enable certain CSS changes. You could do a lot of different things. Just by configuring the Lost Pixel config.
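For the monthly/yearly toggle Jason mentions, a Playwright test could click the toggle and drop its screenshot into whatever folder your Lost Pixel config is pointed at. This is a sketch: the URL, selector, and folder name are made up for illustration.

```typescript
import { test } from "@playwright/test";

test("pricing page, monthly billing", async ({ page }) => {
  await page.goto("http://localhost:3000/pricing");
  // Hypothetical selector; use whatever identifies the billing toggle.
  await page.getByRole("switch", { name: "Monthly" }).click();
  // Write the shot into the folder Lost Pixel is configured to pick up.
  await page.screenshot({
    path: "lost-pixel/pricing-monthly.png",
    fullPage: true,
  });
});
```

The functional assertion lives in Playwright; the "does it still look right" question is answered by Lost Pixel diffing that PNG against its baseline.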

JASON: Very, very cool.

CHRIS: So much more that we have to integrate.

JASON: So, it sounds like there's...

CHRIS: Just scratched the surface.

JASON: Yeah, barely got to the surface of this, but anybody who wants to follow more, there is the Lost Pixel docs, the Lost Pixel blog, you should also go and follow Chris on the old Twitter. And make sure that you are staying in tune. Also, we just opened a Learn With Jason Discord. So for anybody who wants to, if you do have questions, or you want to learn about something in the future, it's now available. You can go over to that Discord and ask questions there. We have some of our past guests in there, and they would probably be happy to answer questions if you had them. But, yeah, so, lots and lots of very cool things going on. Chris, thank you so much for spending some time with us. Before we wrap up, I want to do one more shout out to our live captioner. We've had Ashly from White Coat Captioning here with us today, that's made possible through the generous support of our sponsors, Nx, Netlify, Pluralsight, and New Relic, all kicking in to make this show more accessible to more people, which I appreciate. While you're on the site, take a look at the schedule. We've got all sorts of good stuff coming up. I am extremely excited. We've also got some that we haven't even written down in here that are just going to be incredible. Cannot wait. Like, surprise to me that I found yesterday. I booked Rich Harris. We're going to talk about Svelte, and it's going to happen in May. So, make sure you get on the schedule, join the Discord, do the things you need to do to stay up to date with what's happening on the show, because it's going to be just packed. Packed, everyone! With that, Chris, any parting words for everybody before we wrap this up?

CHRIS: Just wanted to say I'm super honored that I was able to be here. Thanks for your time, it was fun. I wish we had more time to figure things out, but it was cool where we got. What I would like to add is: it's important to have tests. It's important for your confidence, it's important for your team, for the health in general, for the mental health, because it makes your life so much easier. You don't have to take care of things that are repetitive and boring, and you can just look at the parts that are fun and enjoy building software. And I think with Lost Pixel you have a really easy way to use whatever you have already, just put it there, and reuse your integration tests and other things to really get visual regression tests almost for free, which helps you know what customers see. You don't have to go on the page every time you deploy to check whether the button is still there; it just rolls, right. Continuous delivery. That's it.

JASON: Very, very cool. All right, Chris, thanks so much for spending time with us today. Chat, as always, it's been a blast. Thank you for being here. We're going to raid Ship That Code. Stay tuned for the raid and we'll see you all next time.
