Fast Unit Testing With Vitest
with Anthony Fu
Vitest is a Vite-native unit testing framework that’s Jest-compatible. Core maintainer Anthony Fu will teach us how to use it in this episode.
Resources & Links
Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.
JASON: Hello, everyone. And welcome to another episode of Learn With Jason. Today on the show, we have Anthony Fu. Anthony, thank you so much for joining us. How you doing today?
ANTHONY: I've been good. Thank you for having me. It's my pleasure.
JASON: Yeah, I'm super excited to have you on the show. I've been following your work for a while now, and I am just -- you know, even back when you were doing component CSS stuff and you've been involved in the Vite community. Now you're on Vitest. So for folks who aren't familiar with your career, do you want to give us a bit of a background on you and who you are?
ANTHONY: Okay. So hello. My name is Anthony Fu, and I'm a core team member of Vue, Vite, and Nuxt. I'm also the creator of Vitest.
JASON: Excellent. Yeah, and so today on the show, we want to talk about Vitest specifically, which is a unit testing framework that's built for Vite. Is that a correct assumption?
ANTHONY: I would say it's built on top of Vite. The interesting thing is that even though it's built on Vite, you can use it in a project that doesn't use Vite.
JASON: Oh, interesting. Okay. Very cool. And so in the description, it says that it is a Jest-compatible unit testing framework. So when I think about Jest, I'm thinking of unit tests. I'm thinking of snapshots. Also, because of the way I learned Jest, I always think of React, but Vitest is not framework specific. It is unit testing, is that correct?
ANTHONY: Yeah, I think it's more unit testing, and also a general-purpose, framework-agnostic test runner.
JASON: Got it. So we can use Vitest with or without Vite to run unit tests anywhere we would have dropped in a Jest test before. We can reach for Vitest. I guess, what was your inspiration for building this? Like, why build another test runner?
ANTHONY: Yeah, for me, if I could, I really wouldn't want to do that. This is quite a complex thing to do. You have a lot of aspects to take care of. But I think that's the reason we started working on this one. I think at first we need to talk about Vite itself. So Vite came out, and we realized it has a really nice concept of making all the things on demand, and it has really efficient, fast hot module replacement. So then I got more involved in Vite and finally joined the Vite team. Vite is quite new, and it's based on ESM and based on the browser to do the evaluation, instead of bundling the files. So it's actually serving your files one by one, on demand. Something like that. That's also why it's efficient and fast: we don't bundle. But it means Vite was missing a mechanism to run those modules without the browser. Then it comes to the problem that when we have Vite and people are starting to use Vite, we need something -- like, when you build an app, you probably need tests, right? So there were people looking for a solution to test with Vite. When we did the research, we found that the existing tools didn't work properly with Vite. So when we wanted to write the documentation, we couldn't figure out a best one to recommend. And that was our main pain point for a long while. You kind of have workarounds, but it's not really perfect. Jest was the most popular one at that time, and probably still is. But the problem is Jest has its own pipeline, and if you want to test your component, for example, where you're using Vite to develop, then in Jest you need to write another pipeline to do the transformations to code that Jest understands. At that time, Jest didn't support async transformations, and it didn't have ESM support. That's the tricky thing: Vite is async by default. So basically, if you want to test alongside your app, you need to duplicate your pipeline.
And you need to maintain two configurations, one for Jest and one for your app. Even when you take the time to make that work, there's still misalignment. It cannot be a 100% match in terms of the features and how the code is being run. So at one of the Vite team meetings, one of the core team members said maybe we could have a tool mixing Vite and test -- call it Vitest. We did a search, and the name was available. So we just took it. We didn't have a concrete idea of building a test runner at that moment, but we thought, okay, we've got a really good name, let's give it a try. Something like that. So that's how Vitest came to life.
JASON: I love that it starts from -- it's like a great name. Okay. Well, now that we got a great name, we have to go build this. (Laughter)
ANTHONY: Yeah, exactly.
JASON: Okay. So you said a couple things that I think are really interesting that I hadn't really considered, and that make total sense now that you've said them out loud. One of the big pain points that I think a lot of us have felt through the last few years, five years, is the slow shift from not using ESM to using ESM. So ES modules came out, and they were wonderful, but I think we all thought we were using ESM when what we were actually doing was feeding them to Babel and transpiling to CommonJS. So when Vite came out, Vite is actually -- it's all on esbuild, or partially on esbuild, to use ES modules by default. They're native, not transpiled, right?
ANTHONY: I think it's partial. esbuild is basically for transpiling. It's there to transpile high-level syntax into lower-level syntax, or to remove the types from TypeScript. That's done per file. And during a production build, we're still using Rollup to do the bundling. We still use esbuild a little to do the transpiling, then we feed that to Rollup to do the bundling.
JASON: Yeah, so I hadn't really considered that Jest isn't built for an ES-module-native app. It's built for transpiled apps. Most of the things that I build are toys, so I don't write a lot of tests. So I've only seen the discussion about the pain of testing. And I did see a lot of people saying, you know, I can't get Jest to cooperate with this module because they ported it over to ESM. I think the most famous was the node-fetch module, which did a hard break to ES modules by default. Then you had to change the way you wrote your code, so people would do the major update in Node and then be like, why did fetch break? So it's been a disruptive change, and it feels like it's maybe more disruptive because it's slow. So how have you felt about that as you've been watching? You're pretty close to this, with Vitest being ES module native. How has that transition been as you've watched the Node ecosystem go to modules, the frontend ecosystem go to modules? Where do you feel like the big pain points still are as we try to get through the rest of the transition?
ANTHONY: Okay. I think Node shipped ESM support back in Node 12. So the majority of Node runtimes in use already support ESM. So you can say there's basically no problem running ESM. But the problem is the integration part. ESM is async by default. You can have a top-level await inside your ESM module. Since modules can be fetched from the network, it has to be async. The main problem here is that CommonJS is sync. When you require something, you don't await it. That essentially means you cannot require an ESM module: it's async, and you cannot make it synchronous. So I think that's the main pain point. Some packages written in the context of CommonJS expect that when you require a module, you don't need to await it. And when something is async, everything downstream has to be async. But actually, there are workarounds. You can use ESM modules inside CJS by using dynamic import() and awaiting it. So for me, from a library author's perspective, I kind of prefer to make the transition smoother by shipping your libraries in both ESM and CJS versions. You can use your build tool to output different formats, then use the exports field in your package.json to direct to different files based on the environment that requires your module. That helps: when the environment of your user already supports ESM, they get the benefit of your ESM build. And if not, they can still use your CJS version until they get time to upgrade. If you ship something ESM-only, that basically means you refuse users who are still stuck on CJS. They no longer get your updates, your bug fixes, things like that. Or you need to maintain two versions, which is probably not a good idea.
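For reference, the dual-format setup Anthony describes usually boils down to a package.json along these lines (the package name and file paths here are illustrative, not from the episode):

```json
{
  "name": "my-lib",
  "type": "module",
  "main": "./dist/index.cjs",
  "module": "./dist/index.mjs",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.mjs",
      "require": "./dist/index.cjs"
    }
  }
}
```

Node picks the `import` or `require` entry depending on how the consumer loads the package, so ESM users get the ESM build while CJS users keep working.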
JASON: Got it. Do you think -- so this is a little future forecasting question. Do you think we'll see within a near-term version of Node a drop of CJS? Or do you think Node will continue supporting ESM and CJS for the foreseeable future?
ANTHONY: That's a good question. For me, I kind of think they will keep supporting it. It just takes, I don't know, years to do the migration, but still, I think some code is not ESM ready.
ANTHONY: But I think the good thing is that the leverage is at the highest level of your integration -- your framework. If frameworks support ESM from the top down, they make everything easier. In ESM you can still import CJS, but not the other way around. So we see a lot of new frameworks now supporting ESM, or even making it the default. So I think things can get better and better.
JASON: Yeah, that totally makes sense. Okay. So let's talk a little bit about testing. I hear a lot of folks talk about testing, and depending on who's making the comments, it can feel very different, what gets recommended for what's a good idea, what's a bad idea, what's going to help maintainability, hurt maintainability. I have a feeling that the answer is somewhere in between the more extreme takes that we hear. So when you're building an app, how do you think about the distribution of tests? So there's unit testing. There's integration testing. There's end-to-end testing. There's other types we can do. How do you choose when to put a unit test in place versus deciding that a unit test isn't going to be enough information and you need to go up to an integration test or that an integration test is going to be too flakey, so you want to do a unit test. How do you build a mental model to know that you're not making your life harder with the types of tests you're choosing?
ANTHONY: Okay. So I think I need to have a note beforehand. I'm not an expert in testing. I know nothing about testing before I started working on Vitest.
JASON: Oh, cool.
ANTHONY: Even though I built the tool, it doesn't mean I'm really good at using it, you know. So my perspective is, I think, totally subjective.
ANTHONY: So don't take it too seriously. It's just my personal opinion. So for me, I would use unit testing as much as I could. Unit testing has a small scope, so it can be more efficient and faster to run. You can imagine that if a test needs to fire up a browser, that takes a lot of time. So for me, I choose to test when I'm not confident about a function -- when you cannot see through the function at a glance. Maybe you have an add function, adding two numbers. I probably wouldn't test it. It's something you can directly see: you put in two numbers, and you get the sum. But when you have a really complex algorithm, you cannot directly see what's going on. That's what I'm going to test.
JASON: Yeah, that's a good --
JASON: So when you're looking at something like a unit test, you're looking to see whether a unit test is going to be noisier than just looking at the code. Because I agree: if you have an add function and it just says return number one plus number two, a unit test doesn't give you a lot of additional information over just looking at that function. But when you get into something where you're mapping over data inputs and applying defaults, and it's really important that the result of that function is correct, then having a unit test means that when you change logic over here and it accidentally shifts what's happening in this function that you're testing, you don't find out by breaking production -- because now you're not actually returning the array of, I don't know, user data, categories, preferences, whatever, and you have a forever spinner because that piece of data is missing. Those are the sorts of things that I think are -- yeah, that makes total sense. So the other thing I hear a lot of heated debate about is snapshot testing. Now, snapshot testing is something that Vitest offers. Is that correct?
JASON: So for anybody who isn't familiar, maybe you can tell us what a snapshot test is and what you believe is the right application of one.
ANTHONY: Okay. So again, I'm not an expert in testing, but I'm really in favor of snapshot testing. That could be one of the reasons I created Vitest. If you cannot use Jest, you can use Mocha, which doesn't have built-in snapshots -- and that was a pain point for me. I thought maybe I could use Mocha, find some add-ons to do the snapshots, then wrap it all up in a package. That was the initial idea of Vitest: I'd just like to have a default tool set, something like that. The great part of Jest is that it has all the things included. So that's also the goal of Vitest. For me, a snapshot is a way to run your test the first time and save the output of a function, or some state you want to log, into a file. The second time you run it, it will compare the new result against your snapshot file. If they mismatch, it will tell you there's a difference, and the test will fail. If it passes, that means the function returned the same result as last time, so you're probably good to go. Sometimes a change to the output is expected -- it's on purpose. Then you can trigger a command to update the snapshot. For me, it's basically a lazy way to not write the output manually. The test runner saves the output for you.
JASON: So if I'm using a snapshot, what's functionally happening under the hood is I will say: for this test, I want you to take my -- let's assume we're doing an add-two-numbers function because that's easy. We say, all right, I want you to take the number one and the number five, feed those to add-two-numbers, and compare the result to the snapshot. The function just adds its two arguments and returns the result. The first time you run the test, it runs the function. Then it takes the result -- so in this case the number six -- and stores that in a snapshot file that's then committed to your repo. Is that correct?
JASON: So when I commit that to my repo, whenever the test is run, it's going to say, okay, I'm at this test that says compare the function call to the snapshot. So it's going to load the snapshot and see that the snapshot stored the number six last time. It's going to run the function and say, okay, I got the number six. Does that match the number six stored in the snapshot? Yes? Great, the test passes. Then later, if you were to accidentally -- I don't know how you would accidentally do this -- switch the plus sign to a minus sign and get negative four, now it would say, hey, your function is not returning the same thing. Your snapshot didn't pass. So you can do this manually with a unit test by saying, you know, expect add two numbers, one, five, to equal six. Or you can say expect add two numbers, one, five, to match snapshot, or whatever the right syntax is. And it lets you -- especially when it's something really complicated. Like if you've got a user data object that has a bunch of things, you don't want to have to write out that huge object in a toEqual. It stores the big object in a snapshot file. You can manually verify it is what you expected it to be. Then from that point forward, until you change that user data, it just works. You're happy. So to me, that seems like a perfectly reasonable thing to do. I wonder if some of the backlash against snapshots is maybe an over-application of them to things that change very frequently. So you're basically always updating your snapshots, and it's like, why have these at all if you're never going to keep them between commits?
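The record-then-compare flow Jason just walked through can be sketched as a toy model in TypeScript. This is not Vitest's implementation -- Vitest persists snapshots to files on disk and handles serialization far more carefully -- but the core idea is the same:

```typescript
// Toy model of snapshot testing: record on the first run, compare afterwards.
const snapshots = new Map<string, string>()

function matchSnapshot(key: string, value: unknown): boolean {
  const serialized = JSON.stringify(value, null, 2)
  const stored = snapshots.get(key)
  if (stored === undefined) {
    snapshots.set(key, serialized) // first run: save the output
    return true
  }
  return stored === serialized // later runs: compare against the saved output
}

const addTwoNumbers = (a: number, b: number) => a + b

matchSnapshot('adds 1 and 5', addTwoNumbers(1, 5)) // first run: stores 6, passes
matchSnapshot('adds 1 and 5', addTwoNumbers(1, 5)) // passes: still 6
matchSnapshot('adds 1 and 5', 1 - 5)               // fails: -4 does not match 6
```

Updating a snapshot is then just overwriting the stored value on purpose, which is what pressing `u` in Vitest's watch mode does for you.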
ANTHONY: Yeah, exactly.
JASON: Okay. Cool. So let's talk a little bit about what we want to do today. So I want to get my head around how to use Vitest, and I think a good way to do that is maybe we can just start with a project and write out a function. You know, figure out how we would test that. Maybe we can test it manually and put the inputs in and know what the output is going to be and kind of do a straight-up equal assertion in the test. Then maybe we can also do a snapshot test in the same way to show how that might save us some time or make our lives a little easier. Then we can break our test and update snapshots. All those good things to show how it works. Does that feel like a reasonable next step?
ANTHONY: Okay. Yeah, sounds good.
JASON: All right. Then I'm going to take us over into pair programming mode. And the first thing I'm going to do is I'm going to give a shout out to our closed captioning. This episode, like every episode, is being live captioned. You can see that on the homepage. In your Twitch viewer, you'll get a closed caption button, once it loads up here. There it is. So you can just toggle on those closed captions if you want them. They're also being posted on the homepage of the site so that you can go and see those. That captioning is being done by Rachel from White Coat Captioning. Thank you for being here. It's made possible through the support of our sponsors. We've got Netlify, Nx, New Relic, and Pluralsight all kicking in to make this show more accessible to more people, which I very much appreciate. We are talking to -- let's see if I can type today. Still getting used to this new keyboard. We're talking to Anthony Fu today. Make sure you go and give Anthony a follow on Twitter. You know, just -- are you hanging out on Twitter these days, Anthony? Is there a better place for me to share?
ANTHONY: Yeah, you can do Twitter. Otherwise, I'm also on Mastodon. It's in the description.
JASON: Here we go. Let's go over to here.
ANTHONY: Thank you.
JASON: Excellent. All right. So we're going to be talking today about Vitest. Vitest is blazing fast unit testing built on top of Vite. That's extremely cool. So if I want to get started, what should my first step be to setting up a project with Vitest?
ANTHONY: It depends on whether you want to start a new project or install Vitest in an existing project. I think the easiest thing is to use npm install Vitest.
JASON: Okay. So I'm going to -- for the sake of keeping things as simple as possible, I'm going to put us in a new project. So I'll do -- what is it, npm create Vite. Right? It'll just give me a blank project.
ANTHONY: Yeah, I think so.
JASON: Yes. Okay. So we're going to do Vitest. Should I do vanilla, something else? What do you think?
ANTHONY: Maybe we can pick Vue or React. One of them should be fine.
ANTHONY: Yeah, TypeScript, please.
JASON: Got it. Okay. So let's go into the Vitest. Then I'm going to run npm install. And you said I need to do npm install Vitest, right?
JASON: Do I need TS or anything? Just straight up Vitest?
ANTHONY: Just straight up Vitest. I don't know what TS is.
ANTHONY: Yeah, we can start there, maybe just a utility.
JASON: Okay. So we'll do util. We'll call this one -- we'll do something like load user data. That'll give us a nice, somewhat complicated output. So we can export function, load user data, and I'm just going to kind of make some stuff up here. So we'll pull in a username. That username will be used to get -- I don't know. We can maybe even do some, like -- oh, boy. I'm faking my way through some of this stuff just to figure out -- we can do a little variability in this testing without actually setting up a username or something. Or like a database, I mean. So we'll go with username. Nope. Oh, boy. Still learning this keyboard. Username, and we'll have one for you. Oh, my goodness. And we'll do a name. And maybe, I don't know, like projects. Then we can do Vitest. You're a Vite maintainer as well, right? I'm just going to keep that simple, if you don't mind. Then we'll do another one for me, and it'll be a username of jlengstorf. And we'll do name Jason. Projects, we'll do burgers and cheese.
ANTHONY: All right.
JASON: Oh, then I have typos in my code. Okay. So what I want to do is we're just going to return users.filter. No. What's the one for getting a single user?
ANTHONY: You can use find with a predicate.
JASON: That's the one. Yes. And we want -- oh, my goodness. I bought an ErgoDox, and I'm still not capable of typing with it. So we're going to do user.username equals username. Then we need to include that as an argument. So we're building out kind of a mess for ourselves here. But that's okay, because that's what we're trying to test, right? So we're going to get our user. That user is going to equal load user data of username. Then, I don't know, maybe we want to do something -- maybe we could even extend it a little bit. But for now, I don't know, we could do user.coolness. Then set that to be, like -- oh, my god. Okay. If the username equals mine, it's negative one. Then we'll return user. Okay. So it's unhappy with me because we inferred this type. So we can fake this, right? Hold on. Am I supposed to use interfaces or types?
ANTHONY: Both would work.
JASON: So we'll go with username is going to be a string. The name is also going to be a string. The projects will be a string array. And the coolness will be a number. Okay. And I guess that would also be, like, potentially undefined.
JASON: And this one needs to return a user. So --
ANTHONY: Yeah, maybe you can assign after the user.
JASON: Oh, I got you. So we'll go with user, like that.
JASON: Okay. It's unhappy because --
JASON: So do I need to do one of these? Nope.
ANTHONY: You can actually add -- how do I say that? Yeah, this too.
JASON: What were you thinking?
ANTHONY: Probably this is the better way.
JASON: Okay. All right. Yeah, so we'll do something like this. Then if you -- what are you unhappy about? User is possibly undefined. Yeah, so we'll do something like this. If no user, we can, I don't know, throw an error.
ANTHONY: Yeah, that's a good one.
JASON: So by doing this, we've got -- this is a pretty decent thing to test. We have a data access function here. I'm going to move this up to the top because it's going to cause my eye to twitch if I don't. So we have our type. We're loading users, and then we are getting that user and modifying it somehow. So we've got a fake database access. We've got a function that would actually load that. And I should probably make this async because we would end up getting -- it would end up wanting this to be from an actual database. So we can just fake the whole thing and make it seem like a real network call to go off to our database and get a user by their username.
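Reconstructed from the conversation, the utility they build on stream looks roughly like this (the field names, usernames, and coolness rule follow the episode; treat it as a sketch rather than a verbatim copy of the file):

```typescript
interface User {
  username: string
  name: string
  projects: string[]
  coolness?: number
}

// Fake "database" -- in a real app this would live behind a network call.
const users: User[] = [
  { username: 'antfu7', name: 'Anthony Fu', projects: ['vite', 'vitest'] },
  { username: 'jlengstorf', name: 'Jason', projects: ['burgers', 'cheese'] },
]

// Async to mimic a real database lookup by username.
export async function loadUserData(username: string): Promise<User> {
  const user = users.find((u) => u.username === username)

  if (!user) {
    throw new Error('no user found')
  }

  // The arbitrary derived field added on stream: Jason rates himself -1.
  user.coolness = user.username === 'jlengstorf' ? -1 : 100
  return user
}
```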
ANTHONY: Yeah. Sounds great.
JASON: Okay. So the next thing I would want to do -- oh, it needs to be a promise. That's fine. We can make that happen.
ANTHONY: Yeah, that's TypeScript.
JASON: Love it. I love that it tells me that stuff. Okay. So now we've got our TypeScript happy. Our function should work, by all estimations here. And that should give us something to test. So I want to write my first Vitest test for our load user data function. How should I do that?
ANTHONY: You can create the test file, and where you put it is totally up to you. You can create it alongside your function, or you can have a dedicated test folder, something like that. As long as the file name ends with .test.ts or .test.js.
JASON: Got it. Okay. So we would do a load user data, dot test, dot ts. Then I'm going to need my function here, my load user data. So we'll get that and what else do I need? Do I have to import anything from Vitest? Is it magic? How does it work?
ANTHONY: So the first thing that's different from Jest is that the test utilities are not directly available globally by default. We would recommend you explicitly import them from Vitest, so you can write import describe and test from Vitest.
JASON: Okay. And I can hit control space to see what's available. There's a lot in there. That's awesome. Then I can do test. Great.
ANTHONY: And probably you also want to expect. But it's okay. You can add it later.
JASON: Get an expect. Okay. So now I can describe my test, and we want load user details. And this works like Jest, where --
ANTHONY: Yeah, it has a function.
JASON: Then each one of these then becomes a test like this, right?
ANTHONY: Yeah, exactly.
JASON: So we want to load user data as expected. I don't know. That's a bad test name. Y'all can yell at me on the internet for that later. Okay. So if I do a const user equals -- I need this to be async so we're going to do an await. Load user data. We'll do antfu7. And this needs to be async. Then we would say expect user. Then is it like a dot to equal?
ANTHONY: Yeah, to be or to equal. Both would do.
ANTHONY: To be -- sorry. toBe will check the reference. So for an object, that will always fail. If you compare a primitive value, it should be fine. So toEqual would be better here.
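Anthony's point comes down to reference versus structural equality: in Vitest (as in Jest), toBe is roughly Object.is, while toEqual does a deep comparison. A plain-TypeScript illustration of the underlying semantics (not Vitest's actual matcher code):

```typescript
// toBe is roughly Object.is: references for objects, values for primitives.
const a = { username: 'antfu7' }
const b = { username: 'antfu7' }

Object.is(a, b) // false -- two distinct objects, so expect(a).toBe(b) fails
Object.is(a, a) // true  -- same reference
Object.is(6, 6) // true  -- primitives compare by value, so toBe is fine here

// toEqual is a deep structural comparison (sketched via JSON for simplicity;
// the real matcher handles undefined, cycles, class instances, and more):
const deepEqual = (x: unknown, y: unknown) =>
  JSON.stringify(x) === JSON.stringify(y)

deepEqual(a, b) // true -- same shape and values, so expect(a).toEqual(b) passes
```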
JASON: Equal will be better here?
JASON: Okay. So we've got that.
ANTHONY: Probably you can run the test to see.
JASON: Yeah, let's run it and let it fail. What a great idea. So I'll just delete that for now. Then we'll come in here, and I'll just do one of these so it doesn't yell at me. Then to run the test, I actually see some cool stuff here where it's like telling me I can do it already.
JASON: I don't know where that comes from. What plug-in is that? That's a great plug-in. I wish it would tell me what it was. Oh, that's Jest or something. So that's fine.
ANTHONY: Actually, you can --
JASON: Sorry. Go ahead. We have a delay, everyone, where I'm talking over Anthony. I apologize.
ANTHONY: I'm sorry. We do have a Vitest VSCode extension. But I think later we can introduce it.
JASON: Yeah, that would be great. So to start, how do I run this the first time? I don't think I built anything into the package to allow testing. So do I need to add a command? Can I run something from the command line?
ANTHONY: If you prefer, you can add the script test and then Vitest.
JASON: Okay. So we'll go test and Vitest.
ANTHONY: Yeah, that's it.
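The script being added is just this package.json fragment:

```json
{
  "scripts": {
    "test": "vitest"
  }
}
```

Running `npm run test` then starts Vitest, which stays in watch mode in a local terminal and runs once in CI.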
JASON: So out here, I can do an npm run test.
ANTHONY: Okay. It says fail. Actually, I would recommend you to use the terminal from VSCode so you can see the code.
JASON: Yeah, let's do it. So down here, terminal. I can do an npm run test. And it's right alongside my code. Great. Okay. So it was expecting an empty object, which was incorrect. Instead, it received our actual data.
ANTHONY: If it's me, I would just copy the output.
JASON: Oh, yeah. Good call.
ANTHONY: That's doing the manual snapshot already. And this is one of the interesting features of Vitest: by default, Vitest runs in watch mode, like dev mode. It also smartly knows that when you are in CI, it should only run once. But when you're in a real terminal, it will start in watch mode. So you directly get feedback: when you change the test, it will automatically rerun.
JASON: That's excellent. So the next thing we want is we want to make sure that it is setting the right coolness levels for us. So we'll make another async function. We will do one of these. Then I'll get me, load user data. We'll pass in my username. Then for yours, grab this one. Then we should be able to do expect -- now, this one we want to be exact. So I want this one to be negative one. And I want yours to be 100. Okay. And now our tests are continuing to pass. It's doing the coolness set correctly. Then we also need to check -- I want to check what happens when we do it wrong. Like we check for a user that doesn't exist. We're going to get that error. So how would I check for the error being thrown?
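Assembled from the steps so far, the test file looks roughly like this. It needs the Vitest runner (`npm run test`) to execute, and the import path is an assumption based on the file name used on stream:

```typescript
import { describe, expect, test } from 'vitest'
import { loadUserData } from './load-user-data'

describe('load user details', () => {
  test('loads user data as expected', async () => {
    const user = await loadUserData('antfu7')
    // Deep comparison -- toBe would fail here because it checks references.
    expect(user).toEqual({
      username: 'antfu7',
      name: 'Anthony Fu',
      projects: ['vite', 'vitest'],
      coolness: 100,
    })
  })

  test('sets coolness correctly', async () => {
    const me = await loadUserData('jlengstorf')
    const anthony = await loadUserData('antfu7')
    expect(me.coolness).toBe(-1)
    expect(anthony.coolness).toBe(100)
  })
})
```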
ANTHONY: Yeah, maybe you can start to write a test case that's expected fail. Just fetch -- you're just putting the wrong argument to the function. Then you can see how it works.
JASON: Okay. So we'll do an async. And we'll do no, not real. That'll be one of these. Await. Load user data. Fake user.
ANTHONY: Then you can say the test failed, no user found.
JASON: And that is what I want. So I want to catch that and check that my error message is no user found.
ANTHONY: Yeah. So this failure is actually what you expected, right? A lazy way to handle it is to add a .fails after test. No, before the description.
JASON: Oh, I got you. So test.fails. Oh. That's cool.
ANTHONY: And this is the lazy way to do it, since you don't really test what the error is. You just say, okay, this just fails, and I don't care how it fails. So we should probably remove it. Okay. Then we can have an expect. Since the error is thrown before the expect, it cannot be captured. So instead, in the expect, you should put the function you expect to throw. So you copy -- yeah, you just move the code into the expect and pass it as a function.
JASON: Okay. So I now expect my function, and I would want this to throw error.
ANTHONY: And you need to turn it into a function. So arrow function using await.
JASON: Expect arrow function. So I need to set this as async as well, I assume.
ANTHONY: Yeah, or we can remove them altogether. And you can save it to see if the error worked. No, still not working.
JASON: Unhandled rejection.
ANTHONY: Oh, promises are a bit different. When you're testing a sync function, the call throws directly. With a promise, it actually gets rejected. So you can change the assertion to rejects. Probably just .rejects. I can't remember. Without "to."
JASON: This expression is not callable. So do I just --
ANTHONY: And dot, I guess. Oh, what's that? I can't remember. To have error, something like that.
JASON: To have returned.
ANTHONY: To throw error? Probably to throw.
JASON: Hey, there we go. Okay. Can I validate the error message? Like if I want to make sure I threw the right error?
ANTHONY: Yeah, you'll see on the type you can have a string or you can have a regular expression.
JASON: Got it. Okay. So to start, let's make sure it fails. Just a generic error. So that fails. It expected "something went wrong." It received "no user found." Perfect. That's exactly what I wanted. So I can take my "no user found," copy that, and throw it in. I love that we're just doing snapshot testing the hard way. But this is great. So now we can be reasonably certain that our function is going to do what we expect it to do. We checked the moving parts. We're loading a user. We know that we'll catch it if the user doesn't exist. We know that we're setting the coolness properly, and we know the shape we're getting back is the user that we expect, right? But this feels a little repetitive. I feel like I'm sort of duplicating effort here, where this data and the coolness and everything is all kind of defined in the code, and I don't know that it really makes sense to make my team, any time we need to add a new property, go through and update every test in the test suite -- copy/paste this property into every test. So that does feel like a great use case for snapshot testing, where we just need to make sure the output doesn't change unexpectedly, but we don't necessarily need very high precision on everybody copy/pasting every field into every test.
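The error case they land on, written out: because loadUserData is async, the failure arrives as a rejected promise, so the assertion goes through rejects (again a sketch to run under the Vitest runner):

```typescript
import { expect, test } from 'vitest'
import { loadUserData } from './load-user-data'

test('throws when the user does not exist', async () => {
  // The rejection is awaited so the test fails if nothing is thrown.
  await expect(loadUserData('fake-user')).rejects.toThrowError('no user found')
})
```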
JASON: Okay. How would I change this over to a snapshot?
ANTHONY: It's quite easy. You can just take the toEqual with the entire expected object and change it to toMatchSnapshot.
JASON: So we're going to change this to match snapshot.
ANTHONY: And save. And check out the file. You actually have a folder created.
ANTHONY: And here it contains the data of your output. So I think the best practice here is maybe you should do a commit. So you will commit with the snapshot. Next time you change it, you will see the difference.
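Mechanically, a snapshot test boils down to serialize-and-compare. Here's a toy sketch of the idea -- not Vitest's actual implementation, which serializes with its own formatter and persists to a `__snapshots__/<file>.snap` file rather than a Map, which is exactly why committing the snapshot makes the diff reviewable:

```typescript
// Toy model of snapshot testing: the first run records the serialized value,
// and every later run compares against that recorded copy.
const store = new Map<string, string>();

function matchSnapshot(name: string, value: unknown): boolean {
  const serialized = JSON.stringify(value, null, 2);
  if (!store.has(name)) {
    store.set(name, serialized); // first run: write the snapshot
    return true;
  }
  return store.get(name) === serialized; // later runs: any drift fails
}

const user = { name: "Jason", favoriteFood: "pizza" };
console.log(matchSnapshot("user", user)); // true: snapshot created
console.log(matchSnapshot("user", user)); // true: output unchanged
console.log(matchSnapshot("user", { ...user, favoriteFood: "sushi" })); // false: output drifted
```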
JASON: Got it. Okay. So let's add everything here, and I'll do a git commit. And we'll say feat: first commit with working snapshot test. So now when we look, there's no unchanged anything in here. Our git history is clean. And if I run npm run test again, we're still running. Everything is good. And now it's time for a feature change. We want to update this. So both you and I are going to get a favorite food, and I will put in -- let's see. Mine is going to be pizza today. What about you? What's your favorite food?
ANTHONY: Okay. Just go sushi probably.
JASON: Now I want sushi. All right. Then we're going to update this to be favorite food. That's going to be a string. Now that's happy. And when we save, what's unhappy? This is good because it doesn't match. So we wanted this to fail. But we don't want it to continue to fail.
ANTHONY: Yeah, exactly.
JASON: Okay. Oh, and it tells me exactly what to do right here.
ANTHONY: It tells you, yeah. You can press U if you think this change is expected.
JASON: Amazing. Now we're back to passing.
ANTHONY: Yep, then you can check the file, the snapshot.
JASON: So here's the changes I made. We can see the favorite food coming in. And here's the updated snapshot. And nothing unexpected here. This is exactly what we expected to happen. That's why we would update it. Great. So then I can come in here.
ANTHONY: Yep, and that's when you do the review. You can directly see the input and output changes.
JASON: Now we're all set. We got everything in here. We are in excellent shape, I think. So this is amazing. We're in really good shape here. We've got the ability to write out logic. We're doing this in TypeScript, so you can see how that all works together. And TypeScript saved us from some errors that we would have had otherwise. Like I forgot to make this a promise at first. So I would have gotten weird type errors, and my auto complete wouldn't have worked and stuff like that. Having the logic in here with the tests we've written, we can feel pretty confident that whatever we're doing is not going to break the site because if we made changes that change the output of the function, it's going to get caught by that snapshot test. And the logic that's custom, like this coolness setting, is explicitly tested. So we're not just snapshot testing everything. We're explicitly testing the code we wrote. We're snapshot testing the output of the function, which is a big old object. And we're also testing our error handling explicitly as well to make sure that if we send bad data, we get what we expected to come out of it. So this is great. And this is running real fast. You know, I love the tip of making it a git commit each time so you can see the diff not only in the code but in the snapshot as well. That is very, very cool. Somebody is asking if we can see the snapshot one more time. So this is the snapshot code. Then the test itself is this one here where we're saying expect the result to match our snapshot. So we're just telling Vitest, take whatever the result in user is and store it in this snapshot file. All right. So we've got about 30 minutes left on the clock. Is there anything else -- like, what else should we showcase here? Chat, if you've got questions that you want to see, throw them in. Or in the meantime, we'll dig into something else. So Anthony, what should we look at next?
ANTHONY: Okay. So we switched from manually testing the output of the functions to snapshots, which saves us the effort of updating them manually. But it can introduce a new problem: we need to jump back and forth between those two files. So it's not really straightforward when you make a change. You need to go check the snap file. So the interesting thing is that we have another function to solve this. You can change toMatchSnapshot to toMatchInlineSnapshot.
JASON: Like that?
ANTHONY: Yeah, and save it.
JASON: Oh! Very cool. Okay. And so this did exactly the same thing, except instead of creating this file, it's just re-writing my test to do -- like writes the code for me.
ANTHONY: Yeah, exactly.
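For reference, the change is a single method name; on the next run, Vitest writes the serialized value into the call itself. This fragment is illustrative (the user shape is made up, and it only runs inside the Vitest runner):

```typescript
// Before: the serialized value lives in __snapshots__/user.test.ts.snap
expect(user).toMatchSnapshot();

// After: Vitest fills in the template string on the next test run
expect(user).toMatchInlineSnapshot(`
  {
    "coolness": 10,
    "favoriteFood": "sushi",
    "username": "antfu",
  }
`);
```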
JASON: That is extremely cool. That was like -- okay. This was great because that was such a Steve Jobs, oh, one more thing. So well done. (Laughter) You got me good.
ANTHONY: Yeah, it's good. Great. I'm glad you like it.
JASON: That is an incredible feature. And I love this because you're right. One of my resistances to snapshot testing is the fact that it's sort of magic. It's like, well, match the snapshot, and if I go look at my snapshot files, I can see what's going on. But who's actually opening the snapshot? We don't. We don't open them. We just kind of say it's fine. So this is amazing because it's putting it right in my face so I have to look and say, oh, that is what I meant to happen. So this is wonderful. Does that mean, then, I can delete my snapshot?
ANTHONY: Yeah, exactly.
JASON: Okay. So then I can delete my snapshot. And if we come over into the Git thing, we can see we've gone to inline snapshot, and we've deleted this file altogether. I can save all that and say switch to inline snapshots. Okay.
ANTHONY: And you can go to terminal and terminate the process.
ANTHONY: And you can use npx vitest, which is the same. And we need to add an additional argument, dash u, which is to update the snapshot. Or you can use the full name, dash dash update.
JASON: Okay. So we've got npx Vitest dash u. So this would update the snapshot automatically if we were to change them, right?
ANTHONY: Yeah. So here, combining with inline snapshot is a funny thing. You could go to the test and change my ID into yours.
JASON: Oh, okay. Yeah, that's a great idea. Oh, it just does it automatically.
ANTHONY: And you change it back, just switch back. So when you have complex logic to do, you can actually play with the function by changing the input and directly seeing the output in the test file. And you can change it back again.
JASON: So now that we've added the dash u, if I go in here and add something else, like we're going to say, I don't know, we'll say -- I'm just going to add something easy.
ANTHONY: Okay. I'll go chips.
JASON: We need snacks. Always need snacks.
ANTHONY: Okay, snacks.
JASON: So now that we've -- and as soon as I've changed that code, because I'm in update mode, that dash u, it's saying I know you're trying to change the snapshots right now, so I'm not going to make you do the failure and then manually update. You are in update mode.
ANTHONY: Yeah, you always update the snapshot.
JASON: It's a little thing, but that stuff is so nice. It's just a really pleasant, you know, quality of life improvement when I know I'm going in to change data, and I don't want to have to manually update all the snapshots. So I can just hit dash u, go tweak my code, and then review this and make sure, yep, that is, in fact, what I want. And we can test it one more time over here. And it updates. I can see everything came back the way we expected. So we flip it back again. It's doing exactly what we want. So then I'm happy, and I can come over into my commit and just double check one more time. Let me pull this back over to full screen. What are you doing? So we can see that we added the snacks. We can go in here and see that we added snacks here and updated our data. So now we have -- and I mean, this is great. This just feels so nice to work with. It looks very -- like, I'm not intimidated by this. I feel like sometimes when I start writing tests, I'm getting inside my own head about it. Like, oh, I've made this too complicated. Now I'm never going to be able to update this. But none of this feels like it's not something I can keep up to date. I think maybe this is a good opportunity to talk about, like, where is the high water mark for unit testing? So this one, I'm testing a very specific function with some very specific logic. At what point do you think an abstraction is too large and you need to almost break it down so that you can unit test individual pieces versus trying to unit test the whole thing, which would be enormous?
ANTHONY: Okay. So I think the main barrier to doing unit testing is that most of your functions don't have one clear input. The function relies on other things -- like the network or some other part. So that can make things harder to test. So actually, if you have the mindset to start with unit tests and stick with it, it actually helps you make your functions more pure, to avoid side effects as much as possible. So that can be a way to help you maintain your code: since your goal is to get everything unit tested, you need to write functions that are easily testable and easy to understand.
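What Anthony describes can be sketched like this -- the function names and URL are hypothetical. The decision logic gets pulled out into a pure function that takes data as input, so the part worth unit testing no longer depends on the network:

```typescript
// Hard to unit test: the logic is tangled up with a network call (a side effect).
async function getUserCoolnessImpure(id: string): Promise<number> {
  const res = await fetch(`https://example.com/users/${id}`); // side effect
  const user = await res.json();
  return user.name === "Jason" ? 10 : 5;
}

// Easy to unit test: pure logic that takes data in and has no side effects.
// The fetch can live in a thin wrapper (or be mocked) at the edges.
function getUserCoolness(user: { name: string }): number {
  return user.name === "Jason" ? 10 : 5;
}

console.log(getUserCoolness({ name: "Jason" })); // 10
console.log(getUserCoolness({ name: "Alan" })); // 5
```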
JASON: Gotcha. There's a question in the chat, which is a great question. Is it possible to output coverage?
ANTHONY: Yeah, and you just pass dash, dash coverage.
JASON: So we'll do npx vitest dash dash coverage.
ANTHONY: And since the coverage is optional, it'll ask you to install a package. You can just type yes. Then run it again. And that's it.
JASON: Nice. Cool. So our one function we've done, our one file that we've tested, has 100% coverage. Yes!
ANTHONY: Yeah, nice.
JASON: Amazing. So that's great. This is great. And I love that the second half of the question was do I need to install Istanbul instead. It's so wonderful you don't have to. I know we installed an extra package, but I didn't have to go learn that. I just said show me coverage, and you were like, you got to install this first. I just hit yes, instead of me having to Google how do I show coverage with my Mocha test or whatever it is. And then you have to install Istanbul, once you've learned what Istanbul is and all those other things. So really, really good stuff here. Excellent. This is very, very cool. So, let's see. Is there anything else that you want to show before we start to wrap? Because I'm a little worried about trying to go in a completely new direction, like starting to test Vue components or something, just given the amount of time we have left. But I don't want to cut you off if there's anything you have left to show off here.
ANTHONY: Okay. I was just mentioning, we were talking about unit testing or integration testing. It's actually not really possible to always write pure functions.
ANTHONY: Sometimes you do need to have some side effects, and that can be part of your code or your logic. So Vitest also offers a feature to mock certain things, like, for example, the time in a test. Also, you can mock a module by passing the module path. Or you can mock the network. Some of this can be quite complex and probably requires you to read some documentation. But if you have time, I think we could do a quick one.
JASON: Yeah, let's do it. Maybe we can even do a quick mock of this load user function. We can just mock that and give ourselves a fake test user.
ANTHONY: Okay. We can go to our test file. Then with import, you import a new thing called vi, which is Vitest, the first two letters. The same thing as Jest. So in Jest, you have a global variable called Jest. On the top level, you have vi.mock.
JASON: We want to go with vi.mock.
ANTHONY: Then you pass your module. You can copy from load user data.
JASON: Do I want to leave out the .ts or keep it?
ANTHONY: You can match it the same as your import. Then you pass a function, and in the function, you return an object to represent your whole module. So here we can return an object containing the function, load user data.
JASON: Okay. So we have our load user data.
ANTHONY: And you need to return a function.
ANTHONY: Yeah, you don't need the extra error function. Just return an object.
JASON: Oh, you just return an object. Perfect. Less complicated is better. So then I've got my actual function here. And this can return -- we'll go down here and grab this, get ourselves a fake user. And this fake user is going to be -- oh, wait. No, we don't want that because then we'll have a fake user. So we'll go test user. And our test user is going to be just an absolutely terrible person. They don't want snacks. Their only projects are -- you know what, they don't have any projects. They're going to have no projects because we don't talk crap about projects here, everyone. And their name is going to be, let's see, who's on my bad list? Alan. Alan's favorite food is going to be boiled spinach. Gross. They're as uncool as me. So that'll give us a fake user with the username test user. Then we will --
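Put together, the mock they built looks roughly like this. The file path, user shape, and values are from the demo and are illustrative; note that Vitest hoists `vi.mock` calls above the imports at run time, which is why the declaration order doesn't matter (this fragment only runs inside the Vitest runner):

```typescript
import { test, expect, vi } from "vitest";
import { loadUserData } from "./load-user-data";

// Hoisted above the import at run time, so loadUserData is already mocked
// by the time the test runs, even though vi.mock appears later in the file.
vi.mock("./load-user-data", () => ({
  loadUserData: () => ({
    username: "test-user",
    name: "Alan",
    favoriteFood: "boiled spinach",
    projects: [],
    coolness: 0,
  }),
}));

test("always gets the mocked user", () => {
  // The mock ignores its arguments, so the input no longer changes the output.
  expect(loadUserData("any-id").username).toBe("test-user");
});
```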
ANTHONY: You can run the test again.
JASON: Run the test again. Oh, wait. I screwed that up. It's just npx Vitest. Okay. So it fails. And it failed because our coolness level is wrong. Our test user is wrong. This is the other thing, too. We're forcing data to come back, right. So we need to do one of these.
ANTHONY: No, the function is completely mocked. So since our mock function doesn't take any arguments, it doesn't change when your argument changes.
JASON: Right. Okay.
ANTHONY: So that's totally expected.
JASON: Got it, got it. And then I will -- okay, so I don't think we should update this, actually, because what we've effectively done here is made it so our function is not testable because we're overriding everything that happens from it. So we probably don't want to actually commit this, but the fact that we can is very cool. So where I would see this being really useful is like, for me, I don't want my tests to be testing somebody else's API, for example. So if I had a function that loaded data from GitHub, I don't want my CI to fail if GitHub's API is flakey or I hit the rate limit or something. So I can make a mock. My logic is what I'm testing, not GitHub's API responses. So in this case, I think the way we've done this mock is probably pretty antithetical to what we're trying to do because we completely removed our ability to test any logic from this function. But what's really nice here is that it does work. The other thing that I like is that it worked despite the fact that this is loaded before we do the mock. I feel like that's something that's tripped me up in other languages or in other testing harnesses where if you import something and then try to mock it, it's already imported so the mock doesn't work, and you get into all these confusing things where you just have to know stuff about import order and inheritance and all those things. This, to me, feels intuitive. I have a thing I need to test. I want to mock what's happening so I can define the module that I want to mock. And the declaration order doesn't matter. Because my intent is to mock the thing. So I think that's -- yeah, this is really nice. I'm going to leave this in, commented out, for folks. Oh, nope. Apparently I'm not -- oh, my goodness. All right. Yes? We're good? Okay. Sheesh. Save, then I need to hit command, not space. And that's the comment button. There we go. All right. So then once we have that, our tests are back to passing. 
And there is a commented out example in the code of how to mock something. Okay. Anything else you want to show off before I wind us down here?
ANTHONY: No, this is great. Thank you.
JASON: No, this was so good. Testing is one of those things. I feel like when I start thinking about writing tests, it causes anxiety for me because I'm like, oh, there's just so much to it. There's so many things I have to think about, all the different ways stuff moves. The fact we were able to write 100% test coverage, almost by accident, by thinking about the way we would be bringing in data and testing our code, I feel like that is a testament to the ergonomics of Vitest. I never felt like I was thinking about the testing tool. I was thinking about the logic of my code. Then saying, yeah, I want this function to equal this thing. I want -- if I load your data, I expect to see your data. And this inline snapshot is such a killer feature. So I think this is -- you know, I'm excited about this because looking at this, it makes me actually feel like I could manage and maintain tests, even on projects that aren't, you know, huge, long-lived, multi-user projects where we have to do testing. I want this for my personal sites because I keep breaking my stinking API because I don't think about it. Then one part of my code calls that API and I forgot about it and it breaks. I'm like, oh, no, I took down my scenes. I did that the other week. So tests like these, I'm like maybe I should just spend a day and write tests for this stuff I keep breaking. Well, Anthony, thank you so much. Where should people go if they want more information? So I'm going to drop the Vitest site one more time. Is there anywhere else you would recommend people check out if they want to get deeper here?
ANTHONY: Yeah, I think the documentation should contain all the information. If you prefer to have a video, I do have a course. Like I'm still doing it. But it's in Vue School.
JASON: This one?
ANTHONY: Right. It's just the first few episodes so far, and it takes me quite a lot of time to make them.
JASON: Sure. This is great, though. 11 lessons is not nothing. So make sure you go and follow Anthony on Twitter so that you can keep up to date on what's happening in and around that ecosystem. And also get notified whenever there are new lessons released on Vue School. In addition, this episode, like every episode, has been live captioned. We've had Rachel here with us from White Coat Captioning all day. Thank you so much, Rachel, for being here. And that is made possible through the generous support of our sponsors. Netlify, Nx, New Relic, Pluralsight. Y'all keep sending me cash so I can keep making the show better. Thank you so, so much for doing that. We got a lot going on, on the schedule. Make sure you set your calendars. We will continue with the Tuesday solo shows. I'll be back next Tuesday from 9:00 to noon Pacific. Then on Thursday, we're going to have Kapehe from Sanity come in and look at Sanity Studio v3. That just launched pretty recently. And we're going to dig into that. Thursday after that, we're going to do visual editing with Next.js and Stackbit. It's very cool how far that stuff has come, especially if you haven't looked at it in a while and you assume it's the early days of visual site builders. There's a lot of cool stuff happening there. Then, starting the week of February 20th, we're doing something really fun. We're going to do observability week. So this is in partnership with New Relic, one of the sponsors of the show. We're going to do four straight days on the 20th, 21st, 22nd, and 23rd where we're going to dig into all sorts of things around monitoring, observability, logging, site reliability, all these things that help keep your sites running in production at scale and how that can work for you, whether you are an individual dev or a thousand-plus-person company. So make sure you mark your calendars. Add on Google Calendar. Subscribe on YouTube. Follow on Twitch. 
Do whatever it is you want to do to make sure you don't miss any of these wonderful episodes coming up. With that, we're going to go find somebody to raid. Anthony, thank you again for spending some time with us today. This was an absolute blast. Thank you all so much. We'll see you next time.
ANTHONY: Thank you for having me. Nice talk. See you.