
Measure Deployments With Sleuth.io

Sleuth is a way to get metrics about deployments that will "make shipping less stressful for developers" — Don Brown will teach us how!

Full Transcript


Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

JASON: Hello, everyone, and welcome to another episode of Learn With Jason. Today on the show, we're bringing in Don Brown. Don, how are you doing?

DON: Pretty good. How are you doing?

JASON: I'm doing great. I'm happy to have you here. I'm excited for today. We're doing stuff that I'm not particularly familiar with, but we'll talk about that in a minute. First, let's talk about you. For folx who aren't familiar with your work, do you want to give us a little bit of a background?

DON: Yeah, sure. I've been programming for 20 years or so, different places. I think we were just talking about working for the military -- I used to work with the US Navy, doing some classified secret work. That was pretty interesting. And I was at my previous company back when they were, like, 40 people --

JASON: Oh, wow.

DON: -- and left when there were about 5,000. Did a couple of little things, and now I have my own company. A couple of my old friends and I decided to create a new company to tackle deployment metrics.

JASON: Yeah, so deployment metrics are something that I have a kind of gut feeling about -- I probably monitor some of these -- but I don't know that I've ever formalized the definition in my head. So can you talk us through: what are deployment metrics?

DON: Sure. So, when you're looking at metrics, obviously, like any metrics, there are hundreds of them, thousands of them; you can create your own. When I talk about deployment metrics, I'm talking about the DORA metrics. In 2016, the DevOps Research and Assessment (DORA) group sat down and said, we want to do a giant study to understand who our top performers are and why they are top performers, to help the rest of us learn to get better. In all the research, they came across four metrics that strongly correlated with high-performing teams, and then dove into them. So, what those metrics are: deployment frequency, so how many times you deploy. Change lead time -- the way they defined it was the time from first commit until it's in production. Change failure rate -- the ratio of how many deploys were successful and how many were not; maybe rolled back, caused an incident, or whatever. And finally, mean time to recovery: if you detect there is a problem, how quickly can you get that fix out and have the problem resolved? They found that if you're a team who scored in a certain range on these metrics, you were statistically highly correlated -- I'm not a mathematician or a statistician, but whatever -- it means you're probably in this elite category and you're probably a high-performing team.
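(To make the four metrics concrete: here's a minimal sketch of how they might be computed from a list of deploy records. The record shape here is hypothetical, purely for illustration -- Sleuth's own model is more involved.)

```js
// Hypothetical deploy records, purely to illustrate the four DORA metrics.
// Times are millisecond timestamps; healthy/recoveredAt describe the outcome.
const deploys = [
  { firstCommitAt: 0, deployedAt: 4 * 3600e3, healthy: true },
  { firstCommitAt: 1 * 3600e3, deployedAt: 8 * 3600e3, healthy: false, recoveredAt: 9 * 3600e3 },
];

const days = 1; // length of the observation window
const hours = (ms) => ms / 3600e3;

const deploymentFrequency = deploys.length / days; // deploys per day
const changeLeadTime = // average hours from first commit to production
  deploys.reduce((sum, d) => sum + hours(d.deployedAt - d.firstCommitAt), 0) / deploys.length;
const failed = deploys.filter((d) => !d.healthy);
const changeFailureRate = failed.length / deploys.length; // ratio of unhealthy deploys
const meanTimeToRecovery = // average hours from a bad deploy to its recovery
  failed.reduce((sum, d) => sum + hours(d.recoveredAt - d.deployedAt), 0) / (failed.length || 1);

console.log({ deploymentFrequency, changeLeadTime, changeFailureRate, meanTimeToRecovery });
```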

JASON: Cool, yeah. So, that's -- this is the DORA report. I've seen these; they're also called the State of DevOps reports, is that correct?

DON: Yeah, there's two. That came out in 2016, and every year they do a new one. They'll focus on different things. I think the last one focused on DevOps as a platform -- the idea that an SRE team shouldn't just be handling your software in production, but should also be building tools to enable your developers to support their own software. Each one has a different focus, but it all comes from that. There was the DORA report, and then there was a book called "Accelerate," which some people might be familiar with, which was the people who did the report talking about how they came up with it. A lot of it is stats stuff, but there are a lot of good, juicy nuggets as far as learning how they work and what they learned.

JASON: This is -- let me see, I'm pulling up the book as well. I'm sharing some links in the chat. It is Nicole -- that's what I thought. Dr. Nicole Forsgren has done a ton of work in this space. Absolutely brilliant. This is kind of the collective findings. So, specifically, it finds that one of the biggest things we're looking for here is your ability to not only ship quickly, but to ship confidently and with a really fast recovery time -- that's what correlates to this. And so when you set out to build Sleuth, your idea was to make this into, like, a third-party monitoring system, is that right?

DON: Yeah. Basically, people were coming up with their own ways to monitor and track this stuff. You know, historically, when I joined, they were releasing every six months, three months, because it was behind-the-firewall software only, so there wasn't a lot of change. As you adopt cloud-type things and practices like continuous delivery and continuous deployment, all of a sudden you're changing several times a week -- some teams multiple times a day, and some teams multiple times an hour. So, the rate of change is so much higher that it's hard to know what the heck is going on, which can also lead to incidents, because people don't know what's happening. So, we found that there wasn't a lot of software that would look at that bigger picture and keep track of what's happening and the health of it, but then also give you those metrics so that at a higher level -- like the manager or the director level -- you have an idea: are we going in the right direction, or are we going in the wrong direction?

JASON: Yeah, I think that's super useful. It's something I've been thinking about a lot, actually: the way we organize our tools, the way we set up our infrastructure, and how that impacts our team culture. So, you know, the DORA report is part of what got me down this road of thinking about this in the first place. And what I like is, you know, there's the old adage that what gets measured gets managed. And when you're talking about how you make a company care about things like this -- like, if what we're saying is that we want to make sure we have a high-performing team, it's not by saying we only hire the best, or by, you know, refusing to hire anybody who has, like, less than 15 years of experience in the industry.

DON: Rock stars only. 15 years of Kube experience.

JASON: None of that, please. So, instead, what we're saying is if managers don't know how to measure this stuff, they look for other things to measure. They focus on metrics that they can control. So, if healthy teams is a metric that you want, we previously didn't really have a way to do that, right? We would just say, I don't know, a healthy team is a team that commits a lot or a healthy team is a team that has a lot of PRs or whatever.

JASON: Yes. So, what I like about this is that you've provided numbers that correlate to this research that very clearly states: this is how we do it. We are going to give you a number that shows whether or not your team is statistically correlated with high performance.

DON: Exactly. That's what managers want to do. Their job is to take a team and make it better, so they need something. Either they're going to make up something in their heads -- that team has cool people I drink with, so, therefore, I like them more -- or they're going to come up with some BS measurement like lines of code or number of commits or PRs. As you know, when you focus on that, it creates the whole wrong kind of culture and the wrong kind of behaviors.

JASON: Absolutely. Yeah.

DON: What they did with DORA is say: here are the measures to look at. I want to caveat this by saying I'm not a fan of starting with the numbers and putting goals against those numbers. I think that's a bad idea. Because any time you -- I think there is a phrase somewhere that says, when a measure becomes a target, it ceases to be a good measure.

JASON: Yes.

DON: Because what happens is people start saying, oh, deployment frequency, great -- I'm going to commit one line of code per change, and I'm gonna get that number way up. You didn't do anything, and now a change is actually more complicated because it's spread across multiple deploys. So you really have to understand their role. If you understand that they're kind of an after-the-fact way to let you know generally how things are going -- one input among many -- then I think these metrics are your best bet.

JASON: Yeah, and I like -- that's a really good point that, you know, when you create a metric, people are going to try to figure out how to game that metric. One of the things that I have tried to do as I've moved into the different rungs of management is make sure the metrics are all working in concert. You can't pick one number, because everybody is going to push that one number. If you say, okay, we want to drive, I don't know, adoption of this thing, we're not saying, okay, make a contest where everybody has to deploy one function. Okay, let's drive awareness through something like that, but the other metric we're monitoring is whether or not they stay active for a period of time -- whether or not that activity leads to deeper integration with the platform. Metrics that actually show whether or not progress was made, instead of just, hey, we made a hype-y kind of viral sharing thing that got us a huge spike of traffic and no one ever thought about us after.

DON: Yeah.

JASON: This feels like it's kind of the same thing, where we could very easily game "I'm gonna commit one line of code at a time." It's a little harder to game how quickly we can recover from an error. One of my favorite words: gestalt. In combination, we take the sum total of that and we get a clearer picture of something that is a little more difficult to game as a system.

DON: Exactly. And I think that's where they were really clever in picking the four metrics. The thing we heard circa 2005, 2008 from Facebook was "move fast and break things" -- really focusing on speed. At the time, when I joined in 2006, we were deploying once every three to six months, so, yeah, there was a lot of value in improving the speed. But in the DORA metrics, one of the other things is change failure rate -- how many of those changes fail. If you just optimize one number, you're probably gonna screw up the other number, so you kind of have to keep them together. They balance each other.

JASON: Yeah, I like that a lot. So, you know, kind of looking at this, the way that Sleuth is handling this is -- well, I guess, is it easier to talk about this or is it easier to show this? How did you approach solving this problem?

DON: What we found is that as teams get larger, you can afford to have people come in and be on a platform team to build these kinds of monitoring and tracking systems for yourself, and we had the same thing. In fact, I think they're just starting to release some of their platform stuff. It's kind of like AWS -- just to back up a little bit, AWS was created because Amazon wanted to have the best website to do things.

JASON: Mmhmm.

DON: In order to do that, they created this big platform called AWS, and eventually said, well, why don't we just release that and make it a product? I think a lot of large companies are doing something similar -- with Compass, for example. You build it for your team, just for your one company, and then say, well, everyone's building this themselves -- why not offer something that somebody could purchase? Instead of spinning up two, three devs at 100, 200K a year depending on the location, you spend some money and get this for free.

JASON: Right.

DON: Not for free -- although there is a free plan. There are some open source ones in this space as well. But the idea is to find a way to measure this stuff without having to go and have the long discussions of which metrics we should measure and how we should do it. By using an off-the-shelf system, you kind of simplify that.

JASON: The build-versus-buy conversation, too -- it can be easy to optimize for the wrong thing. Oh, we can build this in-house and save $60 a month, $100 a month, whatever the plan is. But, like, how much do you pay your devs every hour, and how many hours are they going to spend not only building this, but continuing to maintain it throughout the lifetime of the product?

DON: Yes.

JASON: And that's, like, so much more expensive than $99 a month. I feel like, you know, we talk about initial costs. I can set this up in a weekend. Sure -- you can set it up in a weekend and maintain it for a lifetime.

DON: Yep. Oh, my God, that is one of my favorite things when I get someone right out of college. The enthusiasm: I can rebuild this, it's not a big deal. And you're like, right. They just don't understand that 80% to 90% of the cost -- and I think there are even studies on this -- is maintaining the software. Writing it is actually the easy part; that's the part that doesn't take a lot. So, yeah, if you've ever been at a large company and said, I'm gonna build this tool to make my life better -- and it is better. You get accolades: I'm kicking ass. And then a year, two years, three years later, you realize, I'm having to maintain this. I'm not saying you regret doing it in the first place, but you start to understand the true cost of what you did.

JASON: It's that kind of hard-won wisdom. All of us who thought we were hot shots early in our careers built that system, and because it's so hard to maintain, it becomes "go to Jason for that, because he built that nightmare; nobody else knows how it works, so keep paying him." Oh no, what have I done?

DON: Actually, it was pretty bad. When I started in 2006 -- as I said, behind-the-firewall software only -- I was hired to bring it to the cloud. How we did billing was: when you went to our form and typed in a credit card, it sent an email to an address where a group of 20 people, which eventually grew to I think 30 people, would go through those emails, manually copy and paste the credit card number to run the credit card, and email you the license. It took a day or two or whatever. In cloud software, where billing is monthly, that clearly isn't gonna work. So before we could even start our project, I had to build what I called HAMS, a hosted account management system. The logo was, like, a pig drinking beer -- perfect, that's the name of my project. I built this system to automate billing, and I think up until maybe last year, it was still in effect. I'm sure it had been rewritten a few times or whatever. But it was kind of this thing just to get me through my one problem so I could move on.

JASON: Right.

DON: But it was such a critical thing, they had a whole team and rewrites and all kinds of stuff around it. It's good. I'm not saying I regret doing it, but holy crap, if I was the only person on call for that, that would have been my life.

JASON: It's the total cost of doing business. And it's also, you know, part of what I think gives -- when people talk about rock star programmers, part of the backlash against the rock star programmer is that they don't stick around to maintain things. They come in, they invent things, they get bored, and then they leave in 18 months or two years to go to another company and rebuild a bunch of new stuff because it's exciting. And we all get to deal with that legacy code of, like, yeah, somebody came in, they built this thing. We mostly know how it works. We don't know why we didn't buy an off-the-shelf solution, and now we staff six full-time engineers to keep that running. Ooh, it would have been cheaper to pay somebody else for this.

DON: It reinforces the whole idea of a rock star programmer, because he or she looks back and says, it took you six people to replace me? So it creates this myth of this hero person that comes in, does something, walks away, and then the people who are just no good are the ones who have to clean up the mess. That's, of course, not how it is, but --

JASON: Right.

DON: It's unfortunate.

JASON: Yeah, you know, as you said, it kind of points out one of the tricky parts of this industry: the shiny parts aren't really the valuable work a lot of the time. The valuable work is keeping the lights on.

DON: Yeah.

JASON: And the shiny part is, like, I can build a proof of concept that only mostly works very quickly, and it will look very impressive. Making that last 15% work, though -- that's why people pay for services, you know? That's why my homegrown websocket server is fine for me and absolutely unacceptable for a real business: I don't deal with the edge cases. I just apologize when all my shit breaks.

DON: Yeah, exactly. Yup.

JASON: And, you know, you can't apologize when you're Amazon. You gotta have those people that spend all the time working through that last 15% of horrible edge cases that, you know, only one client needs them, but if you don't have them for one client, they lose out on millions of dollars of revenue and, you know, it's not an acceptable tradeoff to me. For me, if my stuff breaks, I have to get embarrassed and blush on camera and, you know, there's no tradeoff for that.

DON: Yup.

JASON: Nobody's giving high fives to that engineer who inherited the 85% project and took it to 100%, but, jeez, that's hard work.

DON: What do we always say? The last 20% is 80% of the time.

JASON: Yeah. This is why I never finish anything. I get something to the part where I have to start doing the polish work, and then I'm like, oh, yeah, that's what happens when you get this far in a project. You know what would be fun? What if I just started over? (Laughter)

DON: Right.

JASON: I'm kidding. I have a new hobby now.

DON: Yeah. I got a better one. Got a better one. Yeah.

JASON: This one won't have any problems. This is going to be the one project that ever existed that doesn't have that last 20% of work.

DON: Yeah. I also want to give a little shoutout to those folx, or that person, who does that first 20% and thinks they're done. The chat was saying there are a couple of people who have been that person -- I think we all have been. But I think they provide a service. As much as it is easy to look at them and say, oh, you're creating all these messes, you're ruining things -- they're not, in the sense that they're injecting new ideas. That's why I love working with brand-new programmers: they don't have the preconceptions that this one thing is too hard or whatever. They'll sit down and say, oh, I'm excited about this, I'm doing it. When you started programming and wrote your first game, you just had this energy about you. Even though, yes, there are definitely the disadvantages we talked about, you're creating something that wasn't there in the first place -- something all the rest of us old people would look at and say, oh, yeah, I would never do that, that's too much work. You, with your maybe naivety or whatever, just jump in and do it, and many times it turns out that it potentially could turn around a company. There are many examples -- Gmail and stuff started as little side projects. Again, if everyone was responsible, none of us would have started. We would look and say, oh, God, no, I'm not touching that with a ten-foot pole.

JASON: Simon Wardley has talked about this -- the concept of different working personas. I think in the beginning a lot of us have boundless energy and enthusiasm. We assume, with all problems, well, the reason that problem's never been solved is because I haven't been involved yet, right?

DON: I love it, though. It's such a great feeling.

JASON: As you get further and further in your career, you start to drift somewhere on the spectrum. The far left of the spectrum is this idea of being an inventor, or what Simon would call a pioneer. They're the ones who like to wander out into greenfield, unknown stuff and see what they can put together, right?

DON: Yeah.

JASON: This is typically, I think, where you would put somebody classically called a rock star developer. You put them in that bucket because they are the ones who are going to dive in, make a big mess, and try stuff out. In the middle of that spectrum is somebody you would call more of a settler. They show up where the pioneers have been, and they take things from being these wild ideas that kind of work to being things that work. They actually do that work of polishing and organizing and putting in the infrastructure so that this thing actually functions. And then on the far end of the spectrum are urban planners -- people who like to take existing systems and incrementally improve them and, like, really just make them perfect and enterprise-ready. I've met people who fall in all three camps, and different blends of those camps, throughout my career, and I really, really like thinking about it that way, because it also helps me when I'm thinking about hiring. Like, if you're hiring an SRE, in a lot of cases you're looking for somebody who falls into that urban planner mindset -- somebody who really gets jazzed up about, ooh, you've got a big, messy pile of legacy code, I'm gonna make it run so smooth. You put someone like me, who falls more in the pioneer/creator camp, in as an SRE, and I'm gonna come in and make a giant mess. I would be a terrible SRE. I'd look at it and say, that sounds frustrating, I'm gonna build it over again. You know what I mean? I would cost every company so much money if they put me in as an SRE. So: thinking about hiring and how to line people up with roles. We got way off topic. We're way in the weeds here, aren't we?

DON: That's okay. I got a way to bring it back. I'm gonna go a little more in the weeds and then circle back.

JASON: Please do.

DON: One more corn field.

JASON: Yes.

DON: But one of my pet peeves is individual performance ratings. If you're a manager, you know how annoying it is to try to rate your employees for their quarterly or yearly bonuses and things. My least favorite, though, is when a company says, okay, we're gonna rate them by numbers -- which is already problematic -- and then you can only give one person on your team the top number, and two people the next number, and then the rest get the rest. I don't know if you've ever worked in a place that did that --

JASON: I have not, no.

DON: It was the worst. Because of all the things you're talking about -- how there are different people in different roles, and they can excel in those -- how do you compare them such that you can only say, well, this one person is the most valuable? The rock star is doing something totally different than the urban planner. How do you compare that? It just puts you in an impossible position, and I find it really demoralizing. Going back from that back road to deployment metrics, or team metrics -- that's what I like about DORA.

JASON: Bring it on home.

DON: Yeah, I'm bringing it home. It's looking at measuring performance, but for a team, not for an individual.

JASON: I love that. I love that.

DON: Whereas with these other measures, like per-commit, the rock star is going to win every time, because they're turning shit out nonstop. The urban planner -- "I have this bug that is going to take me five days to debug" -- is always gonna lose on that number. If you step back and say, as a team, the team is deploying this often, failing this often, able to recover this quickly, now you can start to compare between teams and use that as a way to get better.

JASON: Yeah, absolutely. And I like it, too, because it shows -- you'll also start to see, like, you know, you can find individual performance indicators. Because I think teams succeed or fail as a unit. I had somebody say the phrase "teams are immutable," and I've thought about that a lot, because each team that you bring together is a unique combination of people with unique strengths and weaknesses. When you get the right combination of people together, it's magic. The team that I have at Netlify is unbelievable. Sarah put this team together and it's incredible. So, we measure as a unit -- how much of an impact are we having as a team -- and then we also want to look for individual growth and things like that. What I like about these metrics is that they let us get the broad view, so we can reward the team for being a great team, and then use the individual metrics to find coaching opportunities. How can you get even better? How do you get to that next level? How do we get you that next promotion? Yeah, I like that. So, instead of stack-ranking your employees, you're looking at your team's level of health, and then you can find opportunities for employees to improve within that overall measure of health.

DON: For sure. Because we're all a team going toward that goal, you're gonna look at, say, change lead time -- how long it took from first commit to deployment -- and start breaking that down into buckets: this is coding time, this is review time, this is deployment time. You'll say, okay, what's the long pole here -- our deployment takes three hours; let's see if we can fix that. That's where you get individuals involved, and they can step up as you're improving those processes. What do we want to accomplish as a team, and how can we make it happen?

JASON: Yeah, I love it. Okay. I'm very sold on this as a general metric. I want this metric for my teams. So, let's learn how to actually do it. What I'm going to do is flip us over to pair-programming view. And I'm gonna start by giving a shoutout to Jordan from White Coat Captioning, who is giving us live captions today. Thank you very much, Jordan, for being with us. You can see those on the homepage of learnwithjason.dev, made possible by our sponsors: Netlify, Fauna, Auth0, and Hasura. If you want to see more from Don, Don is on Twitch. You can go and see -- let me see if my shoutout command is going to work today. Let's try it. Hey, look at it go. Okay, so, go follow Don on Twitch and get lots and lots of excellent content just like this -- regular streaming there. So, you know, lots and lots of good things to see. We're going to be talking today --

DON: I develop my product on stream usually.

JASON: That's really cool. So, that's real-world stuff, which is even more than we typically do on this show, which is more proof-of-concept-y. So, yeah. Today, we're going to be working with Sleuth. Sleuth.io is the product we've been talking about -- it gives you the metrics around deployment. And this is that DORA report we were talking about. You can go and find all of the reports down here. This is the one that you were talking about, the 2016?

DON: Yeah. That was the original one, yep.

JASON: Yeah. So, I will drop another link here.

DON: I think it's 2016. '17.

JASON: One of them. Anyways, these are all really good. The 2019 one -- I think it was this one -- is really excellent as well. So, you know, get into those. Check them out. Really, really good insights there. This is the book, "Accelerate," if you want to read the book version of what Dr. Forsgren put together. Okay. I'm ready. I want to see this in action. I'm ready to give it a try. What should my first step be if we want to set up a site?

DON: So, do you have an application you want to use or create an application and have Sleuth track its metrics?

JASON: Let's create a new application -- we talked about how many mistakes I make in the first 20% of my projects. Let's not have to suffer through those today.

DON: And continue the pattern of just creating a new project whenever you get stuck. Yes, let's do that.

JASON: Exactly. Exactly.

DON: We're enablers today.

JASON: Let's do something that's relatively straightforward to deploy. So, let's grab an 11ty template -- a static site generator; we've got some episodes in the archives if you want to see more. What we should be able to do is get a template. So, let's find -- let's see, some templates that we can quick-start on. Starter projects. There we go. Let's grab 11ty Netlify Jumpstart -- perfect score. There we go. Let's do it. And where is the -- generate a repo from this template, that's what I want. We'll put this in Learn With Jason, and we will call it sleuth-11ty-project. Make it public. There we go. All right. So, if you want to go check out that project, I just dropped it in the chat. So, we've got that project, and then what I'm gonna do is just head over to Netlify and we'll deploy this thing. So, a new site from git. Go to GitHub. Do I want to set anything up with Sleuth before I deploy this?

DON: No. Let's get something out there. I think that's how most people do it. They have something existing that they want to then go and track.

JASON: Let's do that. So, search for sleuth. There it is. Okay. We're gonna keep it on main. Gonna assume that we need that setting for some reason -- I don't know what that's for. Something to trick puppeteer. Okay. Let's deploy it.

DON: Now, you have to remind me -- I actually have used Netlify for a project before. Do you integrate with Datadog?

JASON: So, Datadog is available on enterprise plans. You can add integrations for Sentry or Rollbar or things like that manually. We have log drains set up on enterprise plans.

DON: Okay. Have you used Sentry to integrate into a product?

JASON: I've used Sentry. We did an episode a little while back. It was so pleasant, so nice to set up.

DON: Of course you did. Raul is the one who recommended I talk to you -- I'm remembering now. Raul is the marketing guy at Sentry who used to work with me back in --

JASON: Oh, nice.

DON: Yeah.

JASON: Let's see. So, we are now set. We got a site. I'll rename this. Okay. So, here's our site. Let me rename it to something that I'm actually gonna remember. So, we'll go to the site settings, and let's rename this thing. We're gonna call it sleuth-11ty-project so that I can match it to the other name. And then I can open that up. Let's get that out of here. There we go. All right. So, here's the real site that we're gonna use and manage. I am now ready, I think. We've got a project, and we can go make some edits to it and check our deploy velocity.

DON: Okay. Cool. So, if we want to -- yeah, let's go through the Sleuth startup process, and then we'll see what we want to sprinkle in after that.

JASON: Okay. So, do I want to try now for free?

DON: Yeah, try now for free.

JASON: Let's do it. I'm gonna sign up with my GitHub account. Okay. Let's call it Learn With Jason. Okay. LearnwithJason. Probably need that one. Oh, good. Everybody just give me one second here.

DON: That's a lot of orgs.

JASON: I know. I'm in everything.

DON: Makes sense, I suppose.

JASON: And we authorize. Okay. GitHub.

You hackers. You dirty hackers.

JASON: 11ty project so I don't lose it. Allow members of your team? Sure. Repository. Let's get sleuth to show up. There.

DON: You have a lot of repos. I guess that would make sense as well.

JASON: Yeah, I think at this point I'm close to 500 repos.

DON: Jeez.

JASON: So, let's see, do I want to do any of these things?

DON: No, that's okay. That's for, for example, if you're a team that has a deployment branch per environment -- staging goes to the staging branch, production goes to production, stuff like that. I think we'll just keep them all on main.

JASON: Okay. Cool. Yeah, let's do it. And then we'll keep that the same. Do I  what are  let's look at these and then you can tell me if I need any of them.

DON: Yeah. So, we're gonna use precise. Well, actually, let me take a step back. There are a number of tools in this space that are also trying to give you metrics. What they usually do is just look at the git data. Sleuth is a little bit different, because we actually use -- we require you to -- we want you to tell us when a deploy happens. So, we look at deploys. We also look at git data, but it's mainly about deploys. So, the first thing is about how you tell Sleuth about a deploy. We're going to use the precise method, meaning you send us a webhook. That way our metrics are precise -- for example, change lead time will be measured to exactly when it was deployed, not when we think it might have been, or when the PR was merged. There's also an option if you want to do a monorepo, which some companies do -- in this case, probably not. The "automatically lock deployments" one is kind of interesting. The idea here -- this is kind of a side feature. It's something I really liked and have built a number of times, and it's my product, so I threw it in here, even though it's not really metrics-related.

JASON: Sure.

DON: But basically, if you're a team that does single-file deployments -- meaning when it's merged to master, you deploy right away -- what we often did is try to tell other devs, hey, I'm going to merge to master and deploy now; please don't merge to master. We consider that locking. I built that into Sleuth, so when code gets merged to master, it will lock deployments. You know how in pull requests you have the little checks showing CI ran? Sleuth adds its own check, and the check goes red when master is locked.

JASON: Okay.

DON: Now no one else can merge their PRs. Once it goes all the way to production, it gets unlocked, the check goes green, and people can merge again. So, it's really useful for a distributed team where you're not really sure who's gonna be deploying when, but you want single-file deploys. That can be really great. Other teams don't work that way -- they do batch deployments, and it doesn't help. It totally depends on your team.
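(As an aside, a merge lock like the one Don describes can be built on GitHub's commit status API. A minimal sketch of the idea -- not Sleuth's actual implementation; the token, repo, and context name here are hypothetical:)

```js
// Sketch: flip a "deploy-lock" status on a commit via GitHub's REST API.
// GITHUB_TOKEN, my-org/my-repo, and the context name are assumptions for illustration.
const fetch = require('node-fetch');

async function setDeployLock(sha, locked) {
  const res = await fetch(`https://api.github.com/repos/my-org/my-repo/statuses/${sha}`, {
    method: 'POST',
    headers: {
      Authorization: `token ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      // A red (failure) status blocks merging when the branch requires this check.
      state: locked ? 'failure' : 'success',
      context: 'deploy-lock',
      description: locked ? 'A deploy is in flight' : 'Clear to merge',
    }),
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
}
```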

JASON: That totally makes sense. Ben and viewers, hope you're having fun on your stream today. I also saw earlier -- thank you for the sub, that's very much appreciated. Okay. So, this is all good. Slack notifications -- that's great. Collect impact. What does this mean?

DON: So, Sleuth doesn't just track when a deployment happens; it tracks how healthy that deployment is. For example, you can connect it to Sentry, and it will poll Sentry every two minutes to see how many errors there are. If those jump, it will mark the deployment as unhealthy -- which is what you need for mean time to recovery and all those things.

JASON: Very, very cool. Include project in display. Seems right. I'm ready. I'm saving and continuing.

DON: Saving and continue.

JASON: I'm gonna skip that for now -- I've done that before. For this, we would use -- we'd probably use Sentry. Let's see, I don't have it set up. We can add that later.

DON: Yeah, you can add it later.

JASON: So, get a webhook.

DON: So, now it's telling you the next step is to add that webhook in your deployment process to tell Sleuth when a deployment happens.

JASON: Okay.

DON: So, if you click on that Get Webhook, it should give you some instructions.

JASON: Nice. Okay. So, the way that we're gonna wanna do this is I can copy this --

DON: You can probably copy the second one, because that's the full curl statement. If you wanted to use curl, then use that one. And good thing it's a demo project, given the key is there.

JASON: Ah, yeah. Yep. Oh, and you can add extra stuff here.

DON: Yeah. So, if you look at that curl statement, the only thing you're going to want to replace is your_sha, if it's in a CI system.
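(The curl statement on that page looks roughly like this -- the org/project slugs and the key below are placeholders, not the real values shown on stream:)

```sh
# Placeholder values; the real URL and api_key come from Sleuth's "Get Webhook" page.
# In CI, your_sha gets replaced with the commit being deployed, e.g. $COMMIT_REF on Netlify.
curl -X POST \
  -d api_key=YOUR_API_KEY \
  -d environment=production \
  -d sha=$COMMIT_REF \
  https://app.sleuth.io/api/1/deployments/your-org/your-project/register_deploy
```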

JASON: Let's do this: gh repo clone, and then we'll get learnwithjason/sleuth-11ty-project. And then in here, we can open this up. We've got a package.json with a set of commands -- scripts. So, when we build, we can do, like, a postbuild that will run -- let's see, yeah, we can get it to run a command, or we could use, like, a Netlify build plugin or any number of things. Ultimately, though, what we want to do is a postbuild that runs this script.

DON: Assuming that postbuild only gets executed when you're deploying -- because you don't want your devs, when they're running it locally, to be triggering a bunch of deployments or something.

JASON: Fair. That's a good point.

DON: So, I don't know -- I'm not a Node person, but I know you can have other npm commands. Maybe you have a deploy command, and that actually does something.

JASON: Yeah. So, I guess what we can do is -- let's run -- how could we do this? Let's do it with a script. So, we'll create a bin folder, and then a file we'll call postdeploy.js. And then, let's see -- who remembers how to do Node scripts with a hash bang? Let's see. Node. Bin.

DON: Google to the rescue.

JASON: I know. I have to...

DON: When I joined HipChat, I was getting back into Python, having never really used it in production. I was Googling stuff like "if statement Python" -- the most basic things possible -- but it's just faster that way.

JASON: That's what we want: usr, bin, node. Now what we should be able to do is run scripts in here. Is it okay if we just, like, make some commands to make this thing work to start?

DON: Yeah.

JASON: What are you ahem-ing about? Oh, everybody in the chat was already telling us what to do while I was Googling that. (Laughter) Thank you, chat. Sorry I didn't pay attention, yet again.

DON: Blame it on the lag.
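(For reference, here's the shebang line the chat was suggesting -- the standard first line for an executable Node script:)

```js
#!/usr/bin/env node
// First line of bin/postdeploy.js, so the shell knows to run the file with Node.
```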

JASON: So, what we can do is set up, like, an API key, and we'll make that into this API key here -- which should be in our env, but because it's a demo project, we're not gonna do that today. And then the sha we're gonna have to figure out, but that comes out of -- oh, you know what? That's gonna come out of Netlify. So, we just need --

You hackers. You dirty hackers.

DON: Yeah, so, this is a Node script, right? If you have a way to shell out and run git rev-parse, assuming it's going to be run against the version that's deployed, then that's the simplest.

JASON: Okay. So, I can get -- to do that, I can use execa.

DON: Also, some build systems will have the sha as an environment variable.

JASON: That's actually what I was thinking. I'm pretty sure Netlify's gonna give this, right?

DON: It's a pretty standard thing.

JASON: So, what we'll get in the build environment is the... read-only variables. Build ID. Git metadata. And here is the commit ref. That's the one?

DON: Yeah, probably the commit ref would be my guess.

JASON: Okay. So, what I'll be able to pull out of here, then -- so, we've got SHA, and that will be process.env.COMMIT_REF. And then I'm gonna need fetch, which is gonna be node-fetch, which I'm gonna have to install. That will give us the ability to send off this post. So, then what we can do is check process.env to make sure that what we're doing is actually a deployment, because Netlify will tell us -- we can check if NETLIFY is true --

DON: Or -- does Netlify have an understanding of different environments?

JASON: It does. It has a context.

DON: Okay. Yeah, so, for example, if you match your contexts one-to-one to Sleuth environments, you can pass that context directly into Sleuth and tell Sleuth: this is a staging deploy, this is a production deploy, this is a QA deploy. Whatever.

JASON: Nice. So, maybe to start, just for the simplest case -- if it's not Netlify, we will just console.log "not a Netlify build, doing nothing." That way we can see it's actually working.

DON: That works.

JASON: If we get here, we can fetch this URL, and then we'll send in some variables. So, we're gonna send in the API key. That's basic auth, is that right?

DON: Yes. Basic auth with that header -- wait, is it basic auth?

JASON: That's just body data, right?

DON: Yeah, actually, I think it's just in the body. I think you can also put it as a basic auth header if you wanted to, but I don't think it's documented on that page.

JASON: So, that would just be api_key = API_KEY and sha = SHA.

DON: That would be a Netlify plugin idea. That's a good one.

JASON: This would actually be a great Netlify plugin that we could build that would just do this for us. But, yeah, let's try this. I'm gonna chmod +x bin/postdeploy so we can actually execute it. Then I'm going to start by just running it. So, let's bin --

DON: You're probably gonna get a 400. It's already seen that revision, but that's fine. That shows it authenticates and everything.

JASON: Permission denied? How dare you. bin/postdeploy. Why did you -- oh, I need to list the bin directory. What am I doing? Is it -l? What's the one that shows you?

DON: -l.

JASON: Trying to remember my commands and completely forgetting it.

DON: You're a Windows guy, aren't you?

JASON: I am not.

DON: Like an a+x or whatever you wanted to do.

JASON: Now we've got it. Now I can go bin/postdeploy, and it should say "not a Netlify build, doing nothing." Okay. Perfect. We can run this script. So, what I'm gonna do in here is just a postbuild -- we'll tack it on.

DON: Yeah, that works. Every time a dev runs the build locally, it will just exit out.
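(For reference, the scripts block ends up looking roughly like this -- a sketch; the exact build command comes from the starter template, so "eleventy" here is an assumption. npm runs postbuild automatically after build:)

```json
{
  "scripts": {
    "build": "eleventy",
    "postbuild": "bin/postdeploy.js"
  }
}
```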

JASON: Right. So, then if I run my build -- npm run build.

DON: Do you have Netlify running on pull requests? Because that would break this as well. Well, in the sense this wouldn't work for it. You don't want it to register deploy when it's just testing a PR.

JASON: What was that? That was cool. It generated, like, all of our social images. We get to the end, and it says "not a Netlify build, doing nothing." So, you want this to only run on production?

DON: Well, generally speaking, I've found people will have pull requests and you want to run your build on the pull requests and make sure the pull requests are okay. If Netlify is running that, then it would be registered as a deploy for production, which is probably not correct.

JASON: Okay. So, we'll just set it to check for production, and that way it will only run on production builds and not pull requests.

DON: Perfect.

JASON: That should put us in pretty good shape here. So, let's add everything. Okay. Let's git commit. Do I need to do anything before I run this to make it show up on Sleuth?

DON: No. I don't think so.

JASON: Okay.

DON: As I said, it's probably gonna return a 400. It will have already seen that revision. Which is fine.

JASON: Create a new build?

DON: It won't create a new deploy because you already have one. If you go into Sleuth, you'll see it sucked in your history and figured your last revision might be your first deploy.

JASON: So, when I push this one, Netlify's gonna build again? So, it'll create a new deploy for this commit?

DON: Yes.

JASON: Okay. So, let's commit that. We'll push. And then we can head over and watch the magic happen. So, let's get to deploys. Here it goes. And what we should see -- I didn't print anything at the end, so we shouldn't see anything here after it runs the postbuild. But hopefully we'll see it show up -- either we'll see an error if I did it wrong, or we'll see information in Sleuth.

DON: That's a true pioneer in action. No logging.

JASON: Zero tests. Zero safeguards. Ship it.

DON: It works on my machine. Good times.

JASON: See how far we get here. So, this should happen pretty fast. There we go. There are the social images. And off it goes. All right. Let's see if anything happened here. So, close that. And then we're gonna look at -- what am I looking for next? Project dashboard.

DON: Yeah.

JASON: Here's my next deploy.

DON: So, it detected that there was a change. Maybe do print out errors in case we pasted in the auth wrong -- oh, no, there's your deploy. Are you looking at the future deploy?

JASON: I think this 

DON: So, what happens is, when we detect a commit to master, we'll create a future deploy -- meaning it's not actually deployed yet, but if you wanna see what's coming when you do deploy, that's what it is. That's what we're looking at here. It looks like it didn't get registered as a deploy from our script, though, so we might need to add printing out what happened.

JASON: Okay. Let's do that. So, we'll go into our script, and then what we're gonna get back is a response. Do you know if the response is text or JSON?

DON: Off the top of my head, I want to say text -- you'd just print out, well, whatever the equivalent is, which is the body of the response.

JASON: Okay. So, let's send back res.text and see what we get. I guess we can console.log res.body and see what happens.

DON: Throw the status code in there.

JASON: Okay, yeah. So, let's console.log. We can do the res.status.

DON: That's a fancy font you've got.

JASON: Operator Mono. res.ok.

DON: I did a channel-point redemption thing where people can suggest a font and I'll switch my IDE to it. It's crazy -- the fonts that people find are nuts.

JASON: Console.log. And then we'll just pass it -- okay, so let's log both of those and see what happens. This builds fast enough that I think I'm just gonna ship it.

DON: We'll do it live.

JASON: Well, because my other option is to try to run the Netlify build locally without --

DON: Set all the environment variables.

JASON: Changing all the -- which I could do. It would be reasonably fast, and we've got, like, a local CLI for doing that stuff. It would be good. But since this is so fast, we'll let it rip and see what happens.

DON: Yeah. That's the advantage of a new project. The minute or less builds.

JASON: Yeah. Well, it's also just kind of a nice thing when you use something like 11ty -- it's just such a lightweight little builder. Status 401.

DON: Okay. So, that's an authentication problem. The API key -- however that's being sent over, it wasn't being recognized.

JASON: So, I'm probably doing something wrong.

DON: You can also send it in JSON, if that's easier.

JASON: That would be -- JSON.stringify. We can send it in as, like, api_key like this?

DON: Yeah, you can just send it with an application/json content type, with the JSON body. That works, too.

JASON: Okay. So, let's do that. It is POSTing, so that's not the problem.

DON: Actually, I've had this kind of thing trying to figure out the right command for people, because sometimes people copy and paste curl, but other times they do what you're doing and want to create their own command. I created a standalone binary Python client for people to use instead. Is it a POST? Is it the right content type? Did I escape my string correctly? Little things like that can trip you up.

JASON: Yeah. So, we do wanna post. Let's see.

DON: Do you -- with that fetch library, can you debug what's going over the wire? Like, dump what's going over the wire?

JASON: Yeah.

DON: That might be an option, just to see if things are being encoded and escaped right.

JASON: Close that so I can see what's going on. Debug responses.

DON: And it should be a post first, so I think we got that right.

JASON: So, what came out at the end here? We got back 200, success. Okay, that looks better.

DON: Hooray.
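(Putting the pieces of this exchange together, a sketch of what the finished bin/postdeploy.js looks like -- assuming node-fetch v2 and Netlify's NETLIFY, CONTEXT, and COMMIT_REF environment variables. The webhook URL and the SLEUTH_API_KEY variable are placeholders, not the values used on stream:)

```js
#!/usr/bin/env node
// bin/postdeploy.js -- tell Sleuth about a deploy after a production Netlify build.
const fetch = require('node-fetch');

async function main() {
  // Only register real production deploys, not local builds or PR previews.
  if (!process.env.NETLIFY || process.env.CONTEXT !== 'production') {
    console.log('not a Netlify production build, doing nothing');
    return;
  }

  // Placeholder URL and key; the real ones come from Sleuth's "Get Webhook" page.
  const res = await fetch(
    'https://app.sleuth.io/api/1/deployments/your-org/your-project/register_deploy',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' }, // the JSON body is what cleared the 401
      body: JSON.stringify({
        api_key: process.env.SLEUTH_API_KEY,
        sha: process.env.COMMIT_REF, // the commit Netlify just deployed
      }),
    }
  );

  // Log the result so failures show up in the Netlify deploy log.
  console.log(res.status, await res.text());
}

main();
```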

JASON: Let's go try this thing out. We're gonna get back here. We've got a healthy project. Hey, look at it go. Latest deploy. It is a code deploy. Change lead time: five minutes. Nice.

DON: If you hover over this, you can see what it was. It looks like we did our commit six minutes ago. From the last time we had a deployment, sorry. The first commit related to that was six minutes ago. Then we had several commits to try to get it working, and we finally had it successfully deploy at the end there.

JASON: Nice. Yeah, so, if we had been logging an error or something, it would show an unhealthy deploy because there would have been issues, right?

DON: Yeah, so, that can go multiple different ways. If you go to, like, the integrations tab, you can see you can connect it up to Sentry, you can connect it to Datadog, CloudWatch -- anywhere you have a source of impact, you can connect it to those types of things. There is also an API, so if you want to manually tell it a deploy is healthy or not, you can. You can change it in the UI, you can change it in Slack, and you can change it by GraphQL. Basically, lots of different ways. The reason I mention all those ways is that the end goal is good metrics -- you want a good change failure rate and mean time to recovery, so you really care about whether the system is tracking healthy or unhealthy correctly.

JASON: Mmhmm. Yeah. I mean, this is really nice, though, like, being able to see how quickly that works and getting it all plugged in. So, now we're seeing, like, here's our failure rate. So, if we wanted to hook this up with, like, unhealthy, I would need to set up Sentry, right?

DON: Or you could just manually set it if you wanted to try it out. If you go to the deployment, force change this one to be unhealthy.

JASON: Okay. So, I go into here, and I look at one of my deploys. And I can --

DON: There's a little caret right by the heart up there.

JASON: Oh, nice. Okay.

DON: There you go.

JASON: So, we'll mark it as unhealthy. Maybe. Hello?

DON: Did it jump on you?

JASON: The click isn't doing anything.

DON: Oh, refresh it maybe.

JASON: Unhealthy. Okay, yeah, there it goes. So, now we have the project dashboard, and it says deploy verification -- it's red. Okay. So then let's make some kind of a fix. We'll go in here and we'll change something. Let's make a page. How about one of... sure.

DON: Holy buckets? Where's "holy buckets" from?

JASON: Holy buckets. That is  I don't know why I started saying that, but at some point I think I wanted to swear and I said "holy buckets," and then it, you know, then I just ran with it.

DON: I like it, though. "Buckets" has the percussive consonant, so it's similar. That's the best thing about streaming live. I remember when I used to do conference speaking, I did a couple where I coded live. It was always so much more fun, but it took so much more time to prepare.

JASON: Mmhmm. The live coding at conferences is high pressure. Streaming, it's like, hey, let's just break things and see if we can recover from them and see how fast it happens.

DON: Yeah, then it's like we'll come back tomorrow and we'll do it again. Whereas a conference, it's your one shot. There's no redos.

JASON: Yeah, exactly. So now we've got -- so, we marked one unhealthy, and here's our failure rate. So, here's a bad one, right? I like it. Then we're gonna go back here and look. I think this deploy's probably already done. Almost. Ready to start.

DON: Go, baby. Go, baby. Go.

JASON: Did you hang on me?

DON: Yeah, I've had a live coding where the computer crashed. That was fun.

JASON: Let's just reboot that. Start it again.

DON: Basically, I stand up and start pantomiming.

JASON: Let's see -- Yani, he did a thing he called Hacker Typer that would allow you to mash on your keyboard and make it look like you were typing really fast.

DON: That's really clever.

JASON: Which is funny. Yeah, prerecording and showing the video of live coding is also a really good idea. I think a lot of people who are making courses now do that. They code the project and then narrate over them coding, which also takes a lot of the pressure off because, you know, all you have to do now is talk. You don't have to remember how to get the code to work.

DON: Yeah, and I guess I'm mixed on that. On one hand, definitely, as an audience member, it means you get a more consistent experience. But on the other hand, if you've ever been at a talk where somebody entertaining is live coding, it is just the pinnacle, I think. It is so much fun. They can pause and go off on a tangent or come back or whatever, but it is a very hard thing to do. I've also done it with macros, where I'll have different macros that auto-fill some things. That helped a little bit. Training wheels, I'd say -- training wheels for live coding.

JASON: Let's see. So, this is live. And so now I have my code deployments. Make the deploy healthy again, right?

DON: Mmhmm.

JASON: Good. Yep. And then when I go and look at my health: 1 of 2 unhealthy. So, this part -- how do I get it to go back up? Because we did deploy that fix.

DON: That's the change failure rate -- yeah -- I think it's because for that day, you had one unhealthy deploy, so it's kind of bucketing it by day --

JASON: Oh, I understand. Okay. 1 of 2 unhealthy.

DON: Yeah.

JASON: Oh, I see. Okay. So, then on future days it'll -- I see. And so this is kind of the average. It starts green and trends on down, right? And then it will trend back up. That makes sense.

DON: That's kind of doing it the high-level way. We actually have this thing, which I'm so tempted to turn on for you, that we're about to beta. It kind of redoes the whole metrics view to make it more front and center, give you more trends over time, give you suggestions, and all kinds of stuff.

JASON: That's nice. So, this is slick. This is really nice. And so obviously we can't do a lot of the over-time stuff, but I think I remember you saying you have a live demo for Sleuth.io itself, is that right?

DON: Yeah, if you went to app.sleuth.io/sleuth/sleuth.

JASON: It bounced me to my own thing.

DON: Or click on live demo. That is probably faster.

JASON: Do I have to log out to see it?

DON: Just go to /sleuth and that should trigger it to go back over. Or that works, too -- /sleuth.

JASON: It bounced it.

DON: Say that 10 times fast.

JASON: sleuth.io/sleuth. If you're authed, it takes you to your own stuff, which makes sense.

DON: It makes sense. We kind of put in this hack: if it's Sleuth's own org, we show it more publicly, but otherwise, of course, you don't show anyone else's stuff. This is us developing Sleuth. You can see we're working on this dashboard thing. We just changed our billing. We track three things: Sleuth will track code, but we also track our Terraform changes.

JASON: Oh, nice.

DON: Not just code changes, but also feature flags. For example, if we create a feature flag and it creates errors, we want that to show up in our stats -- it's one source of change. We have one company that does feature flagging. They use us, and they do some changes, and then they have a Slack bot -- hey, I scaled up the cluster or whatever. They have that connected into Sleuth so they can get all the metrics and track those changes in Sleuth.

JASON: Yeah. This is cool. I mean, not knowing anything about your company or, like, what your roadmap is or anything, you get a pretty good overall view -- like, okay, let's go to the application, let's see how it's going. You can see, okay, the company's healthy. You've got good lead time. You can look at previous deploys. Oh, look, multiple deploys per day -- that's pretty awesome. We can see your deploy frequency: 8 deploys a day. It takes about a day to get something into production. Really low failure rate. Mostly small PRs. That's also -- (Inaudible) Look at that. I want to know what this one was. (Laughter)

DON: The gigantic one?

JASON: Yeah.

DON: Probably founders changing stuff. We might be started by a couple of pioneer-type folx.

JASON: This is good. I mean, I love this. I think this is great. I also like this because it also shows whether you've got a really bad imbalance -- where, like, one person is the only one who is allowed to touch the code base or something like that.

DON: Yeah, I'll be honest with you, that's one we're thinking about ripping out. Remember we were talking before about team versus individual? Some of these stats tools can be used by managers to drill down into individuals. We weren't really comfortable with that. We feel stats should be a team thing, not an individual thing.

JASON: Yeah.

DON: Originally -- back on HipChat, we used a program called Get Satisfaction. It would give you points based on handling tickets, which I found really fun. It was basic gamification. I thought, do it over here to see who is contributing the most or whatever. But the more you think about it, the more you think, oh, that can be really used for evil as well. In fact, it'll probably mostly be used for evil, so in the next version, we're probably gonna pull it. Even though I like it -- when I have a week where I'm coding a lot and see my name go to the top, it's a vanity thing.

JASON: Vanity metrics.

DON: Probably not a great idea.

JASON: I wonder if there's a way to do it that's not individually targeted, but that still shows, you know -- okay, you've got how many people on this team, and how many of them deployed? So, like, percentage of the team contributing to deployments, so that you can see, like, you've got 10 people and only 1 of them deploys. That's a bad sign, right?

DON: Yeah, yeah. Those are really interesting things to know. The challenge is you have to think about how else it could be used. For us -- I consider myself, if I had to pick a name, not a manager; I would say I'm a developer, maybe more on the architect, pioneer level. Managers want to slice and dice and see my remote people versus my local people, because I'm trying to justify, you know, getting rid of remote or whatever the case may be. And we don't want --

JASON: How dare you.

DON: -- it to be used for that. What we feel is most important is that the team is doing better and the developers use this as a tool to help them get better. We don't want it to be a top-down thing.

JASON: No, that totally makes sense. I get it. This is slick, right? What I love about it is we're an hour into the show, and we spent a solid 20 minutes at the beginning of the show just talking about philosophical things. So, our time to getting up and running was basically nothing. We were tracking these insights. We're gonna get further insights as we go. And, you know, we can plug this in with Sentry. Maybe we should plug it in with Sentry so we can kind of show how to take this one step further. So, let's -- what is it -- getsentry.com? Sentry.io. That's what it was. And let's create a new project. So, I'm gonna create a project. And it is a JavaScript project, probably.

DON: The Rolling Stones joke on the Get Satisfaction thing -- I know. It ended up getting bought by another company, which is a bummer, because I wanted to use it for support for Sleuth, but I didn't. I'm terrible at pronouncing names. I probably shouldn't even try, but thanks to the person who made the Easter egg comment.

JASON: So, let's call this one sleuth11typroject.

DON: But, yeah, that makes an interesting point. We thought about: what if you could see your stats, but your manager couldn't see your stats?

JASON: Ooh, that's an interesting idea.

DON: Then you get the best of both worlds. I want to see how I'm doing, if I'm taking too long, if I'm improving. I don't want my manager to use it in a quarterly or weekly performance rating. That's something we haven't done, but that's something -- I like that idea.

JASON: Okay. So, what I'm gonna do here is --

DON: Nicky, thank you. Nicky.

JASON: -- something completely unacceptable, where I'm going to just use this base and drop in a script tag. Everybody buckle up, because I'm about to make bad choices.

DON: Where is that hacker overlay thing when you need it?

JASON: Oh, wait, I have to do this in something that's gonna get -- this is gonna have to get transpiled, isn't it?

DON: Yeah, Jacob makes a good point about "let's do a one-on-one and sit down and look at your stats." For sure. But if you're in one of those types of places, they're probably doing key loggers and whatever else. I would say those are not good -- you know, everything can be misused, I suppose, but you just want to make it a little harder.

JASON: Okay. So, I'm going to use their CDN because I don't want to have to figure out how to transpile some things, and instead I think I had -- pull this back over. And then I need to go back here. Where it's gonna show me my own thing? Yes. So, just in case I need it, I'm going to drop this in here. I don't think any of it changes, but let's just go ahead and grab this. Okay. So, I have the name. Release -- eh, I'm not gonna worry about that. Integrations? new Integrations.BrowserTracing. tracesSampleRate. Okay. So, we're just gonna leave that part there and ship it. Okay. So, we're gonna put Sentry in place. This should, hopefully, put it in place for us. So, let's give it a shot. Add all. Then we're gonna git commit "add Sentry". Okay. Then let's push. Then I'm gonna go back to Sleuth, and let's see if we can add integrations --
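
For anyone following along, the CDN setup Jason is describing looks roughly like this. It's a sketch, not the exact code from the stream: the bundle version and DSN are placeholders you'd replace with the values from your own Sentry project.

```html
<!-- Sentry's docs steer you toward the npm packages, but the CDN bundle
     works for a quick drop-in. Version and DSN below are placeholders. -->
<script
  src="https://browser.sentry-cdn.com/6.5.1/bundle.tracing.min.js"
  crossorigin="anonymous"
></script>
<script>
  Sentry.init({
    dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
    // With the CDN bundle, integrations hang off the global Sentry
    // object -- the gotcha Jason runs into a few minutes later.
    integrations: [new Sentry.Integrations.BrowserTracing()],
    tracesSampleRate: 1.0, // sample 100% of transactions; tune this down in production
  });
</script>
```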

DON: You did this as a new Sentry account, right?

JASON: No.

DON: Then you might want to go off screen, because you have to paste in your personal access token.

JASON: Need an auth token. We're gonna call this -- is the description the label? --

DON: Make sure it has the scopes that we need.

JASON: Okay. So, I need event:read, member:read. So, let's just create this auth token. I think it will let me create it before we -- so, if I go to settings. Security and privacy, maybe. No. Auth? No. Integrations? No. Okay.

DON: It's under your personal settings. So, if you go to your account, I think there's actually a link -- if you go back to Sleuth, we created a link -- oh, no, we can't do a link, because the org is in the slug.

JASON: Sentry.io, settings, account. Hey, it took me right there. I'm going to make sure I have the right things. event:read, member:read, org:read. event:read, member:read, org:read. Then we need project:read, project:releases, team:read. project:read, project:releases, team:read. And I don't need event:admin, right?
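
One hedged way to sanity-check a token like this before pasting it into Sleuth is to hit Sentry's REST API with it; listing projects only needs the project:read scope from the list above.

```js
// Sanity-check the auth token against Sentry's API.
// GET /api/0/projects/ lists the projects the token can see.
// The token is read from an env var so it never lands on screen or in git.
fetch("https://sentry.io/api/0/projects/", {
  headers: { Authorization: `Bearer ${process.env.SENTRY_AUTH_TOKEN}` },
})
  .then((res) => {
    if (!res.ok) throw new Error(`Token rejected: ${res.status}`);
    return res.json();
  })
  .then((projects) => console.log(projects.map((p) => p.slug)));
```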

DON: Nope.

JASON: Okay. Perfect.

DON: Basically, we'll read the number of errors for a project, but then we also go into Sentry and create a release, because Sentry has a feature where, if it knows about your releases, it can do smart stuff. Sleuth is kind of like -- you ever use Segment? Yeah.

JASON: Yeah, I know Segment.

DON: The idea with Segment is: I'm gonna send my analytics to one place, and it sends them to all the places I want them to go. You can use Sleuth that way. It will register the deployments in Datadog and Sentry and all the places.
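
To make the fan-out concrete: it starts with you telling Sleuth that a deploy happened, typically from a CI step. A rough sketch of that call follows; the endpoint shape and parameter names are assumptions pieced together from how Sleuth's deploy registration was described around this time, so check the current docs before relying on them.

```js
// Hypothetical sketch: register a deploy with Sleuth so it can fan the
// event out to Sentry, Datadog, and friends. Parameter names and the
// endpoint path are assumptions -- verify against Sleuth's docs.
const params = new URLSearchParams({
  api_key: process.env.SLEUTH_API_KEY, // keep this out of source control
  environment: "production",
  sha: process.env.COMMIT_SHA, // the commit that just shipped
});

fetch(
  "https://app.sleuth.io/api/1/deployments/learnwithjason/sleuth11typroject/register_deploy",
  { method: "POST", body: params }
)
  .then((res) => res.text())
  .then((body) => console.log("Sleuth says:", body));
```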

JASON: Okay. So, I added that token. I'm not gonna pull that back up because it's visible, and then I have this -- this is now enabled. So, do I click this add impact button?

DON: Make it so.

JASON: Here we go. And we're gonna call this Sentry, right?

DON: Sure.

JASON: Okay. Test connection. Average value, 0. That would make sense. No errors yet.

DON: Yep.

JASON: So, I'm saving. And now if I go to this project I should, I think, like, can I just create some errors? I guess I would need to reload it. Let's reload it and make sure that it does -- ah, look, we're already getting errors, because we don't have whatever we need from Sentry to make this thing work. (Laughter) So, that's not gonna work, actually. It's not gonna -- it's not gonna track. So, let's -- oh, no, there's my token. I'll delete that later. Okay. So, in here, is it tracking that thing? No, that didn't work at all. So, let's go over --

You hackers. You dirty hackers.

JASON: Docs. Then we've got the start setup. Is that what we need? That's not it. Setup. Nope.

DON: While you're doing that, I wanted to reply to surly, who was talking about creating a thing to game that metric -- deploys -- by creating commits. If you were creating commits even on your day off, and that automated script also did deployments -- which is what the metric is actually tracking, the deployments, not the number of commits -- that would be pretty bold on your part, I would say. If you had an automated script doing deployments.

JASON: Is there a general settings -- auto resolve. Now that I've lost my -- now that I've lost my pieces, I'm not sure how to set it back up. Instrumentation. Is this right?

DON: But I think it kind of reinforces why it's important to track the actual deployments and not pretend that a commit is a deployment or something. Because you just won't get useful stats and it'll be much easier to game.

JASON: Okay. So, loading its SDK. If you must use the CDN, the most important thing to note is that Sentry.Integrations -- oh, that's what I screwed up. So, it's Sentry.Integrations instead of just Integrations. No, that's fine. We can make that work. Oh, and look what I can do. I made a mistake, so I can come in here to my deployments and I can say: here, womp, womp, I broke it. Okay. So, let's go fix it. And then go back to my base. We're gonna fix this. Okay. Okay. We'll push that straight to main. No checks. Ship it.
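
The fix, for anyone who hits the same wall: the npm-oriented docs show an imported Integrations object, but the CDN bundle exposes everything on the global. A minimal corrected init, with a placeholder DSN:

```js
// npm/ESM style (what most docs show):
//   import { Integrations } from "@sentry/tracing";
//   integrations: [new Integrations.BrowserTracing()]
//
// CDN-bundle style (what works here) -- note the Sentry. prefix:
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder
  integrations: [new Sentry.Integrations.BrowserTracing()],
  tracesSampleRate: 1.0,
});
```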

DON: Where's the Bill O'Reilly GIF when you need it?

JASON: We'll do it live. So, let's get this going, and then we'll come back out here and make sure that I didn't break it worse. Let's see. Yeah, I mean, building -- deploying on Netlify with a build hook, 100%, yeah, you can do all that stuff. Okay. So, let's make sure this worked. Don't you lie to me. It is a constructor. That's what it just told me. Well, now their stuff is broken, right? This is no longer my fault.

DON: I don't know.

JASON: I don't know either.

DON: We use Sentry, but --

JASON: This is probably my fault for using, like -- this is where I probably should have done, like, a Gatsby or a Next site that has an npm setup. Because everything in here is like: don't use the CDN. Like, don't. Just don't. And I'm doing it anyway. So, this is -- this is definitely me doing this incorrectly. But maybe what I can do --

DON: What you could --

JASON: Oh, I did the wrong thing -- that's why it didn't work. Okay. So, I need to get the right code. Right? Man. Okay. So, let's just -- let's -- so, let's just assume this would totally work if I was not doing it the hardest possible way.

DON: While doing it live.

JASON: Yeah, exactly.

DON: Nice.

JASON: So, I'm making some mistakes here. That's -- that's really where we're at on this. So, Sentry might have been a little ambitious for a project where we're gonna do it the way we're doing it -- Sentry is definitely like, don't do it this way, and I'm like, no, I'm gonna do what I want. Let's pull that out. So, let's git commit.

DON: Did you want to do the GraphQL thing quickly?

JASON: Yeah, let's do it.

DON: Because I think we're getting close to the end of time, right?

JASON: Yes.

DON: One of the interesting aspects of Sleuth is that all of our UI is driven from GraphQL, and so you can query GraphQL yourself and do whatever you want. So, what we originally talked about doing was --

JASON: Oh, we got so far off topic. I'm sorry.

DON: That's okay. That's okay. But we were talking about doing, like, a Twitch overlay where you could see your metrics on stream, which meant it would be a page that would load. How would the page get data? It would get it from the GraphQL API.

JASON: Right. If we wanted to -- let's just query the API a little bit. I think that'll be good, and we can push a change and see it come in. So, how do I get to this API if I want to get there?

DON: /graphql.

JASON: Oh.

DON: So, app.sleuth.io/graphql.

JASON: So, I'm assuming I'm logged in. Look at project. And then I probably need projectSlug.

DON: That's gonna be the name inside of the URL. I don't remember if your project name had spaces. We do slugify --

JASON: Yeah, so I called this sleuth11typroject. And then in here, we can get the name. And let's just make sure -- nope, that's not it. orgSlug -- oh -- orgSlug.

DON: You'll need your org slug as well. I think that was learnwithjason.

JASON: Yep. Okay. Here we go. So now we're getting our project. And so the name is kind of redundant because we knew that to begin with. But let's get some other data here.
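
The query as built to this point looks roughly like the sketch below. The argument names are transcribed from the stream, so treat them as approximate and lean on the GraphiQL docs panel for the authoritative schema; how Sleuth authenticates requests made outside the logged-in GraphiQL session is also an assumption to verify.

```js
// Rough reconstruction of the query typed into app.sleuth.io/graphql.
const query = `
  query {
    project(orgSlug: "learnwithjason", projectSlug: "sleuth11typroject") {
      name
    }
  }
`;

// In GraphiQL the browser session handles auth; from code you'd add
// whatever credentials Sleuth expects (left out here as an assumption).
fetch("https://app.sleuth.io/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
})
  .then((res) => res.json())
  .then(({ data }) => console.log(data.project.name));
```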

DON: And actually, I think there is a top-level field if you want to get the data fed into the charts. If you go and click on Docs in the top-right, you see kind of a list of -- click on Query. All the different fields you have.

JASON: Chart data.

DON: Chart data. There is also metric recap, I think, which, if you just wanted to get the actual number itself --

JASON: Okay. Yeah, let's do that.

DON: And not get all the data points because you're not trying to chart it or something.

JASON: So, metricRecap. And I'm gonna do the orgSlug, and that is gonna be learnwithjason. The projectSlug is -- I should have kept this -- sleuth11typroject. Then do I need other things? environmentSlug? startDate, endDate. Can I leave those blank and they default? No, they look required.

DON: Let's find out.

JASON: Let's see. Are you gonna yell at me? Number of deploys. Oh, might need to give it a timeframe. environmentSlug. Okay. So, environmentSlug is gonna be production. And then we needed what else? We needed -- oops. The startDate. Oh, what's the format for the date?

DON: It's a string, the ISO format.

JASON: Let's go 2021-06-08. Hopefully it doesn't make me do a complete ISO string.

DON: Find out. This will be a discovery process for you and me both.

JASON: See if it runs. deploymentSlugs.

DON: That would be the name of the repository that you put in.

JASON: And that is metricRecap. Slugs is an array. Let's see. Look in here. deploymentSlugs is sleuth11typroject, like this?

DON: Waiting for my Twitch to catch up -- yes.

JASON: Okay.

DON: So, whatever's up in the URL. Since you did it with dashes, it should just be verbatim.

JASON: Number of deploys: 5. Let's look at some other data. Average lead time. Let's get the number of deploys per day. Failure rate percent. Beautiful. This is great. Like, we can see a lot of information here very quickly, and, like, if I was gonna build a dashboard, I can plug this into an existing dashboard, or I can build my own really quickly if we had more time, which unfortunately we don't, because I screwed around. We could stand up a little thing that would actually show the deployment health right on stream. Which is very cool. Like, there's -- so, there's a lot of potential here. A lot of really cool stuff that we can do with this tool to give us a great view into our team health. And, again, you know, with me spending quite a bit of the time just kind of jawing away here, we were still able to get a ton done. You know, we -- even without the -- like, once we get the Sentry stuff fixed, that'll give us even more insights, but this is, you know, this is already great. Like, we've already got such a good insight into what's going on. To see, you know, here's a healthy deploy, here's an unhealthy deploy. That's really nice.
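
Pieced together from the conversation, the finished query looked something like this. Every name below is a best guess from what was read aloud on stream, so double-check each field against the schema docs before using it.

```js
// Best-guess reconstruction of the metricRecap query; POST it the same
// way as the project query sketched earlier. The endDate is a placeholder.
const query = `
  query {
    metricRecap(
      orgSlug: "learnwithjason"
      projectSlug: "sleuth11typroject"
      environmentSlug: "production"
      startDate: "2021-06-08"
      endDate: "2021-06-15"
      deploymentSlugs: ["sleuth11typroject"]
    ) {
      numDeploys
      avgLeadTime
      deploysPerDay
      failureRatePercent
    }
  }
`;

console.log(query); // hand this to the fetch() call from the earlier sketch
```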

DON: Yeah. That's the idea. One thing that we found, that we're putting into the next version, is a weekly report. We found many team leads want to know: how was the last week? What were some problem areas or whatever. Each team is different. That's the advantage of going to GraphQL: if your team wanted to do it every two weeks, or whichever of these metrics you care about -- however you want to slice and dice it -- because you're a developer, you can go straight to the GraphQL API and get the raw data that's already been calculated. Like, for example, when we go to Sentry, what's actually happening is every two minutes we read an error rate from Sentry and throw that into a machine learning algorithm to figure out what the anomalies are. If your production has 400 errors a minute and that's how you roll, that's fine. We're not judging you. If it goes to 800, that's an anomaly. We mark it for you.
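
Sleuth's actual anomaly model is machine learning and isn't public, but the flavor of the idea is easy to sketch: an error rate is only interesting relative to your own recent baseline.

```js
// Toy illustration only -- not Sleuth's algorithm. Flag a sample as an
// anomaly when it sits more than three standard deviations above the
// mean of the recent history.
function isAnomaly(history, sample) {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((sum, x) => sum + (x - mean) ** 2, 0) / history.length;
  return sample > mean + 3 * Math.sqrt(variance);
}

const recent = [390, 410, 400, 405, 395]; // ~400 errors/min is "how you roll"
console.log(isAnomaly(recent, 420)); // false -- within normal wobble
console.log(isAnomaly(recent, 800)); // true -- mark it
```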

JASON: Yeah. That's good too. One of the things I noticed is a lot of times people get obsessed with the absolute numbers, but what really matters is the relative numbers. I know when I was at IBM, for example, we had some code bases that were 25% tested when we looked at our code coverage. So, you know, somebody would start banging the table: we've got to get to 100% code coverage. And, first of all -- whether code coverage is even an important metric notwithstanding -- what we started looking at was: okay, well, let's look at the relative numbers. Can we get from 25 to 26% and make sure we don't go down to 24%? So, we started tracking it that way: is our coverage getting better or worse over time, instead of saying we have to get to this metric no matter what, which becomes arbitrary, and, again, that gets gamified pretty fast, I think.

DON: It gets gamified, and it also makes you not want to start. You're never gonna get the IBM code base to 100% without shutting the dev team down for six months. Why even bother if we can't hit the numbers we want? But that's not what matters. What matters is that you're getting better, not that you attain perfection. As I'm sure you would agree, there is no such thing as perfection in software.
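
The ratchet Jason describes is simple to wire into CI. A minimal sketch, assuming a hypothetical coverage-baseline.json committed to the repo and a coverage percentage handed in from your test runner:

```js
// Fail the build only when coverage drops below the recorded floor, and
// raise the floor whenever coverage improves. The file name and inputs
// are assumptions for illustration.
const fs = require("fs");

const BASELINE_FILE = "coverage-baseline.json";

function checkCoverageRatchet(currentPercent) {
  const { baseline } = JSON.parse(fs.readFileSync(BASELINE_FILE, "utf8"));

  if (currentPercent < baseline) {
    throw new Error(
      `Coverage dropped: ${currentPercent}% is below the ${baseline}% floor`
    );
  }
  if (currentPercent > baseline) {
    // Ratchet up: getting from 25% to 26% makes 26% the new floor.
    fs.writeFileSync(
      BASELINE_FILE,
      JSON.stringify({ baseline: currentPercent })
    );
  }
}

checkCoverageRatchet(26); // e.g., the number your test runner reports
```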

JASON: Right. Absolutely. Well, so, Don, this is really -- I mean, this is great work. I love this product. I think it's gonna be super helpful. I hope that, you know, everybody watching in the chat, I hope that you're also excited about this. I can absolutely see myself plugging this in to team stuff. So, I'm really excited. So, if somebody wants to learn more, I'm gonna drop Sleuth back in the chat here. Where else should someone go if they want to keep up? Let's send them to @mrdonbrown in the chat.

DON: That's one place. During the pandemic or whatever, I've been learning about video production, making videos on YouTube. Detective-themed -- different themed videos --

JASON: Sleuth or --

DON: -- to communicate an idea.

JASON: What's the YouTube account?

DON: youtube.com/c/sleuthtv.

JASON: No --

DON: That's it.

JASON: Okay. There's the link.

DON: So, yeah, we try to do Sleuth stuff, but kind of in a little bit more interesting-type format.

JASON: The video production is also gorgeous on these. I'm very jealous of your studio.

DON: I appreciate it. You know what I mean -- it's such a -- it's a deep, dark hole that once you go down, you just keep going down.

JASON: For real.

DON: Yep.

JASON: All right. Well, so, with that, you know, make sure you go and check out -- like, let's see, we dropped the Sleuth link, we dropped the Twitch link. Make sure you head over and, you know, check out -- again, we've had Jordan with us all day doing live captioning from White Coat Captioning. That's made possible through our sponsors: Netlify, Fauna, Auth0 and Hasura, all of them kicking in to make the show accessible to more people. Means a lot to me. I hope it means a lot to you. That's all available on the homepage. While you check out the site, you can look at the schedule and see what's coming up. We've got some really good stuff coming: better experiences with screen readers with Ben Myers, SemanticsDev, who just raided today. We're going to have Sarah Dayan to talk about JavaScript autocomplete. Someone's going to teach us about FlutterFlow. I know nothing about Flutter. I know very little about Firebase. We're gonna learn about Rust, Strapi. So many good things coming. Add them to your Google calendar so you see them coming up. It doesn't add notifications or anything, it just lets you know they're happening. Make sure you go and check that out. Don, thank you so much for taking some time to hang out with us today. This is an absolute blast.

DON: It's been so much fun. I really enjoyed it.

JASON: I appreciate you taking the time. I imagine we'll find fun things to work on in the future. Chat, as always, thank you so much for hanging out. Make sure you stay tuned. We're gonna go find somebody to raid. We will see you next time.

Closed captioning and more are made possible by our sponsors: