Sleuth is mission control software for teams doing Continuous Delivery. It provides centralized visibility into software delivery progress and performance, plus automation that helps developers make frequent deploys easier and less stressful.
Sleuth's metrics tracker gives managers an accurate and ongoing picture of their project's delivery performance as measured by Accelerate metrics.
Sleuth's deployment tracker and automation tools help developers own, coordinate, verify, and streamline deploys, and ultimately improve on the metrics.
Min 0:30 - Deployment frequency is NOT "how long it takes to get a change to production". Deployment frequency = how often an organization successfully releases to production.
Min 0:37 - Change lead time is not "from when the developer starts working on it all the way to when it gets into production"; that is called cycle time. "Lead time refers to the amount of time between when an order is placed and when it is delivered; cycle time refers only to the amount of time when actual work is done to complete an order."
Many thanks for this video! About MTTR: when should we consider a hotfix instead of a revert? Is it only about the nature of the failure? Should we treat production as a "cumulative stack of changes" and avoid reverts even for a critical issue?
The rule of thumb I use is whatever gives the best customer experience. If you are deploying frequently and can push out a fix quickly, then rolling forward is usually the right choice. However, if your deployment process is long and you can quickly jump back to the previous version, start there, then roll the fix out ASAP.

Something to remember with rollbacks is one-way changes like database migrations. Sometimes a deployment contains a database migration that only goes in one direction and can't be reverted, so if your service does that, rolling forward may be your only option. There are ways to structure and roll out database changes so that they are seamless and can even be reverted (old code running against the new schema), but they carry a cost in complexity, both in code and in deployment process. That said, I'm usually willing to make that tradeoff.
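The "old code on a new db" idea above is often done as an additive, expand-style migration. Here is a minimal sketch using an in-memory SQLite table (the table and column names are hypothetical, and this is not tied to any particular migration tool):

```python
import sqlite3

# Expand-style migration: the new column is additive and nullable, so the
# previous code version (which never references it) keeps working against
# the migrated schema, and rolling back the code is safe.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('ada'), ('grace')")

# Expand step: add the column without dropping or renaming anything.
db.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Backfill in a follow-up job, not as part of the same deploy.
db.execute("UPDATE users SET display_name = name WHERE display_name IS NULL")

# The old code's query is untouched by the migration.
rows = db.execute("SELECT id, name FROM users").fetchall()
print(rows)  # [(1, 'ada'), (2, 'grace')]
```

The "contract" step (dropping the old column) would only happen in a later deploy, after no running code version depends on it anymore.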
On the business value point: it is not the metric that corrupts the system, it is the unqualified managers who are not fit to lead engineers. You need someone with engineering knowledge to manage engineers, or you end up with the usual bloated companies where every problem requires more hires, small features take forever, and politics is the real focus instead of business value.
Lines of code mean nothing, but that said, if you spend 6 months and all you contributed is 5 lines, then something is wrong either with the work environment or with you.
I agree with your points on lines of code, but I think it is more orthogonal to the general concept of measuring developer productivity. Can you measure the effort, activity, or even the output of developers? Sure, but does any one number accurately represent the "productivity" of the developer? I'd argue there is no such thing or even collection of metrics to "measure developer productivity" accurately and completely. To your point, however, there are indeed insights to be gained by objectively looking at any single metric.
No way to measure an individual developer? That is NOT true. There are clearly HUGE differences in individual ability, creativity, passion, learning ability, IQ, etc. People who don't want to be measured are the ones who are insecure about their contribution and want to hide it in the team. Managers are a problem when they have no idea what the developers do: non-technical people who call themselves leadership but contribute nothing except fake process that wastes time and fake ceremonies that give PC optics instead of really useful activities.
Big thanks to Nathen Harvey for joining me today! So many insights and great discussions. If you think DORA is just about metrics, you are missing so much!
I like how you had to emphasise that it is "not about the metrics, but about the developers". Obviously it's just another way to micromanage developers and get micro-level insights, so this message had to be stressed, kind of like propaganda brainwashing. Definitely not a sincere product. 100% agreed with @JonathanCrossland and his comment: it is not about developers, but about the desire to do more, quicker, which in the long term is a recipe for disaster.
In my experience it's the opposite. We say it's impossible to generate duplicate UUIDs, when what we mean is you'd have to generate a billion per second for about 85 years to have a 50% chance of a collision occurring.
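The ballpark figure above can be checked with the birthday-bound approximation for UUIDv4's 122 random bits. A minimal sketch (the one-billion-per-second rate is the commenter's hypothetical):

```python
import math

RANDOM_BITS = 122        # UUIDv4 carries 122 random bits
N = 2 ** RANDOM_BITS     # size of the ID space
RATE = 1e9               # hypothetical: one billion UUIDs generated per second

# Birthday bound: number of draws needed for a ~50% collision probability
# is approximately sqrt(2 * N * ln 2).
n_for_half = math.sqrt(2 * N * math.log(2))

years = n_for_half / RATE / (365.25 * 24 * 3600)
print(f"{years:.0f} years")  # roughly 86 years at 1e9 UUIDs/sec
```

That lands right around the 85-year figure in the comment, so the claim checks out.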
Maybe "Habit Teacher" could be a headline that sparks some understanding and fosters a conversation about what we do. You know I love your videos, but just to reiterate: the distance between the camera and you in these videos is the distance you should keep 😆
Both actually. We use the CircleCI tool to automatically split tests, based on execution time, into batches. This allows us to run all the tests within 5 minutes or so.
The CircleCI workflow is implemented using an approval step, but we use Sleuth Actions to automatically approve it after 5 minutes of healthy status on staging. So in practice it is completely automated, but what is nice about the approval is that it gives us an easy way to block the deployment, or to manually override it when staging's unhealthy status is a false alarm.
There are so many bad assumptions here. These metrics do not show anything to developers: every time you sample them the context is different (different work, task, code, requirements, etc.). "Ship more and smaller" is wrong; "ship well and when necessary" is more correct. Small changes can easily affect a lot of things: the assumption here is that a small change is small, but sometimes a small change is huge. It also has nothing to do with being quicker. Being fast is nonsense for developers; it is a business desire, and we should stop trying to apply business desires to developers as if they were a positive. Change failure rate is a bad metric because the nature of a failure always changes: every failure is different, both in how you measure it and in how it plays out.
I think both things can be true. The State of DevOps research, surveying 30k+ companies, showed that deploying frequently strongly correlated with organizations achieving their business goals, but as you correctly point out, at the micro level it is a lot more complex. I noticed you have a YouTube dev channel; would you be up for a collab/debate/discussion/whatever? I'd love to chat further :)
Context matters here. The goal is to be capable of releasing software on demand, which typically means releasing at the tempo of the business value proposition (features and bug fixes). The DORA metrics align with this concept; just balance them against the deployment cadence needed to achieve business value.
@donbrown573 This is the basic correlation/causation cargo-cult fallacy. Just because successful people disproportionately own BMWs does not mean owning a BMW will make you successful.
I love the more professional look with this one. Love the thumbnail too. The footage really adds to the video in a good way. Short but sweet. Great job with this one. For next time, your lighting was a little weird, but that's totally fine. Good job!
This is why I don't think you can take the metrics too seriously. They are indicators to incentivize good behaviors, but at the end of the day, any metric can be gamed. That said, Sleuth will soon add the ability to define change lead time as starting at issue events, such as moving a ticket to "In Progress", so that may be useful for some customers.
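To make the two starting points concrete, here is a minimal sketch comparing commit-based change lead time with an issue-event variant like the one described above. The timestamps and variable names are hypothetical, not Sleuth's API:

```python
from datetime import datetime, timezone

# Hypothetical event timestamps for one change.
in_progress_at = datetime(2023, 5, 1, 9, 0, tzinfo=timezone.utc)   # ticket moved to "In Progress"
first_commit_at = datetime(2023, 5, 1, 14, 0, tzinfo=timezone.utc)  # first commit on the change
deployed_at = datetime(2023, 5, 3, 11, 0, tzinfo=timezone.utc)      # change reached production

# Classic DORA change lead time: first commit -> production.
commit_lead_time = deployed_at - first_commit_at
# Issue-based variant: "In Progress" -> production (closer to cycle time).
issue_lead_time = deployed_at - in_progress_at

print(commit_lead_time)  # 1 day, 21:00:00
print(issue_lead_time)   # 2 days, 2:00:00
```

The issue-based number is always at least as large, since it starts the clock before any code exists, which is why picking the start event matters when comparing teams.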