Knowing which delivery metrics to measure and optimise in a SaaS business is hard work. My intuitive attempts to find them over the last 20 years never led me to a point I was happy with – many of the things I have tried ended up abandoned due to complexity (hard to measure) or just not being that valuable in retrospect.
Naturally, my excitement was already high when reading Accelerate: Building and Scaling High Performing Technology Organizations… and it went off-the-dial when I came across an excellent set of metrics accompanied by an in-depth explanation. This book is a well-executed exploration of the data coming out of the State of DevOps report.
So what did I take away from this?
Avoid metrics based on team or individual outputs
Any metrics based on productivity or outputs are likely to be unhelpful towards the goals of a SaaS organisation. At best they focus on subparts of the system, such as an individual or a single team; at worst they can be gamed, creating ugly side-effects – try imagining what a commits-per-day or bugs-per-developer metric might do to team dynamics.
What about Velocity, it’s agile!
Velocity is a good measure for capacity planning as well as team awareness and growth. It is not a good team productivity measure, and it gets worse when used in team-to-team comparisons. When misused, velocity metrics are likely to be gamed, losing any value they had for capacity planning.
Velocity is a measure local to a team because the team contexts and constraints are always different.
Focus on global outcomes
Your metrics should focus on global system outcomes – those that are best influenced by all parts of the organisational system working well together.
An example of conflicting local metrics
As an example of a broken system, imagine the hosting operations team were focussed on application up-time as their primary metric, while the product development team's primary metric was feature output. The most likely outcome is low-quality code shipped into production at a fast rate – everyone loses here. While the product dev team might improve their metric, they lose motivation shipping low-quality code, and operations are angry with the dev team for their tanking metric (and for getting support calls in the night). The big losers are the customers and, ultimately, business revenue. Both these metrics feel logical at the local team level, but the conflict at the system level is perilous.
Simple system metrics that work
Metrics should never be viewed alone; they should always be viewed in context. The following four metrics make a solid starting point for considering your SaaS software delivery ecosystem as a whole.
1) Delivery lead time
There are many ways of measuring this metric, and it can often be organisation specific.
A good starting point for thinking about this measurement is from the time that the development team start work through to the time the feature gets deployed into production.
Delivery lead time is a good measure of system throughput. As a side note, be aware that poor requirements could negatively affect this metric.
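As a minimal sketch of how this could be computed, assuming you can export work-start and production-deploy timestamps from your ticketing and deployment tools (the data below is entirely hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical (started, deployed) timestamp pairs exported from a
# ticketing system and a deployment log.
features = [
    ("2024-03-01T09:00", "2024-03-04T16:00"),
    ("2024-03-02T10:00", "2024-03-09T11:00"),
    ("2024-03-05T09:30", "2024-03-06T17:00"),
]

def lead_time_days(started: str, deployed: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(deployed, fmt) - datetime.strptime(started, fmt)
    return delta.total_seconds() / 86400  # seconds in a day

lead_times = [lead_time_days(s, d) for s, d in features]
# The median is less skewed by the occasional long-running feature
# than the mean, so it is often the more honest summary.
print(f"median lead time: {median(lead_times):.1f} days")
```

Tracking the median (or a percentile) over time, rather than a single snapshot, is what makes the trend visible.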
2) Deployment frequency
How many deployments is the team doing? If you subscribe to modern DevOps thinking, this is a good one. A higher number of deployments usually correlates well with support responsiveness, product innovation, and quality.
Deployment frequency is a proxy for batch size, which is often hard to measure. Small batch sizes are known to achieve better flow, improving feedback loops to mitigate risk while also increasing levels of experimentation and motivation. See The Principles of Product Development Flow by Donald G. Reinertsen.
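A rough sketch of counting deployment frequency, assuming your CI/CD history can be exported as a list of deployment dates (the dates here are made up). Bucketing by week shows the trend rather than a single number:

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment dates pulled from CI/CD history.
deploys = [date(2024, 3, d) for d in (1, 1, 4, 5, 5, 6, 11, 12, 12, 13, 14)]

# Bucket by ISO week number so the cadence is visible week to week.
per_week = Counter(d.isocalendar().week for d in deploys)
for week, count in sorted(per_week.items()):
    print(f"week {week}: {count} deploys")
```

A rising weekly count over a quarter is a far stronger signal than any one week's figure.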
3) Time to restore service
The time to restore service is the average time it takes to get things back to normal when they go wrong. It's a great measure of internal support responsiveness and can help identify system issues such as resource over-utilisation within teams impeding flow, internal communication issues, lack of production telemetry, and ineffective error monitoring.
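A minimal sketch of the calculation, assuming your incident tracker records when an outage was detected and when service was restored (the incident records below are hypothetical):

```python
from datetime import datetime

# Hypothetical incident records: (detected, restored) timestamps.
incidents = [
    ("2024-03-03 02:15", "2024-03-03 03:05"),
    ("2024-03-10 14:00", "2024-03-10 14:20"),
    ("2024-03-18 22:40", "2024-03-19 01:10"),
]

fmt = "%Y-%m-%d %H:%M"
durations = [
    (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
    for start, end in incidents
]
mean_minutes = sum(durations) / len(durations)
print(f"mean time to restore: {mean_minutes:.0f} minutes")
```

The "detected" timestamp matters: measuring from when the customer was first affected, rather than when someone noticed, also exposes gaps in telemetry and alerting.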
4) Change fail rate
How many issues make it to production? They may come from new feature development, a bug fix that introduced new issues, or a networking configuration change.
This metric creates a healthy tension with the previous metrics. For example, a high change fail rate alongside a low delivery lead time might indicate you are running too fast and need to slow down and focus on quality.
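The calculation itself is simple; the work is in deciding what counts as a failed change. A sketch, assuming each production change in your deployment log is flagged if it later caused an incident, hotfix, or rollback (the flags below are hypothetical):

```python
# Hypothetical deployment log: True marks a change that later caused a
# production incident, hotfix, or rollback.
changes = [False, False, True, False, False, False, True, False, False, False]

fail_rate = sum(changes) / len(changes)
print(f"change fail rate: {fail_rate:.0%}")
```

Watching this alongside deployment frequency is what surfaces the speed-versus-quality tension described above.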
There are a lot of good metrics that can be used to measure and improve your SaaS software delivery ecosystem. The set above is a great starting point, and from here you can layer on other metrics that are more specific to the optimisations your organisation needs. Just be sure to understand your specific context, and focus on the system as a whole.