Technical Due Diligence – an Art supported by a Checklist

This list should help you think about the various aspects to cover in a technical due diligence exercise. In my experience, no due diligence exercise follows the same script; they always take a unique path. This list can help as a trusty guide, not a script.

It really helps to request some documents from the team up front as it provides the team with the chance to think about the scope of the upcoming DD exercise. Their responses provide you with a better chance to make the most of the limited time for the exercise.

It’s crucial not to come across as judgmental or condescending. This is hard to do given you’re about to ask them about numerous best practices and the least desirable aspects of their baby! Show some empathy for the individuals going through DD. I find it helpful to keep in mind that I have never written and maintained perfect software, so I don’t expect this from any team. Despite your questioning, you are not trying to catch them out. The goal is to understand the opportunities and risks that exist within the solution/team so that this perspective can be fed into the overall deal context.

Some perspectives you should consider

Architecture

  • Describe the overall architecture of the system.
  • What architectural documentation exists?
  • Easily understood or complex?
  • Are you using industry-standard components?
  • What 3rd-party vendors are used to make the solution work?
  • What would be the next major architectural step?
  • How old is the system? Approximately how many different people have worked on it?

Scalability

  • Which physical and process segregations exist?
  • Do you have any load balancing in place?
  • What are the single points of failure?
  • How would you scale the system if your load increased dramatically?
  • Can you achieve automated scaling? Elastic scaling?
  • Cost structures (licensing, compute, storage) – What costs scale as you scale out your system?
  • What else could you automate to improve operational efficiency?
  • What’s the next scalability challenge?

Performance

  • What performance monitoring practices are in place? Manual, automated?
  • Has there ever been load testing? When?
  • Where are the bottlenecks? What is the thing that would break first under increased load?

Security

  • How is sensitive information stored in the system? How is it transmitted? (HTTPS, encryption at rest)
  • How are user passwords stored?
  • Where are application secrets stored, and how are they managed (e.g. 3rd-party API keys, database passwords)?
  • What protection exists against common attack vectors (OWASP Top 10, e.g. XSS, SQL injection)?
  • Any security-specific infrastructure, e.g. firewalls, IDS, or a WAF?
  • What were the latest penetration testing results?
  • What functions require root access? Who has this access?
  • How do you upgrade and patch software and the OS?
  • What’s backed up? Where? When was the last disaster recovery (DR) test?
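To illustrate the kind of answer you hope to hear for the password-storage question, here is a minimal sketch of salted, slow password hashing using only Python's standard library. The iteration count and parameter choices are illustrative assumptions, not a recommendation for any particular stack:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune for your own hardware


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using salted PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids timing side-channels.
    return hmac.compare_digest(digest, expected)


salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

A team answering "bcrypt/scrypt/Argon2 with per-user salts" is in the same good territory; an answer of "MD5" or "plaintext" is a material finding.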

Compliance

  • What sensitive information exists in the system?
  • Is the application or team certified against any standards (ISO, PCI)?
  • What standards do current or prospective clients mandate, or ask about?

Development processes

  • What languages and frameworks have been used? Why?
  • What is the structure of your Dev and Ops teams?
  • How is the team organised? How do they communicate and make decisions?
  • How does the team improve themselves?
  • What source control tools are used, and what branching strategies?
  • Unit Testing? Test Driven Development?
  • What environments exist other than production? (Dev, QA, UAT etc.) How are these managed?
  • Continuous Integration? Describe your DevOps toolchain.
  • How do you deploy the system?
  • Describe the current state of technical debt. How is this managed?
  • What version of frameworks are used? When were they last updated?
  • What would you add to the development team if you had the investment?
  • What operational metrics/tools are used in Production? Bugs, alerts, performance?

Maintainability

  • Is the source code readable and consistent?
  • What level of comments exists in the code? Code level, Module level?
  • Are they running on current releases of underlying software? Any significant changes on the horizon?
  • Are there any obscure 3rd party dependencies?
  • If applicable, what effort would be required to pick up and move the solution to a cloud vendor (AWS, Azure, etc.)?
  • Any long-term viability issues with specific vendors?
  • How could you improve maintainability?

Licensing issues

  • Do you own all of the code necessary to run the system?
  • How is the solution licensed to clients?
  • Does anyone else have (or could they claim) rights over the code?
  • What are the terms for any 3rd party licensed code?
  • What are the risks associated with those licenses? Are these critical pieces and could they be easily substituted?
  • Are there any viral OSS Licenses embedded in the solution?
  • Is there an IP strategy in place? How do you protect your IP?

Product

  • Is there a product roadmap?
  • Is there a product vision? Who owns it, and how is it articulated?
  • How do you prioritise features?
  • What major features have been released in the last 12 months?
  • Are metrics or tools used to influence the product direction?
  • How do customers influence the product direction?

Catch-All

  • What else is essential for a potential investor in your software to know? What could affect the system in the future?
  • With a significant resource investment (money, people), what would you do to the application or the software delivery team?

Measuring SaaS software delivery – Metrics that work


Knowing which delivery metrics to measure and optimise in a SaaS business is hard work. My intuitive attempts to find them over the last 20 years never led me to a point I was happy with – many of the things I have tried ended up abandoned due to complexity (hard to measure) or just not being that valuable in retrospect.

Naturally, my excitement was already high when reading Accelerate: Building and Scaling High Performing Technology Organizations… and it went off-the-dial when I came across an excellent set of metrics accompanied by an in-depth explanation. This book is a well-executed exploration of the data coming out of the State of DevOps report.

So what did I take out from this?

Avoid metrics based on team or individual outputs

Any metrics based on productivity or outputs are likely to be unhelpful towards the goals of a SaaS organisation. At best they focus on subparts of the system, such as an individual or a single team; at worst they can be gamed, creating ugly side-effects – try imagining what a commits-per-day or bugs-per-developer metric might do to a team dynamic.

What about Velocity, it’s agile!

Velocity is a good measure for capacity planning as well as team awareness and growth. It is not a good team productivity measure, and it gets worse when used in team-to-team comparisons. When misused, velocity metrics are likely to be gamed, losing any value they had in capacity planning.

Velocity is a measure local to a team because the team contexts and constraints are always different.

Focus on global outcomes

Your metrics should focus on global system outcomes, that is, those that can best be influenced by all parts of the organisational system working well together.

An example of conflicting local metrics

As an example of a broken system, imagine the hosting operations team were focussed on application up-time as their primary metric, while the product development team’s primary metric was feature output. The most likely outcome is low-quality code shipped into production at a fast rate – everyone loses here. While the product dev team might improve their metric, they lose motivation shipping low-quality code, and operations are angry with the dev team for their tanking metric (and for getting support calls in the night). The big losers are the customers and, ultimately, business revenue. Both these metrics feel logical at the local team level, but the conflict at the system level is perilous.

Simple system metrics that work

Metrics should never be viewed alone; they should always be viewed in context. The following four metrics make a solid starting point for considering your SaaS software delivery ecosystem as a whole.

1) Delivery lead time

There are many ways of measuring this metric, and it can often be organisation specific.

A good starting point for thinking about this measurement is from the time that the development team start work through to the time the feature gets deployed into production.

Delivery lead time is a good measure of system throughput. As a side note, be aware that poor requirements could negatively affect this metric.

2) Deployment frequency

How many deployments is the team doing? If you subscribe to modern DevOps thinking, this is a good one. A higher number of deployments usually correlates well with support responsiveness, product innovation, and quality.

Deployment frequency is a proxy for batch size, which is often hard to measure. Small batch size is known to achieve better flow, improving feedback loops to mitigate risk while also increasing levels of experimentation and motivation. See The Principles of Product Development Flow by Donald G. Reinertsen.

3) Time to restore service

The time to restore service is the average time it takes to get things back to normal when they go wrong. It’s a great measure of internal support responsiveness and can help identify system issues such as resource over-utilisation within teams impeding flow, internal communication issues, lack of production telemetry, and ineffective error monitoring.

4) Change fail rate

How many issues make it to production? Maybe as a result of new feature development, a bug fix that introduced new issues, or a networking configuration change.

This metric creates a healthy tension with the previous metrics. For example, a high change fail rate alongside a low delivery lead time might indicate you are running too fast and need to slow down and focus on quality.
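To make the four metrics concrete, here is a small sketch of how they could be computed from a deployment log. The record format, the dates, and the two-week observation window are all invented for illustration; real pipelines would pull these timestamps from issue trackers, CI/CD tooling, and incident systems:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment log: (work_started, deployed, caused_failure, restored_at)
deployments = [
    (datetime(2023, 1, 2), datetime(2023, 1, 5), False, None),
    (datetime(2023, 1, 4), datetime(2023, 1, 9), True, datetime(2023, 1, 9, 4)),
    (datetime(2023, 1, 10), datetime(2023, 1, 12), False, None),
    (datetime(2023, 1, 11), datetime(2023, 1, 16), False, None),
]

# 1) Delivery lead time: work started -> deployed to production
lead_days = mean((deployed - started).days for started, deployed, _, _ in deployments)

# 2) Deployment frequency over the observed window (2 weeks here)
deploys_per_week = len(deployments) / 2

# 3) Time to restore service, averaged over failed changes only
restore_hours = mean(
    (restored - deployed).total_seconds() / 3600
    for _, deployed, failed, restored in deployments
    if failed
)

# 4) Change fail rate: failed changes / total changes
fail_rate = sum(failed for _, _, failed, _ in deployments) / len(deployments)

print(lead_days, deploys_per_week, restore_hours, fail_rate)  # 3.75 2.0 4.0 0.25
```

Even a crude version of this, run weekly, keeps the conversation anchored on system-level outcomes rather than individual output.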

Conclusion

There are a lot of good metrics that can be used to measure and improve your SaaS software delivery ecosystem. The set above is a great starting point, and from here you can layer on other metrics more specific to the optimisations your organisation needs. Just be sure to understand your specific context, and focus on the system as a whole.

The cost of client specific customisation on SaaS products


When it comes to managing a SaaS product roadmap, consumer-focussed SaaS product teams have it easy. Sure, it’s not trivial to work out what to build at first. But once they have established product-market fit, consumer-SaaS teams know who their customers are and can coordinate a good sampling of people to help them understand what to build next.

When you get into the small-medium business (SMB) or enterprise SaaS markets, this purity begins to disappear. Some spine-chilling [to product managers] words like “client-funded feature”, “sponsored capacity”, and “client customisations” will find their way into your product priority discussions.

In the eyes of the survival-focussed founder, the CFO, or the less experienced product team, it’s hard to contain the excitement of this scenario.

“ Let me get this straight… we have clients who want to pay us for the privilege of building the features that already exist on our product roadmap. All I have to do is add some customisation here, or change a priority ordering there… this is amazing! ”

What could possibly be wrong with this scenario?!

Before I begin to answer this question, I want to make it clear that accepting client money for SaaS feature development is not always the wrong thing to do. There are going to be times when it makes sense and is precisely the right thing to do or the only way to survive. But realise that things can and likely will go wrong – nothing is free in life, and this is no exception. You need a very clever and strong product management focus to help you recognise the traps and guide you through the relationship without picking up a big penalty fee on your very tight SaaS baggage allowance¹.

So what can go wrong?

Fringe features – Anchors that are hard to shake

You can end up under significant pressure to build features that have no rightful place in your product, the type of functionality that future clients are unlikely to use. This is not ideal, as the total cost of ownership of the feature will end up being significant: every release cycle, every UI refresh, every framework version upgrade, you will be maintaining this product feature. Think of it as a tax on the initial decision, or more accurately, product debt. The more customised features that get added, the higher the cost of servicing the debt.

But a small number of customisations can’t be that bad?

You’re right, taking some money to build a single client-specific feature will probably not sink the ship; you might get away with tens of them if your product is selling well enough.

But a darker side of this equation is that sales teams with targets can find themselves addicted to the short-term revenue streams. They are not tuned in to see the long-term damage that can be caused – it’s not readily apparent – just as product managers are unlikely to be able to balance a budget. A drug-addiction parallel is strong but appropriate here, as the long-term damage traps the SaaS product in a vicious cycle of custom builds for short-term revenue, and the resulting Frankenstein product can be a nightmare to maintain.

How can we minimise the impact of custom features?

The product management approach

A skilled team can help manage the client expectations, tightly scoping the level of customisation. Even better, we can shape the client towards a version of the feature which can be considered industry best-practice and hence could realistically (not just wishful thinking) end up being offered to other customers as part of the core SaaS offering or as an up-sell.

The technology approach

There are architectural patterns and API versioning techniques that allow you to isolate the customisation. We loosely couple the customisation code from the core SaaS platform and reduce the lifetime cost of the custom feature.
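As a sketch of what this loose coupling can look like, consider a per-tenant extension point. The `InvoiceFormatter` hook and the tenant names here are purely illustrative; the point is that the core platform only knows the interface and never imports the client-specific logic directly:

```python
from typing import Callable, Dict

# The core platform defines a narrow extension point.
InvoiceFormatter = Callable[[dict], str]


def default_formatter(invoice: dict) -> str:
    # Core behaviour every tenant gets out of the box.
    return f"Invoice {invoice['id']}: ${invoice['total']:.2f}"


_formatters: Dict[str, InvoiceFormatter] = {}


def register_formatter(tenant: str, formatter: InvoiceFormatter) -> None:
    # Client-specific modules call this at start-up to plug themselves in.
    _formatters[tenant] = formatter


def render_invoice(tenant: str, invoice: dict) -> str:
    # The core only ever dispatches through the interface.
    return _formatters.get(tenant, default_formatter)(invoice)


# The customisation lives in its own module, outside the core platform:
register_formatter("acme", lambda inv: f"ACME-PO {inv['id']} / {inv['total']}")

print(render_invoice("globex", {"id": 7, "total": 10.0}))  # Invoice 7: $10.00
print(render_invoice("acme", {"id": 7, "total": 10.0}))    # ACME-PO 7 / 10.0
```

When the client eventually leaves, the custom module can be deleted without touching core code, which is exactly the property that keeps the lifetime cost of the feature down.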

OK, that’s a bit scary, so what else should I be aware of?

So far, we have outlined the more common scar-causing patterns you can see in mature SaaS businesses that have played the customisation game. But a bunch of other side-effects can turn up:

Configuration overload

Your product configuration options can become so complex that nobody understands the full array of settings you have and the interplay between them. This will slow everything down: implementation, development, sales, and product. You end up at a point where no person can reason about proposed changes and their impact on your SaaS platform.

The right features left unidentified

The custom client feature analysis phase distracts your product team from the higher-return task of working out the killer features that would provide the highest ROI in the market. This is what you should be spending your precious development resources on.

The right features delayed

The opportunity cost of building the customised features distracts you from delivering the features that provide value for the majority of your clients – and reduces your medium-term SaaS sales pipeline and subscription revenue.

Technology team motivation

Your technology team does not like too much custom-domain complexity², especially if they did not create it themselves. For future generations of your expensive developers, it makes the job harder, leaving them feeling lost and ineffective amid regular monotony. This hidden cost will erode productivity and hit the retention of your most talented staff.

Summary

When considering client specific customisation, ensure you have a good understanding of the cost/benefit trade-off.  If/when you do move forward, ensure you have tight management of the customer’s expectations, and leverage architectural patterns to isolate and minimise the maintenance cost of any fringe features you build.

 


Footnotes

¹ That was an attempt at an airline check-in counter gag – a call-out to the technical debt and operational debt that you will pick up if you take on too much customisation. You will own this debt forever, so don’t take it on lightly.
² Custom-domain complexity refers to complexity resulting from your product design, rather than complexity from development frameworks and infrastructure. Interestingly, developers have a higher tolerance for the latter.

Scaling software delivery teams – up & out


In the world of technology infrastructure (servers, networks, storage, etc.), it’s normal to talk about both scaling-up and scaling-out your platform. In this context, scaling-up means making the things you already have work faster by adding better CPUs, more memory, and speedier networks, while scaling-out involves creating additional parallel execution paths to get more work done in a distributed fashion.

Both approaches are sensible strategies when done at the right time as they both will increase the overall transactional throughput for your platform. The approaches are complementary and often should be used together.

A parallel from scaling hardware to scaling teams.

I now want to carefully draw a parallel between the world of scaling hardware and that of scaling modern software delivery teams. I think it fits pretty well, to a certain point, but you can be the judge.

Scaling software delivery teams

The goal of scaling your team is to increase overall feature delivery throughput, i.e. the volume of quality product you can produce as a company. Let’s talk about what scaling-up and scaling-out might look like in this context.

Effective scale-up

The goal of scaling-up a team is to deliver higher-quality and/or faster software. When attempting the scale-up, the activities are generally team-practice focussed, including things like:

  • Adopting an agile framework such as Scrum.
  • Coaching peer solution design and review.
  • Regular improvement focussed retrospectives.
  • Hiring members with more development experience.
  • Consciously training and up-skilling team members.
  • Continuous delivery best practices like regular small scope automated releases.
  • Getting platform operational telemetry (errors, performance) feeding back into the team that build the software.
  • Better tooling and team working conditions – great hardware, the right tools.
  • Providing the headspace to focus for a reasonable period of time.

The best companies are continually scaling-up their teams; it’s not a phase, it’s a continuous improvement cycle that keeps on going. The best candidates expect this to be part of your culture.

When to scale-up

You should consider incrementally scaling-up all the time, layering on improved practices within regular continuous improvement cycles. It is much easier to do this when you are a small team as the practices will more easily propagate out as the group grows.

So there’s no bad time to scale-up. In the start-up world, a good time to consider a scale-up focus could be as soon as you have validated your product in the market, when you create a little bit of headspace to avoid piling on technical debt. You will need a solid senior resource base to achieve this: some people who know what good practices look like and, more importantly, understand the right time to add just enough of them into the mix.

Effective scale-out

As a quick reminder of what we mean by scaling-out our team: we are hoping to get more parallel and independently productive work-streams, i.e. more people creating a higher feature velocity. Achieving effective scale-out of a software delivery capability requires more than just bringing on teams through hiring or outsourcing; you must support this headcount growth very deliberately to realise the return on investment.

In a software product (SaaS) environment, the activities that enable the scale-out of teams have become much better understood in recent years. It still feels a little more like art than science, as the best-practice patterns are debatable, evolving, contextual, and involve humans changing their behaviours – hence scale-out is a regular source of organisational missteps.

Common scale-out patterns

  • Applying architectural platform patterns to remove dependencies between teams.
    • Layered architectures – for example, a split between an API team, who works with a high degree of isolation from front-end mobile and web teams.
    • Microservices architecture where aspects of the product offering are implemented as a collection of loosely coupled services that can be more readily understood and extended by the teams that manage them.
  • Creating shared-service teams that support and scale-up (up-skill) the more focussed feature (product area) teams.
  • Flattening your org structure while focussing on sharpening your mission and encouraging effective inter-team communications, allowing higher degrees of team accountability and self-organisation.¹

When to scale-out

Scaling-out teams is not an easy exercise; the additional synchronisation and communication overhead to support decentralisation of your product stream causes you pain and costs you money. It also requires clever change management to bring everyone along for the journey.

With that said, when it’s obvious you need greater feature throughput, and you have been in this position for many months with no end in sight, scaling-out the team is probably the right thing to do.

Do I really need to scale-out?

When you have a small delivery team, maybe fewer than 20 people, with no concrete plans to grow, you may never need to action a scale-out. A straightforward monolithic architecture will give you the best bang for your buck; the team’s natural size should make transparent communication and alignment easy enough.

Even if you have plans for rapid headcount growth, most platforms should start this way during the validation period – keeping everything really simple and lean. Premature segregation of product streams or architectural loose coupling can make the application harder to refactor quickly when the early learnings start coming in during your product validation.

An excellent technical lead can lay some of the foundations, i.e. the architectural seams that allow for future scale-out strategies when the team and the platform need to grow, which is often years after the validation phase.

Common anti-patterns when scaling teams

  • Scaling-out poor practices: Businesses that scale-out without ever having a focus on scaling-up, effectively amplifying the inefficiencies that already exist.
  • Scaling-out with no supporting platform architecture: Companies not understanding that to onboard more teams, the platform and delivery support mechanisms must also morph to meet this demand. They end-up growing with no deliberate plan other than to increase headcount. At best the company will miss out on a significant competitive delivery advantage, at worst they may end up clogging the arteries of product delivery – the lifeblood of any SaaS business.
  • Over decentralisation: Tech leaders who are drunk on the microservices kool-aid with not enough platform guidance and oversight, and now own a hard-to-manage diverse application portfolio.
  • The interchangeable teams’ myth: that is, scaling-out generically skilled (non-specialist) teams, expecting them all to be equally effective experts across the whole software system.

Ironically, high-demand products with paying users allow a business to gloss over fundamental team scale-out issues for many years. It is also these products that can afford the version 2.0 rewrite build, often in parallel while sunsetting version 1.0. Most SaaS companies will never get this luxury so will need to adopt more incremental strangler strategies to remove their scar tissue, or die while trying.

A note on additional headcount efficiency

As you grow your team headcount, the impact on the overall velocity of each additional developer will be significantly less than linear. That is to say, one-hundred developers are not 5x faster than twenty developers – not even close! The effectiveness of your scale-up practices will be the crucial factor in determining the incremental output of new hires.
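One common way to reason about this sub-linear scaling, sketched here as an illustrative model rather than measured data, is Brooks-style communication overhead: the number of potential pairwise channels grows quadratically with headcount.

```python
def channels(n: int) -> int:
    # Potential pairwise communication channels in a team of n people: n*(n-1)/2.
    return n * (n - 1) // 2


for n in (5, 20, 100):
    print(f"{n:>3} developers -> {channels(n):>5} potential channels")

# Going from 20 to 100 developers is a 5x headcount increase,
# but roughly a 26x increase in potential communication channels.
print(round(channels(100) / channels(20), 1))
```

This is one reason the scale-out patterns above all aim to cut cross-team channels: isolated architectural seams and self-organising teams keep each person's effective communication load closer to constant as the organisation grows.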

Final thoughts

A note on incrementalism

Change is hard, it’s best undertaken with an incremental approach. Too much change too quickly will create havoc with staff morale, retention and product stability. Too little change breeds apathy, a lack of competitiveness and will also impact retention of your most talented team members. So find your Goldilocks zone of change.

Summary

To summarise, scale-up (up-skill) your teams continuously. Apply scale-out (decentralised) patterns at the right time if-and-as required, and do it very deliberately during early stages of headcount growth. While effective scale-out practices can be applied later-on, they will cost you a lot more.


Footnotes

¹ Warning – that’s highfalutin language.

Great software teams are well connected to their customers


One of the many interesting things most of the world doesn’t understand about great software people… They simply enjoy building cool things.

Why do I like this quote?

It reminds me that an important part of my job is to make the software I work on cool – to both myself and the teams I am working with.

That means focussing the team on the business mission while also connecting them to the customers.

How do you connect your team with your customers? By bringing the user base as close as we can to the delivery team, using tooling like live application telemetry (APM), user and feature analytics tools, user-forum and feedback tools, and good old-fashioned direct customer interactions.

Some of the tools I have seen in action – with mixed results

Application telemetry and APM

Product analytics

Direct customer interaction

Customer feedback

How we price – Everyone wants value, not hours


Why we advise our clients against fixed hourly & day rate agreements

Let’s start with a view of my personal goals and incentives as a consultant. What are they:

  • provide value to my clients by helping them discover and address their challenges. This keystone goal facilitates:
    • having a good relationship with the client, getting mutual satisfaction and enjoyment from the partnership – we have fun doing it.
    • future business, they want me back and recommend my services to other potential clients.
    • getting paid – in line with my client’s perception of the value I am bringing to the individual situation.
  • learning new things on the job, both client specific learnings (their product & process) and transferable learnings (life lessons and technical skills) – professional and personal growth.

Over the last decade, I have frequently experienced that a granular focus on time and tasks often gets in the way of these goals.

What’s wrong with fixed hourly/day rates

A fixed rate can incentivise the wrong behaviours; both my client and I will end up with too much focus on the less relevant aspects of the agreement.

When I have billed hourly, I am in a position where my primary and most visible incentive is time spent, rather than outcomes. My clients are incentivised to get the most out of my hours; it’s the one thing that is easiest to discuss and manage. The paradox here is that the very reason they are hiring you is that they have a gap in expertise, so they are often not in the best position to decide how your hours should be spent. This works against the results that my client is trying to buy.

I am passionate and hardworking, and will often put in extra effort to satisfy my personal drive for understanding and finding the “best” approach, increasing the value I am offering – closing personal knowledge gaps around my client’s problem. In addition, many of the best and most valuable ideas come when I am exercising, reading, or taking a shower… not always well-received entries on an hourly timesheet!

The monthly fee – a significant improvement

Many engagements have a genuinely undetermined scope, due to the complexity of interrelated challenges and the many unknowns to be discovered. In this situation, a monthly fee approach is my preferred way to go. A monthly retainer promotes a partnership, analogous to the way a healthy full-time employee relationship works. It also provides adaptability, leading to a more evolutionary approach to the engagement. All incentives now shift to the macro picture of value. The consultant is incentivised to use their expertise to focus on the initiatives that will realise the highest client impact in an effective manner, and I feel less of a requirement to ask for permission.

The client’s value equation shifts from a tedious and often defensive microanalysis of an hourly/daily time sheet, to a much more powerful question:
How much value did the consultant add to my business this month? The client should be regularly asking:

  • Is the monthly fee worth the value my business is getting?
  • Do I want to continue with this next month?

The client also gets predictability on the cost side of the equation and is encouraged to exercise their option to end the engagement if the value side of the equation does not stack-up.

An example of where and how hourly breaks down

Imagine my client uses a specific type of technology framework that I, as a consultant, have no direct experience with. We have also agreed that I am going to engage with the client’s delivery team to help them improve their development practices using this framework. To best achieve this goal, and to have some credibility with the dev team, I want to spend, say, two hours researching and tinkering with this tool, so that I can see how it hangs together compared to other frameworks I currently work with.

The question here is: is this chargeable hourly work? You probably have an intuitive answer in your head.
If your answer was yes, does it change if I said I was really enjoying the tinkering, so I did a few late-night spikes and spent 12 hours on it? Still chargeable hourly work?

In this fictional scenario, under an hourly rate, I am not comfortable charging my client 12 hours for my late night learnings, yet I would not be happy charging them zero hours – so I am forced into my own value balancing act – How many hours do I charge? What am I happy with? What would my client be happy with?

This scenario illustrates the awkwardness of hourly billing. In my experience, at best one party, and at worst both, will at some point feel that the hourly value does not stack up for all tasks in a timesheet. A fair-priced monthly engagement goes a long way to addressing this issue by smoothing over the peaks and troughs of the hourly cost equation, pushing the focus to overall value delivered and a macro view of the activities.

What about fixed price?

Fixed price confers the same benefits I have outlined above. I have previously done fixed price projects and am happy to consider them when the scope of the deliverables is known. However, my clients tend to exist in a world where the path is less defined, and the value I add is helping them navigate on a journey of discovery and execution.

Summary

As a consultant, I believe the value you are adding to an organisation should be significantly larger than the fee you are charging. If the perception of these numbers even comes close, I would advise any client to look at different options. When things are aligned well, and the value significantly outweighs the cost, it is in everyone’s best interests to focus on the areas where we can further maximise value. In contrast, there is little or even negative utility in putting tight scrutiny over hourly costs.