How to help developers positively influence testability

As can be true of many things in life, those with the power to effect the greatest change are often the ones who don't know it. Developers have it within their power to enhance the testability of a system more than most roles in the organisation, both in the day-to-day testing done by a team and in the iterative quality-of-life improvements needed for great testing to occur.

In my experience, some members of this discipline are the hardest to convince of their own influence. There is often a time-displacement challenge here: time spent making a system more testable now pays off later, with less incident management and rework to deal with. Never mind building the smallest, simplest thing that addresses the customer need or mitigates risk. Maximising the amount of work not done is the goal, although not a widely adopted one in my experience.

We should also be mindful that testing changes testability too. And developers do a lot of testing, more than most testers give them credit for. Each compilation is a test, after all, although you could argue a shallow one. Take a system that hasn't changed for a long time, whose original authors are long gone and which has no unit tests. What does the developer charged with changing it do? They test it. They add tests, insert logging, step through, debug and inspect. This has a profound impact on the ease and shape of the testing that comes after it.

For example, a while ago I worked on an HTTP API orchestration system. It was completely bespoke, with a rudimentary scripting language on top of the main LAMP application. Too much time on idle hands there. There were no low-level tests for this system, which was a giant integration machine. There was one large ‘common’ class which did many things: it wrote to audit and billing, and held mundane helpers for times and dates. The previous team had delivered a system that was opaque in the extreme. In their defence, another organisational heuristic was in play: turn a proof of concept into a production system serving millions of requests per day, then ask for more change.

Over time the developers on the team changed, multiple teams formed and the mindset shifted. Testing became a first-class concern. The system itself was decomposed into smaller services; deployment was automated, logs were centralised and the architecture documented. After that, testing diversified into performance, capacity and security. Test automation that had only run locally became part of the deployment pipeline.

It also raises the question: how much does testability matter when compared to getting a product out to market? For me, you need to be able to confirm the value you think you are generating. Generating events for each step of a customer journey, for example, using some form of tracing identifier, is great for proving out value and useful for testing a system too. The developers on the orchestration system mentioned above could have done this. It came down to knowledge (‘I don’t know how to make this better’) and pressure (perceived time to market). Once the capacity of the system was under threat, though, all that unacknowledged debt became a problem.
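A minimal sketch of the journey-events idea, assuming a hypothetical event store and invented step names — the point is that every step carries the same tracing identifier, so events can be correlated for both value analysis and testing:

```python
import uuid

def start_journey():
    """Create a trace id that ties every step's event together."""
    return str(uuid.uuid4())

def emit_event(events, trace_id, step, **details):
    """Append a structured event; in production this might go to a queue or log."""
    events.append({"trace_id": trace_id, "step": step, **details})

events = []
trace_id = start_journey()
emit_event(events, trace_id, "basket_created", items=2)
emit_event(events, trace_id, "payment_taken", amount_pence=1999)
emit_event(events, trace_id, "order_confirmed")

# Every event for this journey can now be found by its trace_id,
# whether you are proving value or checking behaviour in a test.
journey = [e for e in events if e["trace_id"] == trace_id]
```

A tester can then assert on the sequence of steps for a journey, rather than inferring behaviour from the database state alone.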

In reality, once the system became revenue generating, investment in testability was worth more. You can never be sure that revenue will arrive, so striking a balance is important (and difficult).

How do developers influence system testability

Here is my incomplete list of ways developers affect the testability of a system. There are more, but I didn't want this blog to become a book:

By their subconscious actions when dealing with code they didn’t write

Your first reaction to application code you didn’t write has a massive impact on testability. Reactions such as advocating for a rewrite without any existing tests are red flags. A gradual approach, where existing code is ‘illuminated’ with new tests, is a positive sign for me: shine the light, make the change, check the impact, check over time. It scales well to higher, system-level testability in the future. The ability and desire to do so suggest that real value is placed in observability, which is useful when your system as a whole has a problem or its behaviour needs to be illuminated in some way. This provides a massive boost to your testability.
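The ‘illuminate first’ approach can be sketched as a characterisation test: before changing inherited code, pin down what it does today. `legacy_price` here is an invented stand-in for untested code a team might inherit, not anything from a real system:

```python
def legacy_price(quantity, unit_pence):
    """Invented legacy function: computes an order total in pence."""
    total = quantity * unit_pence
    if quantity >= 10:  # an undocumented bulk discount we only found by reading the code
        total = int(total * 0.9)
    return total

def test_characterise_current_behaviour():
    # These assertions record observed behaviour, not a specification.
    # Shine the light first; then make the change and re-run to check the impact.
    assert legacy_price(1, 100) == 100
    assert legacy_price(10, 100) == 900  # the hidden discount, now pinned down

test_characterise_current_behaviour()
```

Once the existing behaviour is captured, a rewrite or refactor has something to be checked against, which is the whole point of illuminating before changing.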

By the level to which they engage with generating system events

I have gone into battle with developers before over logging and instrumentation. Purity of the code or data model and performance concerns are the most common hurdles to leap. Security is often a greater problem, though: more transparency must be managed carefully. From a testability and operability point of view, I try to encourage developers to first enumerate the system states, then create the log entries or events. This feels more controlled than adding them in an ad-hoc manner. It also allows me as a tester to guide towards the information I want, depending on what matters most: prioritising performance or resiliency, for example.

Their ability and desire to share key low level system knowledge

I recently worked on a massive-scale system using Apache Spark and Kafka within a Cloudera cluster. I had never worked with Spark before, let alone run it and Kafka within a proprietary clustered environment. This presented challenges, as I wanted to consider low-level testing (targeting data and how it is transformed) plus capacity testing (large data flows and batching of transformations). Many synchronous systems fed into the asynchronous Spark system, causing race conditions and timing issues. The lead developer had production experience with Spark and we spent a significant amount of time testing together, with him teaching me the practicalities of the technology, including useful CLI commands and checking in-process state. The developer in question was a good teacher and had the desire to share. After an initial testing-strategy session, we then tweaked feature by feature as the system evolved. When trying to gain traction with testability on a new technology, a willing developer is gold, especially one with the knowledge to target one system interaction and check the wider system state.

The size, concurrency and vintage of the changes they make

Batch size is one of the key indicators of testability and flow. The larger the batch size, the more time you spend understanding and exploring. If developers are happy working in small batches, or are willing to split something larger into testable chunks, then all the better. It’s not only about the size of the change, though; it’s about concurrency and vintage. If developers have their heads in many changes at once, they are less likely to be able to focus on testability (or anything else). As an extension of this, having many pull requests open for varying amounts of time also limits testability, due to context switching. I would go further and say a long list of open (or even rejected) pull requests suggests poor testability, as we don’t know how to get them to done. Small, frequently integrated changes are a real boon to testability.

What they believe the testing done by testers can achieve

One developer I worked with used the phrase ‘why keep a dog and bark myself’ when it came to testers. If your developers still believe you are there to save them once they are ‘dev done’, then testability will struggle. Increasing only the tester’s ability to test comes at the expense of overall testability: you will end up fighting compilation errors and uncaught exceptions rather than getting stuck into the testing. I recently finished a six-month consultancy gig in which we transitioned from ‘let’s hope it works’ to ‘it won’t fall over on first run’. I remarked at the end that the testing had barely begun; most of my work was getting the developers into a position to test, and picking out advocates who wanted to.

Most of these are signs to look out for, and each organisation will have its own set. The vast majority of developers want to contribute to more testable systems; they might not know how, or they might think they need permission. Look out for some of the above in interactions with developers, then find the testing advocate(s) on the team and help them increase testability for everyone.

The Team Guide to Testability has loads of tips on how to get developers (and other roles) invested in testability. Check it out here: