I had a great time answering questions as part of the Testability Power Hour on the Club, in collaboration with Nicola Owen and Ministry of Testing. I’ve collected all my answers on this blog, as I like to keep a record of the content I create in one(ish) place. Here are some of my responses, starting with my final thought, as it’s important:
Your system is hard to test. Many years of experience in testing have given me the confidence to make that sweeping generalisation. Poor testability warps how we (and the rest of the software development world) think about testing and how it adds value. I think it’s one of the defining challenges of being a tester.
We can rise to this challenge though. Adding a focus on testability to your work will help you, your team and your organisation. Great testing comes from combining sharpened tester skills with enhanced testability. It is within our gift as testers to be catalysts for change. Always ask the question: “How can we make this more testable?”
And the rest:
Testability, at its simplest, is how easy (or hard) a system is to test. This leaves a lot to the imagination though, which is what makes it so much fun for me. Subjective is definitely the word, so you need to drill down a little more.
For me, there is a social and a technical element. The system can have testability (mostly technical), but also the team needs the ability (and the will) to test (mostly social). If there is a gap or imbalance between those two, toil and frustration can occur. That gap is where we need to focus our efforts.
What are some practical guidelines for Developers to help them make their code more testable? Front end, API and backend (if it’s appropriate to divide them up in that way from a Testability point of view).
For this, I try and keep in mind four practices to advocate for:
- To get feedback early, you need to slice through the architecture rather than build layer by layer. So build a small part of the whole application: the persistence layer, the API and the front end together. Otherwise you store up feedback until the end, which is bad.
- Add logging and instrumentation for what matters, whether function or performance. This is where testers come in. Worried about performance problems? Add metrics. Need to know when a particular code path is triggered? Add an event. Ask for the information you need.
- Drive development with tests – consider minimal design to solve the problem. Less bloat, simpler to test. TDD can be a hard sell, but if you have a culture that drives design with tests and refactors often, you will have less (obvious) bugginess and you can explore for the really gnarly problems.
- Story kick-offs, pairing, “show me what you’ve done” sessions, demos – all of these are gold for more testable code. Lots of collaboration means fewer assumptions and claims about what has been built; sharing that knowledge early is key. Also, for testers: if a developer says “hey, have a look at this”, you SAY YES. Not “I’m too busy right now.”
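As a concrete illustration of “ask for the information you need”, here is a minimal sketch of instrumenting a code path. The checkout domain, event name and discount code are all hypothetical examples, not from any real system:

```python
import logging

# Structured-ish logging via Python's standard library. The tester has asked
# to know whenever the rarely-used legacy discount path is triggered.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("checkout")

def apply_discount(order_total: float, code: str) -> float:
    """Apply a discount code, emitting an event when the rare path fires."""
    if code == "LEGACY10":  # the code path a tester asked to be told about
        log.info("event=legacy_discount_applied code=%s total=%.2f",
                 code, order_total)
        return order_total * 0.9
    return order_total

print(apply_discount(100.0, "LEGACY10"))  # → 90.0
```

During exploratory testing, a log line like `event=legacy_discount_applied` turns “I think that path ran” into “I can see that path ran”.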
Why should Developers care about Testability… what’s in it for them? Will it take extra Dev work or actually help make their lives easier too? I’m thinking in terms of how to deal with any possible resistance in advance so we can get their buy-in from the get-go.
I try and go for the following angles:
- You get feedback on your code quicker. Unit tests don’t have millions of dependencies, integration tests have stubs where they need them, and acceptance tests are minimal but targeted. You can run them all on each change to get feedback.
- You build it, you run it – a lot more devs are on call/support now. If you want that to go smoothly (no 2am wake-ups) then a testable system is a must. If your system is observable (exposes its state), understandable (logs and metrics are meaningful) and decomposable (failure is managed and handled, rather than catastrophic) then it’s a whole lot better to support.
- Whole team testing – go for the selfish option too. If the organisation wants everyone to take part in the testing effort, then ask them to make it easier for themselves.
- Go beyond the devs – I can’t emphasise enough that your operations people (sysadmins, DBAs, application support) will benefit greatly from testability; we have a massive amount in common with our ops friends. Give them a hug. But ask first.
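The “stubs where they need them” point above can be sketched in a few lines. Everything here is illustrative – `StubGateway` and `OrderService` are made-up names standing in for a real external dependency and the code under test:

```python
from dataclasses import dataclass

class StubGateway:
    """Stands in for a real payment gateway, so tests need no network
    access and give feedback in milliseconds rather than minutes."""
    def charge(self, amount: float) -> bool:
        return amount > 0  # succeed for any positive amount

@dataclass
class OrderService:
    gateway: StubGateway
    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "rejected"

service = OrderService(gateway=StubGateway())
print(service.place_order(25.0))   # → confirmed
print(service.place_order(-1.0))   # → rejected
```

Because `OrderService` accepts its gateway as a dependency, the same code runs against the real gateway in production and the stub in tests – that seam is the testability win.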
Hi there! I’d like to ask something about testability team management. At the moment I have my ‘testabilitiers’ distributed across multiple squads. They have their own roadmap (as a testability team) and a squad roadmap (as squad members), and I’m seeing a lot of misinterpretation of priorities.
What would be the best testability team format (well, not the best, but the most recommended)?
This is an exciting question! Having a team of ‘testabilitiers’ is a very progressive development; I would love to speak to you about it.
In the world of ‘Team Topologies’ this is known as an ‘enabling team’. The problem here is that people need to be in either a product development team or the testabilitiers team; sitting in both will pretty much always create a conflict of interest.
Two things to consider here:
- An enabling team is supposed to be a change agent but if your testabilitiers are playing by the same rules as everyone else, what is really changing?
- How much are your product people (those with the budget) bought into the team’s existence? I would check that before continuing onwards.
Please check out https://teamtopologies.com/ and see if there is a pattern to make this happen.
Other than developers, who else could we speak to about improving the Testability of an application?
What should we be asking for?
This is a lovely question, which speaks to the overall responsibility for testability. It’s part of the whole product, not only the function of testers requesting help from developers. Although they are key allies.
Other roles would be Business/Data Analysts (more product insight makes it easier to test), Ops Engineers (sharing system diagnostics and customer usage), Product people (money, and a desire for timely feedback) and management (treating testability as a first-class strategic requirement).
We should be asking for:
- Controllability – feature flags, test data generation, disposable environments
- Observability – structured, aggregated logs on all environments to complement exploration, and dashboards that are readable and meaningful
- Decomposability – loosely coupled systems, so we can test early and isolate components to find problems
- Simplicity – simple architectures that aren’t a multitude of technologies and testers being in the room when these decisions are made.
They can be big changes, but with massive benefits for all.
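On the controllability point, feature flags don’t have to start big. A minimal sketch, assuming flags are read from environment variables (the flag name and function here are hypothetical):

```python
import os

def flag_enabled(name: str) -> bool:
    """A flag is on when its environment variable is set to '1' or 'true'."""
    return os.environ.get(f"FLAG_{name.upper()}", "").lower() in ("1", "true")

def recommendations(user_id: int) -> list[str]:
    # The new behaviour can be switched on per environment, so testers
    # can exercise it in isolation before any customer sees it.
    if flag_enabled("new_recs"):
        return ["experimental-item"]
    return ["default-item"]

os.environ["FLAG_NEW_RECS"] = "1"
print(recommendations(42))  # → ['experimental-item']
```

Even this crude approach gives a team control over when behaviour appears, which is the heart of controllability; dedicated flag services add targeting and auditing on top of the same idea.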