Reviewing test code together for better production code

Recently, I had a chat with Mark Winteringham about a couple of testing topics. Since I made some notes to prepare, I thought they could be converted into a blog post. In this post I will explore the first topic: what is the value in getting others to review your test automation code?

This is from the point of view that you, as a tester, are producing some form of automated tests. Usually this means testers producing automated acceptance tests, driving the whole system through a browser, for example. I suppose the first thing to say is that I’m not the biggest fan of testers being solely responsible for test automation. It puts up a false barrier between tester and developer, when we should be working together. And in my experience, the framework and tests are the worse for it.

However, testers are often highly invested in test automation, while developers are often invested in not doing it, or are actively discouraged from it in favour of feature development. It’s not perfect, but it’s the way the world is often shaped. You have to start from where you are. That’s where reviews can help.

The Short Version

TL;DR – Get your team members to help out with cross-discipline reviews. There are some cool benefits for all the code your team writes:

  • Better utils and library choices – maximise the good overlap.
  • Functionality is covered with the right test – minimise the bad overlap.
  • Using the same checklist as production code. Or actually creating one.
  • Be the change you want to see. Don’t ask for review on 100-file PRs.
  • Unifying the app and acceptance test code for a holistic coverage view.
  • Developers get to give feedback on code, which they like doing and keeps them engaged.
  • Other roles might become advocates for automation.

The Long Version

Now that we’ve got the short version done, let’s get into some depth.

Figuring out the overlap

On the theme of who writes what test, it tends to be that developers write unit tests and testers pick up and write acceptance tests. Then you get overlap, but in a bad way: testers’ acceptance tests often repeat what’s already been tested, or something that’s really a unit test ends up as a big horrible acceptance test that needs a browser. Sometimes you might have a grey area of integration tests, which can be a good place to collaborate, or can fall between the roles. If everyone is invested only in ‘their’ tests, those grey-area tests can be neglected.

The lesson here is that some overlap between tests is OK, but we should keep an eye on the type of overlap we have:

  • Bad overlap – tests at all levels are testing for the same thing. For me, it’s the assertions that tell the story. If your lower-level tests and higher-level tests assert the same things, you are probably maximising the bad overlap. If all your asserts are asking the same question, you tend to have tests that just check the app can build.
  • Good overlap – all tests, whether unit or acceptance, have utilities to do jobs: generate data, connect to things, real or mock. Here’s the secret: testers should be aware that developers will have written similar utils for their tests. Or even better, there are some from either side that you can share and make everyone’s tests better (see the sketch after this list). In addition, the ones developers write are usually better crafted as well.
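
To make the two kinds of overlap concrete, here is a minimal TypeScript sketch. Everything in it is hypothetical – the basket builder, the pricing function and the selectors – it just shows the shape: a shared builder (good overlap), a unit test that owns the calculation, and an acceptance test that deliberately doesn’t repeat that assertion (avoiding the bad overlap).

```typescript
// test-utils/builders.ts – the good overlap: a builder that unit and
// acceptance tests can both import. All names here are hypothetical.
export interface Basket {
  subtotal: number;
}

// Sensible defaults, so each test only states the detail it cares about.
export function createBasket(overrides: Partial<Basket> = {}): Basket {
  return { subtotal: 50, ...overrides };
}

// pricing.test.ts – the unit test owns the discount calculation...
import { test, expect } from 'vitest';
import { priceBasket } from '../src/pricing'; // hypothetical function under test
import { createBasket } from '../test-utils/builders';

test('applies a 10% discount over £100', () => {
  expect(priceBasket(createBasket({ subtotal: 120 })).total).toBe(108);
});

// checkout.spec.ts – ...so the acceptance test does not re-assert the same
// maths through a browser (that would be the bad overlap). It asserts the
// journey instead: the total the backend calculated actually reaches the page.
import { test, expect } from '@playwright/test';

test('checkout displays the priced total', async ({ page }) => {
  await page.goto('/checkout');
  await expect(page.getByTestId('basket-total')).toBeVisible();
});
```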

The review is a good way to break down those barriers and start to blur the boundaries of who writes what test – and to start getting the amount of overlap right.

Be the change you want to see

I complain about pull requests a lot. They exist, if we paired more then code review wouldn’t take so long, they contain too many changes, I want a walkthrough, there is no checklist, the build has failed, it’s still in draft, on and on I can go. The best way to solve some of my many problems is to be the change I want to see. You can get your automation reviewed AND influence wider code reviews positively.

  • Interactive reviews – posting in a Slack channel and asking for review is one thing; receiving an interactive review takes it to a whole new level. Offer to do a walkthrough of both the static code and of it running. Both states really help to get feedback. I’ve always liked the trio pattern: a tester, a dev and, if you can get them, someone from product or a business analyst. The tests are part of the product, after all. You might just enlist an advocate for testing if you cast the net a little wider.
  • Small batches – I don’t know about you, but I’ve tested my fair share of 100-file pull requests. You know the ones, the ones that cause more than their fair share of production issues. Don’t get me wrong, sometimes you have to get on and test them. Just remember that by testing it, you legitimise it. When you have test automation code for review, make it small and understandable. You might set a decent example.
  • Code review checklist – so many teams don’t have a code review checklist, even with all the resources available out there. Books, blogs, videos and workflow automation all help in the creation of one. However, all of that requires someone (or a group of someones) to get everyone together to create it. Keep it similar for test automation code and production code. Resist the urge to make test automation code special: patterns for most tests (Arrange, Act, Assert) are fairly universal, as are the general principles of readability, modularity and whatever else makes sense to your team. There is a sketch of that pattern after this list.
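
As a taster, here is what a checklist item like ‘follows Arrange, Act, Assert’ might look like when applied to an acceptance test. The route, button and article names are all hypothetical; the point is that the same structural question works for unit tests and browser tests alike.

```typescript
import { test, expect } from '@playwright/test';

test('a saved article appears in the reading list', async ({ page }) => {
  // Arrange: get the app into a known state.
  await page.goto('/articles/reviewing-test-code'); // hypothetical route

  // Act: the one behaviour this test exists to exercise.
  await page.getByRole('button', { name: 'Save for later' }).click();

  // Assert: ask one clear question, not a repeat of a lower-level test.
  await page.goto('/reading-list');
  await expect(page.getByText('Reviewing test code')).toBeVisible();
});
```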

Bringing tests together to bring the team together

I have on occasion added automated acceptance tests in a separate repo. Sometimes, when you are first starting out with a team adding these types of tests and selecting the right bits of framework, it’s OK to do so. Sometimes you need to be gentle as well, as you might get the pushback of ‘those don’t belong in the codebase’, which you will need to address. It happens less nowadays, thankfully.

I don’t believe this is sustainable though, as all your tests should be in one place. This makes it much easier to review them, detect the good and bad overlap (see above) and bring the team together. The best developers I’ve worked with are keen to write tests at all levels and keep them close to the code they are testing. A wonderful developer I worked with a few years ago voluntarily spent their first week or so with me, learning the application by helping improve our acceptance tests. Chef’s kiss.
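
For illustration, here is one hypothetical shape for keeping everything in a single repo, reusing the names from the overlap sketch earlier:

```
repo/
├── src/                  # application code
│   └── pricing.ts
├── tests/
│   ├── unit/             # fast tests, close to the code
│   │   └── pricing.test.ts
│   └── acceptance/       # browser journeys, same repo, same reviews
│       └── checkout.spec.ts
└── test-utils/
    └── builders.ts       # shared builders and helpers (the good overlap)
```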

And finally…

Developers like giving feedback on code, once you tempt them in. But bear in mind that sometimes getting feedback on your code can be a little bumpy, especially when you are starting out. If this is the case, try a trial run with a developer you work well with. If they start to rewrite things with you as a passenger, you chose the wrong developer to start the process with.

Always remember that you don’t have to implement all the feedback, but do act on some of it to keep reviewers involved.