Notes from Alister Scott’s “Pride and Paradev: A Collection of Agile Software Testing Contradictions”

I stumbled upon Alister Scott’s WatirMelon blog some years back, probably while looking for an answer to a particular question about automation, and found it to be a site worth following. There he talks about flaky tests, raising bugs you don’t know how to reproduce, junior QA professional development, the craziest bug he’s ever seen, writing code, and the classic minesweeper game. He was part of the Watir project in the past, but is now an excellence wrangler over at Automattic (the company behind WordPress). He has also written an intriguing book, titled “Pride and Paradev”, which discusses several of the contradictions we have in the field of software testing. In a nutshell, it explains why there are no best practices, only practices that work well in a certain context.

Here are a number of takeaways from the book:

  • A paradev is anyone on a software team who doesn’t just do programming.
  • Agile software development is all about delivering business value sooner. That’s why we work in short iterations, seek regular business feedback, are accountable for our work and change course before it’s too hard.
  • Agile software development is all about breaking things down.
  • Agile software development is all about communication and flexibility. You must be extremely flexible to work well on an agile team. You can’t be hung up about your role’s title. Constantly delivering business value means doing what is needed, and a team of people with diverse skills thrives as they constantly adapt to get things done. Most importantly, flexibility means variety, which is fun!
  • Delivering software every day is easy. Delivering working software every day is hard. The only way an agile team can deliver working software daily is to have a solid suite of automated tests that tells us it’s still working. The only way to have reliable, up-to-date automated tests is to develop them alongside your software application and run them against every build.
  • You’re testing software day in and day out, so it makes sense to have an idea about the internals of how that software works. That requires a deep technical understanding of the application. The better your understanding of the application is, the better the bugs you raise will be.
  • Hiring testers for technical skills over a testing mindset is a common mistake. A tester who primarily spends his/her time writing automated tests will spend more time getting his/her own code working instead of testing the functionality that your customers will use.
  • What technical skills a tester lacks can be made up for with intelligence and curiosity. Even if a tester has no deep underlying knowledge of a system, they can still be very effective at finding bugs through skilled exploratory and story testing. Often non-technical testers have better shoshin (a beginner’s mind, free of preconceptions) when testing a system. A technical tester may take technical limitations into consideration, but a non-technical tester can be better at questioning why things are the way they are and rejecting technical complacency. Often non-technical testers will have a better understanding of the subject matter and be able to communicate with business representatives more effectively about issues.
  • You can be very effective as a non-technical tester, but it’s harder work and you’ll need to develop strong collaboration skills with the development team to provide support and guidance for more technical tasks such as automated testing and test data discovery or creation.
  • Whilst you may think you determine the quality of the system, it’s actually the development team as a whole that does. Programmers are the ones who write the good or poor quality code. Whilst you can provide information and suggestions about problems, the business can and should overrule you; it’s their product for their business that you’re building, and you can’t always get what you consider to be important, as business decisions often trump technical ones.
  • A tester should never be measured on how many bugs they have raised. Doing so encourages testers to game the system by raising insignificant bugs and splitting bugs, which is a waste of everyone’s time, and it further widens the tester vs programmer divide. Once a tester realizes their job isn’t to record bugs but to deliver bug-free stories, they will be a lot more comfortable not raising and tracking bugs. The only true measurement of the quality of testing performed is bugs missed, which aren’t recorded anyway.
  • Everything in life is contextual. What is okay in one context makes no sense in another. I can swear to my mates, but never my Mum. Realizing the value of context will get you a long way.
  • Probably the best thing I have ever learned in life is that no matter what life throws at you, no matter what people do to you or how they treat you, the only thing you can truly control is your response.
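The point above about delivering working software daily hinges on a suite of automated checks run against every build. A minimal sketch of what one such check looks like, assuming pytest (or any similar runner) in a CI pipeline; `slugify()` is a hypothetical stand-in for real application code, since only the pattern matters: small, fast, deterministic checks that fail loudly when behavior regresses.

```python
# A minimal automated check of the kind a team might run against every
# build. slugify() is a hypothetical example function; the point is the
# shape of the check, not the function itself.

import re

def slugify(title: str) -> str:
    """Turn a post title into a URL-safe slug."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of other chars
    return slug.strip("-")

def test_known_good_title():
    assert slugify("Pride and Paradev!") == "pride-and-paradev"

def test_edge_cases():
    assert slugify("   ") == ""
    assert slugify("Already-a-slug") == "already-a-slug"
```

Because checks like these are cheap to run, they can be wired into the build so that every commit either confirms the software still works or fails fast, which is exactly what lets a team claim “working software daily” rather than just “software daily”.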

Lessons from James Bach and Michael Bolton’s “A Context-Driven Approach To Automation In Testing”

Automation has gained traction in recent years in the testing community because the idea of automating tests has been sold as a solution to many of the common problems of traditional testing. And the advertising worked: many software businesses now rely heavily on automated checks to monitor for known system bugs, and even to run performance and security tests. It’s no longer surprising that automation has been accepted by the general public as one of the cornerstones of effective testing today, especially since releases to production for most businesses have become more frequent and fast-paced. I use such tools at work myself, because they complement the exploratory testing that I do.

James Bach and Michael Bolton, two of the finest software testers known to many in the industry, however remind us, in their whitepaper titled ‘A Context-Driven Approach to Automation in Testing’, to be vigilant about the use of these tools, as well as the use of the terms ‘automation’ and (especially) ‘test automation’ when speaking about the testing work that we do, because they can carry dangerous connotations about what testing really is. They remind us that testing is so much more than automation, much more than using tools, and that to be a remarkable software tester we need to keep thinking and improving our craft, including the ways we perform and explain testing to others.

Some takeaway quotes from the whitepaper:

  • If you need good testing, then good tool support will be part of the picture, and that means you must learn how and why we can go wrong with tools.
  • The word ‘automation’ is misleading. We cannot automate users. We automate some actions they perform, but users do so much more than that. Output checking is interesting and can be automated, but testers and tools do so much more than that. Although certain user and tester actions can be simulated, users and testers themselves cannot be replicated in software. Failure to understand this simple truth will trivialize testing, and will allow many bugs to escape our notice.
  • To test is to seek the true status of a product, which in complex products is hidden from casual view. Testers do this to discover trouble. A tester, working with limited resources, must sniff out the trouble before it’s too late. This requires careful attention to subtle clues in the behavior of a product within a rapid and ongoing learning process. Testers engage in sensemaking, critical thinking, and experimentation, none of which can be done by mechanical means.
  • Much of what informs a human tester’s behavior is tacit knowledge.
  • Everyone knows programming cannot be automated. Although many early programming languages were called ‘autocodes’ and early computers were called ‘autocoders,’ that way of speaking peaked around 1965. The term ‘compiler’ became far more popular. In other words, when software started doing the coding, the name of that activity changed to compiling, assembling, or interpreting. That way the programmer remains someone who sits on top of all the technology, and no manager is asking, ‘when can we automate all this programming?’
  • The common terms ‘manual testers’ and ‘automated testers’ to distinguish testers are misleading, because all competent testers use tools.
  • Some testers also make tools – writing code and creating utilities and instruments that aid in testing. We suggest calling such technical testers ‘toolsmiths.’
  • We emphasize experimentation because good tests are literally experiments in the scientific sense of the word. What scientists mean by experiment is precisely what we mean by test. Testing is necessarily a process of incremental, speculative, self-directed search.
  • Although routine output-checking is part of our work, we continually re-focus on non-routine, novel observations. Our attitude must be one of seeking to find trouble, not verifying the absence of trouble – otherwise we will test in shallow ways and blind ourselves to the true nature of the product.
  • Read the situation around you, discover the factors that matter, generate options, weigh your options and build a defensible case for choosing a particular option over all others. Then put that option to practice and take responsibility for what happens next. Learn and get better.
  • Testing is necessarily a human process. Only humans can learn. Only humans can determine value. Value is a social judgment, and different people value things differently.
  • Call them test tools, not ‘test automation.’
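The distinction Bach and Bolton draw above, between output checking (automatable) and testing (human), can be made concrete with a small sketch. `format_price()` and `check()` are hypothetical names for illustration; the mechanical comparison is the part a tool can do, while deciding what to check and whether a difference matters remains the tester’s job.

```python
# Output checking: the automatable slice of testing the whitepaper
# describes. A check mechanically compares an observed output against a
# predefined expectation; it can report that behavior changed, but not
# whether the change matters. format_price() is a hypothetical example.

def format_price(cents: int) -> str:
    """Render a price in cents as a dollar string, e.g. 1999 -> $19.99."""
    return f"${cents // 100}.{cents % 100:02d}"

def check(observed: str, expected: str) -> bool:
    """A mechanical pass/fail comparison: a check, not a test."""
    return observed == expected

# A human chose these inputs and expectations; the tool only compares.
results = [
    check(format_price(1999), "$19.99"),
    check(format_price(5), "$0.05"),
    check(format_price(100), "$1.00"),
]
assert all(results)
```

Note what the check cannot do: it will never wonder whether negative amounts should be supported, whether `$0.05` should render as `5¢`, or whether the rounding policy is fair to customers. Those questions are testing, and they stay with the human.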