The Important Lessons I Learned This Year About Software Testing

This has to have been my most studious year yet when it comes to software testing and test automation. I read a good handful of books, regularly took a healthy dose of online programming courses, and kept up to date with many of the software testing events and conference recordings available online. It’s a weird feeling to have done that for the whole of 2015, and maybe I overdid it at times, but it was a great experience overall. Some of the exciting things I learned this year include:

  • The importance of writing independent automated checks, and the pain of refactoring working test code that was poorly written to begin with
  • Page objects for keeping web element locators and page methods in one place (a small sketch follows after this list)
  • Constructors and object-oriented programming in general
  • Running tests in parallel for faster feedback (see the thread-safe driver sketch after this list)
  • Docker containers as a lighter alternative to VMs for spinning up machine environments for parallel testing
  • Automated screenshots and reading/writing CSV files for test debugging and documentation (also sketched after this list)
  • Using user agents, Appium, and Android emulators for mobile test automation
  • Exploratory testing documentation using LICEcap and Screencastify
  • Security testing using IronWASP
  • Alternative web test automation frameworks: Ruby + Watir-WebDriver (less verbose and easier to set up) and Mocha + WebDriverJS (pretty reporting, easy to set up, and suited to people who prefer writing tests in JavaScript)
  • Scheduling test runs through Jenkins
  • Integrating MongoDB with Java, an option for moving test data out of code and into a database (the last sketch after this list)
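
To make a few of those concrete, here is a minimal sketch of the page object idea, assuming Selenium WebDriver in Java. The LoginPage class, its locators, and the element IDs are made up for illustration; the point is that locators and page behaviour live in one class, so checks call page methods instead of hunting for elements themselves.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // A minimal page object: locators and page behaviour live together,
    // so tests talk to the page through methods instead of raw element lookups.
    public class LoginPage {

        private final WebDriver driver;

        // Hypothetical locators, kept in one place; update here when the markup changes.
        private final By usernameField = By.id("username");
        private final By passwordField = By.id("password");
        private final By loginButton = By.cssSelector("button[type='submit']");

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // A page method models what a user does, not how the page is built.
        public void logInAs(String username, String password) {
            driver.findElement(usernameField).sendKeys(username);
            driver.findElement(passwordField).sendKeys(password);
            driver.findElement(loginButton).click();
        }
    }

A check then reads as plainly as new LoginPage(driver).logInAs("tester", "secret"), and only the page class needs to change when the locators do.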
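For parallel runs, the pattern that helped me most was giving each test thread its own browser. This is only a rough sketch, assuming the test runner (TestNG or JUnit) is already configured to run methods in parallel; the DriverFactory name and the choice of FirefoxDriver are mine.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // One WebDriver instance per test thread, so parallel checks
    // do not trample each other's browser session.
    public class DriverFactory {

        private static final ThreadLocal<WebDriver> DRIVER =
                ThreadLocal.withInitial(FirefoxDriver::new);

        public static WebDriver getDriver() {
            return DRIVER.get();
        }

        public static void quitDriver() {
            DRIVER.get().quit();
            DRIVER.remove();
        }
    }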
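For debugging evidence, a couple of small helpers go a long way: save a screenshot when a check fails, and append a line to a CSV log so the run leaves a readable trail. This sketch assumes Selenium's TakesScreenshot interface and plain java.nio file APIs; the file names and CSV columns are illustrative.

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.nio.file.StandardOpenOption;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;

    // Small debugging helpers: capture a screenshot and log results to CSV.
    public class TestEvidence {

        // Save the current browser view as <name>.png in the working directory.
        public static void saveScreenshot(WebDriver driver, String name) throws IOException {
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Files.copy(shot.toPath(), Paths.get(name + ".png"), StandardCopyOption.REPLACE_EXISTING);
        }

        // Append a "testName,result" row to a CSV file, creating it if needed.
        public static void logToCsv(Path csvFile, String testName, String result) throws IOException {
            String row = testName + "," + result + System.lineSeparator();
            Files.write(csvFile, row.getBytes(), StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }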
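And finally, moving test data into a database can start small. This is a sketch using the MongoDB Java driver; the database name, collection, field names, and the idea of looking up a test user by role are placeholders for whatever test data a project actually keeps.

    import com.mongodb.MongoClient;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.model.Filters;
    import org.bson.Document;

    // Pull test data (for example, a login account) out of MongoDB
    // instead of hard-coding it in the test classes.
    public class TestDataStore {

        public static Document findUserByRole(String role) {
            MongoClient client = new MongoClient("localhost", 27017);
            try {
                MongoDatabase db = client.getDatabase("testdata");
                MongoCollection<Document> users = db.getCollection("users");
                return users.find(Filters.eq("role", role)).first();
            } finally {
                client.close();
            }
        }
    }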

But after all that studying, the more important things I really learned this year about software testing and test automation are these:

  • Injecting automated checks into an existing manual testing process is difficult, especially when you’re doing it all on your own. Sometimes it feels like it might not be worth doing. But it can work, it can be fulfilling, and it can be very valuable for development teams who truly want to release software early and often.
  • Testers need to be aware of how their customers actually use the applications they test in production, and they must learn how to use that live data to drive the more important testing they do and to prioritize the risks they find in the software.
  • There’s only so far anyone can scale test automation with limited infrastructure.
  • There’s only so much a tester can know about an application (and help their customers) by doing only one kind of testing.
  • Studying programming syntax and concepts, and understanding how they work through coding experience, is necessary for becoming an expert in creating and running automated checks. But learning how to design efficient tests matters too, and a lot of knowing how to do that depends on the application under test and the brand of testing that the tester wants to perform.
  • Exploratory testing still matters. Documented manual test procedures for others to follow do not.
  • The absence of unit tests and continuous integration hurts ‘release early and often’ endeavors.
  • Many of the important questions about software testing have been discussed extensively by software testers in the past, some even years before the job title existed. The answers to the tough questions we’re tackling today might be found in the pages of an old testing book, in recorded talks from past conferences, or in a dated blog entry, so it’s always good to keep doing our research and keep learning.
