Where the Fun and Challenge Is

Ultimately, what makes software testing fun and challenging is that it requires you to interact with people. You have to talk to both the software development team and the application users. You have to understand what they care about and why. You have to ask questions, and the answers to those questions guide what you do when you test. You can test in a silo, but that doesn’t help you learn anything deeper; you can’t thrive if you stick to the things you already know. You grow by taking in the perspectives and beliefs that other people hold and testing them against your own, with the goal of making better judgments about how to test the thing you’re testing and why you’re testing it in the first place. It’s not easy, because you have to care. The challenge is that you have to learn a great deal about many things, regularly survey the landscape of where you are, and often reflect on what you’ve done and what you want to do next. And the journey of overcoming the challenges you set for yourself is where the fun is.

Being Reminded of All the Phases I’ve Had So Far in Writing Automated Checks

I’m currently in the midst of a test code overhaul, a rewriting project of sorts. It started about a week ago, and so far I’ve made considerable progress on what I wanted to achieve with the rewrite: cleaner and more maintainable code, mostly in terms of test data management and test description language. The number of tests running every day in our Jenkins system has grown noticeably, and it has become difficult to add certain tests because of how I structured the test data in the past, which I have not upgraded since. The two possible avenues for running tests, on the UI and HTTP layers, also add a bit of complexity, and it would be nice if I could integrate the two smoothly. It’s an interesting development because I did not plan on doing any rewriting anytime soon, but I guess at the back of my mind I knew it would happen eventually. And so I decided to take a step back from writing more tests and do some cleanup before it gets tougher to change things. I plan to finish everything in about a month or so.
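
To give a flavor of the test data cleanup, here’s a minimal sketch of the direction: stop hardcoding values inside the tests and read them from one place instead. The file names, keys, and values below are hypothetical, not our actual code.

    # data/test_data.yml -- one place to change when accounts or URLs move
    staging:
      base_url: https://staging.example.com
      admin_user:
        email: admin@example.com
        password: not-a-real-password

    # test_data.rb -- loads the YAML once and serves values per environment
    require 'yaml'

    module TestData
      def self.for(environment)
        @data ||= YAML.load_file(File.join(__dir__, 'data', 'test_data.yml'))
        @data.fetch(environment)
      end
    end

    # Usage inside a test: TestData.for('staging')['admin_user']['email']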

At the moment, I’m reminded of the phases I’ve gone through in learning to code and write automated checks over the past few years:

  • Early 2014. It all began with Selenium IDE: giving myself some time to study the basic Selenese commands for writing automated checks and, more importantly, to understand how to properly retrieve the page elements I wanted to manipulate.
  • Mid 2014. Test management in Selenium IDE became difficult as the number of tests grew, hence the decision to switch to Selenium WebDriver. The only programming background I had back then was C++, limited to functions and logical/conditional operators, so I chose Java to work with to lessen the learning curve.
  • Late 2014. I familiarized myself with Git, which hooked me on making daily commits and taught me to appreciate version control. Along the way I learned the concepts of classes and objects.
  • All of 2015 up to early 2016. I was in a trance, writing code daily and pushing myself to create all the automated checks I wanted to run for our apps before every release. Tests ran in the Eclipse IDE using TestNG, and I was happy with what I had, except that those end-to-end tests were really slow. Running everything took overnight, which was okay for my employer but annoying for me personally.
  • Mid 2016. Rewriting the existing tests in Ruby with Cucumber started off as a side project, for fun and to gauge my skill level in programming, when I found Jeff Morgan’s “Cucumber & Cheese” book online. And I did have buckets of fun! The experiment told me that there was still a lot I needed to practice if I wanted to write better code, and also that I could be more productive if I switched programming languages. There’s a bit less code to type in Ruby than in Java, and I liked that, plus all the interesting libraries I could use. I also switched to Sublime Text and used both Jenkins and the command-line interface more extensively.
  • Late 2016. As I was looking for ways to speed up the total execution of our end-to-end tests, which by then took about four hours to complete, I ended up exploring testing apps at the HTTP layer instead of through the UI. That took a lot of studying of how our apps actually behave under the hood: what data is passed around, how images are sent, how to view pages without a browser, how redirections work, among other things. After years of testing apps via the user interface, this was such a refreshing and valuable period, and I wondered why I never knew such a thing existed until then. It isn’t taught extensively to testers, perhaps because it all depends on how the app is structured to run through an API. A minimal sketch of the kind of check I mean follows this list.
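
Here’s that sketch: a login check at the HTTP layer, asserting on the raw response instead of driving a browser. The URL, form fields, and expected redirect are all invented for illustration.

    require 'net/http'
    require 'uri'

    # Post the login form directly, no browser involved (hypothetical endpoint).
    uri = URI('https://staging.example.com/login')
    response = Net::HTTP.post_form(uri, 'email' => 'tester@example.com',
                                        'password' => 'secret')

    # A successful login should answer with a redirect to the dashboard.
    raise 'expected a redirect after login' unless response.is_a?(Net::HTTPRedirection)
    raise 'unexpected redirect target' unless response['Location'].to_s.include?('/dashboard')

A check like this finishes in milliseconds, which is exactly the kind of speedup that drew me to this layer in the first place.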

And these phases bring me to now, where there’s a healthy dose of API- and UI-layer tests checking all the major app features. It’s all good, just several pieces needing a cleanup, a little parallelization, a better test description language, and great documentation. It’s all good, because the lessons in both programming and testing keep piling up. The two practices differ in mindset, but I think they complement each other, and there’s no reason anyone can’t do both.

When It’s a Burden to Update Test (or System) Code

If it feels difficult, exhausting, and inconvenient to update existing test (or system) code, it may be that you don’t completely understand what the code does, or it may be that the code really is badly written. Either way, the situation presents a learning opportunity: the former to fully grasp the business rules behind the particular code you want to change, the latter to review the purpose of the code and rewrite it in a way that’s helpful to the team.

Complaining doesn’t help; it only prolongs the agony. Care instead: discuss with the team whether the change is actually necessary, and, if you think it is, leave the code better than you found it.

Whenever I Can’t Replicate A Scenario

Whenever there’s an issue in production that I can’t seem to replicate quickly in our development environment, the problem reminds me that I have not tested everything in our software, that there are gaps in the model I’m currently using to test our application. The mistakes I make in diagnosing a problem help me improve, but they also tell me that I have not explored our systems enough, that maybe I have forgotten something that should help me test, or that perhaps there are modules that have changed since I last visited them.

More importantly, such scenarios prompt me to remember that I literally can’t know or test everything, that I need help in these cases, and that this is precisely why it takes a whole team extending continuous effort to bring quality up to a point where customers love our app. That means everyone’s testing skills need to stack up: salespeople, testers, programmers, product owners, everyone.

How To See The Thing You Test

Several days ago I came across a YouTube video by Alphonso Dunn giving tips on the different ways of seeing the thing you’re drawing. As I watched, I couldn’t help relating what he was talking about to testing software. I’ll try to explain what I mean.

His first tip was that I should learn to see through the object I’m drawing (or testing) as if it were transparent. In drawing a box, some of its flaps may be hidden from view, and the artist remembers to take that into account since it affects the whole form of the box, including light and shadow. In testing, I felt this might mean seeing through the UI of the thing under test, remembering that applications often have hidden functionality. I may just be testing what the user interface can do, but understanding how the application behaves at the HTTP layer, or how JavaScript and CSS affect its behavior, helps me diagnose what risks the surprising things I find could pose, and where a surprise really originates.

Then he tells me that I should practice seeing the thing I’m drawing (or testing) in terms of its whole shape. In the box example, that could mean outlining only the overall shape of the box, in order to practice thinking about proportions or completeness first instead of going deep into the details too early. In testing, it is like asking what problem a certain app feature actually solves for a certain persona, or whether the thing being tested is more or less complete with regard to what it is supposed to do. Is the functionality small or huge compared to the entire application, and what does that mean for the testing that needs to be done if it’s tiny or exceedingly big? Can we test enough within the deadline, and if not, what sort of testing should be prioritized first or last?

Next, he explains that I should also see the object I’m drawing (or testing) as an explosion of flat, broken pieces. This means practicing to see the form as a collection of simple two-dimensional shapes, teaching myself not to get overwhelmed by the complexity of the whole. In software, I thought of this as focusing on one small functionality at a time, learning the details of how it works without thinking so much about how it serves the application as a whole. If I pass a malicious parameter to a function, then maybe I can make the app do things it isn’t supposed to do. This might also mean considering what kind of testing is most appropriate for a particular piece. Should I care about some unit testing here? Perhaps security testing? Or possibly performance tests?
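
As a tiny, made-up illustration of poking at one piece at a time: a unit-level check can throw an unexpected and a malicious input at a single function and nothing else. The function here is invented for the example.

    require 'minitest/autorun'

    # A hypothetical input sanitizer, standing in for any one small piece of an app.
    def sanitize_username(input)
      input.to_s.strip.gsub(/[^a-zA-Z0-9_]/, '')
    end

    class SanitizeUsernameTest < Minitest::Test
      def test_strips_markup_from_a_malicious_parameter
        assert_equal 'scriptalertxssscript',
                     sanitize_username('<script>alert("xss")</script>')
      end

      def test_handles_nil_without_raising
        assert_equal '', sanitize_username(nil)
      end
    end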

Lastly, he tells me that I should also train myself to see the thing I’m drawing (or testing) as a simple 3D volume. Is the object a cube, a sphere, a cone, or a cylinder? In drawing, this is supposed to help us find where the planes are, which in turn helps in placing shadows and capturing the object on paper close to the actual thing. Now, I don’t know if there are 3D volumes in software applications, but I feel that this is like thinking about the app as a flowchart of processes and features, looking at the places where various integrations happen and why they matter. If I can’t make out the flow, if I can’t model it properly, is that a good thing or not? What happens to the testing I have to do if the thing is complex?

The reason drawing is sometimes difficult, Alphonso concludes in the video, is that the artist has to switch quickly between these ways of seeing the object as needed. That’s the same reason it’s engaging. I’d like to think that testing is similar, demanding and interesting in the same way, because it forces the tester to think about the app under test from many perspectives.

An Experiment on Executable Specifications

What: Create executable specifications for an ongoing project sprint. Executable specifications are written examples of a business need that can be run anytime and act as a source of truth for how the application behaves. They are living documentation of what our software does, and they help us focus on solving the unusual and noteworthy problems instead of wasting time on the ones we know we shouldn’t be worrying about.
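
To make that concrete, here’s what one such specification could look like in Cucumber, the tool I happen to use. The feature and its wording are hypothetical:

    # features/password_reset.feature -- a hypothetical executable specification
    Feature: Password reset
      Scenario: Requesting a reset link with a registered email
        Given "maria@example.com" belongs to a registered user
        When she requests a password reset for "maria@example.com"
        Then a password reset link is emailed to "maria@example.com"

Each Given/When/Then line maps to a Ruby step definition that drives the app at the UI or HTTP layer, so the document and the check are one and the same artifact, runnable on every build.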

How:

  1. Join a software development team for the duration of one sprint.
  2. Discuss the project tasks for that sprint with everyone.
  3. Ask for examples of the requirements, and create executable specifications for those tasks as the application is being written.
  4. Refine the specifications when necessary.
  5. Continuously update the specifications until they pass.
  6. Keep the specifications running on a preferred schedule.

Why: To see if writing executable specifications alongside development is feasible within the duration of a sprint.

Limitations: The tester writes the executable specifications; the programmers work as they normally do.

Experiment Realizations:

  • Writing executable specifications can be done during actual app development, provided the tester is experienced with the tools for implementing them and understands well why such specifications are needed.
  • It is of course more beneficial if everyone in the team learns how executable specifications work, writes and runs the specifications, and understands why they are worth implementing.
  • It will take quite a while before executable specifications become the norm in the existing software development process, if ever. This is a function of whether everyone believes that such specifications are actually useful in practice, and then of building the skills and habit of including them in the team’s definition of done.

A Mismatch Between Expectations and Practices

We want performant, scalable, quality software. We wish to build and test applications that our customers profess their love for and share with their friends.

And yet:

  • We have little to no unit, performance, API, and integration tests
  • The organization does not closely monitor feature usage statistics
  • Some of us do not exactly feel the pains our customers face
  • We have no notifications for outdated dependencies, messy migration scripts, and other failures
  • Some are not curious about understanding how the apps they test and own actually work
  • We have not implemented continuous build tools
  • It is a pain to set up local versions of our applications, even for our own programmers
  • We do not write checks alongside development; we lack executable specifications
  • Some still think that testing and development happen in silos
  • It is difficult to get support for useful infrastructure, as well as recognition for good work
  • Many are comfortable with the status quo

It seems that our expectations are usually mismatched with our practices. We’re frequently eager to show off our projects but are in many instances less diligent about taking measures to bake quality in, and so we fail more often than not. What we need are short feedback loops, continuous monitoring, and improved developer productivity, ownership, and happiness. The difficult thing is, it all starts with better communication and culture.