How To See The Thing You Test

Several days ago I came across a YouTube video by Alphonso Dunn giving tips about the different ways of seeing the thing you’re drawing. As I watched, I couldn’t help relating what he was talking about to testing software. I’ll try to explain what I mean.

His first tip was that I should learn to see through the object I’m drawing (or testing) as if it were transparent. When drawing a box, some of its flaps may be hidden from view, and the artist takes them into account anyway because they affect the whole form of the box, including light and shadow. In testing, I felt this might mean seeing through the UI of the thing under test, remembering that applications often have hidden functionality. I may just be testing what the user interface can do, but understanding how the application behaves at the HTTP layer, or how JavaScript and CSS affect it, helps me diagnose what risks the surprising things I find could pose, including where the surprise really occurs.
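To make that concrete, here is a minimal sketch of looking beneath the UI in Ruby. The URL and endpoint are hypothetical, purely for illustration:

    require 'net/http'
    require 'json'
    require 'uri'

    # Instead of trusting only what the page renders, fetch the same
    # resource the UI fetches and inspect the raw HTTP response.
    uri = URI('https://staging.example.com/api/bookings/42')
    response = Net::HTTP.get_response(uri)

    puts response.code  # "200"? "500"? The page may look fine either way.

    # If the body is JSON, its fields tell us whether a surprise comes
    # from the server's data or from how JavaScript and CSS present it.
    puts JSON.parse(response.body) if response['Content-Type'].to_s.include?('json')

Seeing the raw status code and payload makes it much easier to say where a bug actually lives.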

Then he tells me that I should practice seeing the thing I’m drawing (or testing) in terms of its whole shape. In the box example, that could mean outlining only the overall shape of the box, in order to think about proportions and completeness first instead of diving into details too early. In testing, it is like asking what problem a certain feature actually solves for a certain persona, or whether the thing being tested is more or less complete with regard to what it is supposed to do. Is the functionality small or huge compared to the entire application, and what does that mean for the testing that needs to be done? Can we test enough within the deadline, and if not, what sort of testing should be prioritized?

Next, he explains that I should also see the object I’m drawing (or testing) as an explosion of flat, broken pieces. This means practicing seeing the form I’m trying to draw as a collection of simple two-dimensional shapes, teaching myself not to get overwhelmed by the complexity of the whole. In software, I thought of this as focusing on one small functionality at a time, learning the details of how it works without thinking too much about how it serves the application as a whole. If I pass a malicious parameter to a function, then maybe I can make the app do things it isn’t supposed to do. This might also mean considering what kind of testing is most appropriate for a particular piece. Should I do some unit testing here? Perhaps security testing? Or possibly performance tests?
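As a toy illustration of probing one piece in isolation, here is a minimal sketch; search_users is a made-up stand-in for a single small function under test, not real application code:

    require 'minitest/autorun'

    # A made-up stand-in for one small piece of an application:
    # it should accept only simple alphanumeric usernames.
    def search_users(name)
      return [] unless name =~ /\A\w+\z/
      ["result for #{name}"]
    end

    class MaliciousParameterTest < Minitest::Test
      def test_rejects_sql_injection_style_payload
        assert_equal [], search_users("'; DROP TABLE users; --")
      end

      def test_accepts_ordinary_input
        assert_equal ['result for alice'], search_users('alice')
      end
    end

The rest of the application does not matter while this check runs, which is exactly the point.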

Lastly, he tells me that I should also train myself to see the thing I’m drawing (or testing) as a simple 3D volume. Is the object a cube, a sphere, a cone, or a cylinder? In drawing, this is supposed to help us see where the planes are, which in turn helps us place shadows and capture the object on paper close to the actual thing. Now, I don’t know if there are 3D volumes in software applications, but I feel this is like thinking about the app as a flowchart of processes and features, looking at the places where the various integrations happen and why they matter. If I can’t make out the flow, if I can’t model it properly, is that a good thing or not? What happens to the testing I have to do if the thing is complex?

The reason drawing is sometimes difficult, Alphonso concludes in the video, is that the artist has to switch quickly between these modes of seeing as needed. That’s also what makes it engaging. I’d like to think testing is similar – demanding and interesting in the same way, because it forces the tester to look at the app under test from many perspectives.

An Experiment on Executable Specifications

What: Create executable specifications for an ongoing project sprint. Executable specifications are written examples of a business need that can be run at any time and act as a source of truth for how the application behaves. They are living documentation of what our software does, and they help us focus on solving the unusual and noteworthy problems instead of wasting time on the ones we shouldn’t have to worry about.

How:

  1. Join a software development team for one sprint duration.
  2. Discuss project tasks for that sprint with everyone.
  3. Ask for requirement examples and create executable specifications for those tasks as the application is being written (a sketch of one such specification follows this list).
  4. Refine the specifications when necessary.
  5. Continuously update the specifications until they pass.
  6. Keep the specifications running on a preferred schedule.
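For step 3, here is a minimal sketch of what one such specification might look like, assuming the Cucumber tooling mentioned elsewhere in these notes; the room-rate feature, file names, numbers, and discount rule are all made up for illustration:

    # features/room_rates.feature
    Feature: Room rates
      Scenario: A discount is applied to long stays
        Given a room that costs 100 per night
        When a guest books it for 7 nights
        Then the total price is 630

    # features/step_definitions/room_rate_steps.rb
    Given(/^a room that costs (\d+) per night$/) do |rate|
      @rate = rate.to_i
    end

    When(/^a guest books it for (\d+) nights$/) do |nights|
      # Stand-in for a call into the real application; in this toy
      # example a 10% discount applies to stays of 7 nights or more.
      nights = nights.to_i
      total  = @rate * nights
      @total = nights >= 7 ? (total * 0.9).to_i : total
    end

    Then(/^the total price is (\d+)$/) do |expected|
      expect(@total).to eq(expected.to_i)
    end

Running `cucumber` turns the example into a failing check until the behavior exists, and afterwards into the living documentation described above.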

Why: To see whether writing executable specifications alongside development is feasible within a sprint.

Limitations: The tester writes the executable specifications; the programmers work as they normally do.

Experiment Realizations:

  • Writing executable specifications can be done during actual app development, provided the tester is experienced with the tools for implementing them and understands well why such specifications are needed.
  • It is of course more beneficial if everyone on the team learns how executable specifications work, writes and runs them, and understands why they are worth implementing.
  • It will take quite a while before executable specifications become the norm in the existing software development process, if they ever do. That depends on whether everyone believes such specifications are actually useful in practice, and then on building the skills and the habit of including them in the team’s definition of done.

A Mismatch Between Expectations and Practices

We want performant, scalable, quality software. We wish to build and test applications that our customers love and share with their friends.

And yet:

  • We have little to no unit, performance, API, or integration testing
  • The organization does not closely monitor feature usage statistics
  • Some of us do not exactly feel the pains our customers face
  • We have no notifications for outdated dependencies, messy migration scripts, and other failures
  • Some are not curious about understanding how the apps they test and own actually work
  • We have not implemented continuous build tools
  • It is a pain to set up local versions of our applications, even for our own programmers
  • We do not write checks alongside development; we lack executable specifications
  • Some still think that testing and development happen in silos
  • It is difficult to get support for useful infrastructure, as well as recognition for good work
  • Many are comfortable with the status quo

It seems our expectations are usually mismatched with our practices. We are frequently eager to show off our projects but often less diligent about baking quality in, and so we fail more often than not. What we need are short feedback loops, continuous monitoring, and better developer productivity, ownership, and happiness. The difficult part is that it all starts with better communication and culture.

Contemplating In-Office Knowledge-Sharing Sessions

In the past I tended to prepare presentation slides whenever I wanted to share something with tester colleagues at work, often clippings of interesting articles which I felt could be useful for our knowledge-sharing sessions. It worked, but after some time the sessions felt monotonous and tedious, probably because I always had to explain in detail the ideas and lessons behind those clippings. I tried to make my presentations interesting, but hearing the same voice over and over gets old after a while.

These days I’m sharing videos instead. The videos are usually recorded conference talks or tutorials I have watched and learned from in recent years, and I have taken care to list the ones that are insightful, fun, and relatively short. It’s like inviting officemates to watch a short movie for free. The big change: I don’t spend a lot of time talking during the knowledge-sharing anymore. There are of course still bits of discussion before, after, or during the showing of a video, whenever necessary, to explain why I took a liking to the talk or to ask them what they understood. We take turns telling stories about our experiences related to the ideas shared by the speaker, which is nice. And compared to the PowerPoint presentations I did before, because the speaker is someone from outside, the ideas shared in the talks and tutorials feel fresher and more real than when I’m merely showing quoted paragraphs from blogs. That makes it easier for my colleagues to get curious and actually learn something, which is exactly the point of the activity.


Thank You, 2016!

Some thank-yous to start 2017 right:

Firstly, thank you to Mom and Dad, for unceasingly being the loving parents that they are. And to Aris and his growing family, who are a constant wonder at home.

Thank you to my colleagues over at DirectWithHotels, for the magical experience of being part of the software development team. To Onchie, for his ideas and programming expertise, which helped shape the test suites I write into what they are capable of today. To Jefferson, Cedrick, Jonan, and Ryan, for the remarkable teamwork that habitually motivates. To my team of junior testers, Andie, Russel, Elaine, Van, and Ron, for keeping up with the testing work we face day in, day out. And to my supervisor, Bobby, for his trust in my experiments and work ethic.

Thank you to the software testers around the world who keep sharing their adventures with the testing community, for showing me how wonderful the work is and continues to be. I gained a lot from Jeff Morgan, Alan Richardson, and Jeff Nyman’s pieces last year, and I’m sure there’s more to learn because our industry is so diverse. And with that, I’ll do my part to help.

Thank you to Junnell and Nikka, for always being a reservoir of inspiration, for unfailingly being such dear friends.

Thank you to Kleng, for her love, affection, and lessons learned.

And lastly, thank you, dear readers, for the gift of your precious time and attention. I hope to keep my writing rolling on schedule this year, and I hope it will be of some value to you.

Testing Goals for 2017

2016 has been a surprising year. There have been considerable changes at work – people came and went, several small process improvements took hold, and automation became more transparent, including a dedicated testing server – many of them for the better. There’s still some work to do, though, and it will spill over into the coming year; most likely I’ll keep adding valuable tests to the team’s daily running suite on the cloud.

Aside from that, I hope to share what I’ve learned this year with the testing team, especially the bit about REST calls, because that’s something they can actually use for testing any web application.

And I plan to read more next year. Or at least the following books: Specification by Example, Pride and Paradev, The Cartoon Tester, The Psychology of Software Testing, There’s Always a Duck, Rails 4 Test Prescriptions, Creating Great Teams, Explore It!, Willful Blindness, Tools of Titans, and The 4-Hour Workweek. Maybe I’ll slip some fiction into the reading list too.

There are some coding experiments I’d like to try as well:

  • building a Ruby on Rails application
  • building a Geb + Groovy + Spock automation framework
  • building a Java + Serenity automation framework

Overall I don’t think there will be many changes to the things I currently do, only continuous learning and refinement of my existing workflow. But I could be wrong. We’ll see. 🙂

The Lessons I Learned This Year About Software Testing

Last year I told myself I would take a step back from automation because I had already built tests with a working knowledge of Java and TestNG. Apparently, with the way things went this year, I was not satisfied after all with what I had. I went on to study Ruby with Watir and Cucumber for about half of 2016, and after that I learned how to exercise web applications without using browsers. For the second year in a row I feel I overdid the studying, and maybe I’m getting too comfortable writing test code. In any case, programming has become a genuinely useful tool in my workflow – performing checks automatically that would otherwise have eaten up a chunk of my time, and freeing me to test other things.

Some other realizations during the year:

  • Rewriting test code in Ruby with Watir, Cucumber, and other useful Ruby gems showed how messy the checks I wrote last year in Java were. It helps that Ruby has less boilerplate syntax, but my earlier mistakes were more about refactoring and design than about the programming language itself. I still need more deliberate practice at that.
  • I’ve found out that I actually do security testing in my normal day-to-day exploratory work, but I’m not especially good at it because nobody has ever taught me how to perform proper web security testing. Getting comfortable reading and manipulating API calls seems to be a step toward becoming better at it.
  • Developing software with agility does not necessarily mean implementing Scrum or Kanban. With good communication and teamwork, teams can forego many of the rituals and focus on the actual work.
  • Writing automated checks with healthy test description language is an important skill.
  • People only learn to solve problems when they feel those problems are worth solving. And a problem worth solving for me is not necessarily worth solving for other people, so I should never stop switching perspectives.
  • We can leverage REST calls in web automation to make checks run a lot faster and more stably (a sketch follows this list).
  • Exploratory testing is a prerequisite to automation.
  • Building a great team is difficult. There are a lot of factors to consider – team composition, empathy, attitude, member skill set, strengths and weaknesses, to name a few. It takes some time too.
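On that REST point, here is a minimal sketch of the idea using the Watir gem mentioned above; the login endpoint, credentials, cookie name, and URLs are hypothetical. The repetitive setup happens over plain HTTP, and the browser checks only the page that matters:

    require 'net/http'
    require 'uri'
    require 'watir'

    # 1. Log in through a plain HTTP call instead of driving the UI.
    uri = URI('https://staging.example.com/api/login')
    response = Net::HTTP.post_form(uri, 'email'    => 'tester@example.com',
                                        'password' => 'secret')
    cookie = response['Set-Cookie'][/session=[^;]+/]
    name, value = cookie.split('=', 2)

    # 2. Hand the session to the browser and go straight to the page.
    browser = Watir::Browser.new :chrome
    browser.goto 'https://staging.example.com'   # must be on the domain first
    browser.cookies.add name, value
    browser.goto 'https://staging.example.com/dashboard'

    puts browser.h1.text  # the only thing this check verifies in the UI
    browser.close

Skipping the login screens this way cuts both runtime and the flakiness that comes with the extra page loads.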