How To See The Thing You Test

Several days ago I came across a YouTube video by Alphonso Dunn giving tips about the different ways of seeing the thing you’re drawing. As I was watching, I couldn’t help relating what he was talking about to testing software. I’ll try to explain what I mean.

His first tip was that I should learn to see through the object I’m drawing (or testing) as if it were transparent. In drawing a box, some of its flaps may be hidden from view, and the artist remembers to take that into account since it may affect the whole form of the box, including light and shadow. In testing, I felt that this might mean seeing through the UI of the thing under test, remembering that applications often have hidden functionality. I may just be testing what the user interface can do, but understanding how the application runs under the HTTP layer, or how JavaScript and CSS affect its behavior, helps me diagnose the risks posed by the surprising things I find, including where the surprise really occurs.

Then, he tells me that I should practice seeing the thing I’m drawing (or testing) in terms of its whole shape. In the box example, that could mean only outlining the overall shape of the box in order to practice thinking about proportions or completeness first, instead of going deep into the details too early. In testing, it is like asking what problem a certain app feature actually solves for a certain persona, or asking whether the thing being tested is more or less complete with regard to what it is supposed to do. Is the functionality small or huge compared to the entire application, and what does that mean for the testing that needs to be done? Can we test enough within the deadline, and if not, what sort of testing should be prioritized?

Next, he explains that I should see the object I’m drawing (or testing) as an explosion of flat, broken pieces too. This means practicing seeing the form I’m trying to draw as a collection of simple two-dimensional shapes, teaching myself not to get overwhelmed by the complexity of the whole. In software, I thought of this as focusing on one small functionality at a time, learning the details of how it works without thinking so much about how it helps the application as a whole. If I pass a malicious parameter to a function, then maybe I can make the app do things it isn’t supposed to do. This might also mean considering what kind of testing is most appropriate for a particular piece. Should I do some unit testing here? Perhaps security testing? Or possibly performance tests?
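
For instance, here’s a minimal sketch, in Python, of what exercising one small piece with hostile input might look like; the is_valid_username function and the payloads are hypothetical stand-ins, not anything from a real application:

    # A sketch of focusing on one small piece at a time: feed a
    # hypothetical input validator some malicious strings and check
    # that it rejects them all.
    import re

    import pytest

    def is_valid_username(name: str) -> bool:
        """Hypothetical rule: 3-20 letters, digits, or underscores."""
        return re.fullmatch(r"\w{3,20}", name) is not None

    @pytest.mark.parametrize("hostile_input", [
        "'; DROP TABLE users; --",     # SQL injection attempt
        "<script>alert(1)</script>",   # script injection attempt
        "../../etc/passwd",            # path traversal attempt
    ])
    def test_validator_rejects_hostile_input(hostile_input):
        assert not is_valid_username(hostile_input)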

Lastly, he tells me that I should also train myself to see the thing I’m drawing (or testing) as a simple 3D volume. Is the object a cube, a sphere, a cone, or a cylinder? In drawing, this is supposed to help us see where the planes are, which in turn helps us place shadows and capture the object on paper close to the actual thing. Now, I don’t know if there are 3D volumes in software applications, but I feel that this is like thinking about the app as a flowchart of processes and features, looking at the places where various integrations happen and why they matter. If I can’t make out the flow, if I can’t model it properly, is that a good sign or a bad one? What happens to the testing I have to do if the thing is complex?

The reason drawing is sometimes difficult, Alphonso concludes in the video, is that the artist has to switch quickly between these modes of seeing as needed. That’s also what makes it engaging. I’d like to think that testing is similar, demanding and interesting in the same way, because it forces the tester to look at the app under test from many perspectives.

An Experiment on Executable Specifications

What: Create executable specifications for an ongoing project sprint. Executable specifications are written examples of a business need that can be run anytime and act as a source of truth for how applications behave. They are living documentation of what our software does, and they help us focus on solving the unusual and noteworthy problems instead of wasting time on the ones we know we shouldn’t be worrying about.

How:

  1. Join a software development team for the duration of one sprint.
  2. Discuss project tasks for that sprint with everyone.
  3. Ask about requirements examples and create executable specifications for those tasks as the application is being written (a minimal sketch of what one might look like follows this list).
  4. Refine the specifications when necessary.
  5. Continuously update the specifications until they pass.
  6. Keep the specifications running on a preferred schedule.
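
To picture what step 3 could produce, here’s a minimal sketch of an executable specification written as a plain pytest check; the bulk-discount rule and the discount_for function are hypothetical examples, not actual project requirements:

    # A sketch of an executable specification: the examples below both
    # check the code and document a (hypothetical) bulk-discount rule.
    import pytest

    def discount_for(quantity: int) -> float:
        """Hypothetical stand-in for the production code being specified."""
        return 0.10 if quantity >= 100 else 0.0

    # The business rule, stated as examples that can run on every build:
    # orders of 100 or more units receive a 10% discount.
    @pytest.mark.parametrize("quantity, expected_discount", [
        (99, 0.0),    # just below the threshold: no discount
        (100, 0.10),  # at the threshold: discount applies
        (250, 0.10),  # well above the threshold: same discount
    ])
    def test_bulk_order_discount(quantity, expected_discount):
        assert discount_for(quantity) == expected_discount

Kept green and run on a schedule (step 6), a growing suite of examples like this becomes the living documentation described above.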

Why: To see if writing executable specifications alongside development is feasible during the duration of a sprint.

Limitations: The tester writes the executable specifications; the programmers work as usual.

Experiment Realizations:

  • Writing executable specifications can be done during actual app development, provided the tester is experienced with the tools for implementing them and understands well why such specifications are needed.
  • It is of course more beneficial if everyone on the team learns how executable specifications work, writes and runs the specifications, and understands why they are worth implementing.
  • It will take quite a while before executable specifications become the norm in the existing software development process, if ever. This depends on whether everyone believes that such specifications are actually useful in practice, and then on building the skills and habit of including them in the team’s definition of done.

Lessons from Gojko Adzic’s “Specification By Example”

Automated checking is not a new concept. Gojko Adzic, however, provides a way to integrate it better into our software development processes. In his book “Specification by Example”, he talks about executable specifications that double as living documentation. These are examples that continuously exercise business rules, help teams collaborate, and, along with the software code, serve as the source of truth for understanding how our applications work. He builds a strong case for the benefits of specification by example by presenting case studies and testimonials from teams who have actually used it in their projects, and I think it is a great way of moving forward, of baking quality in.

Some favorite takeaways from the book:

  • Tests are specifications; specifications are tests.
  • “If I cannot have the documentation in an automated fashion, I don’t trust it. It’s not exercised.” -Tim Andersen
  • Beginners think that there is no documentation in agile, which is not true. It’s about choosing the types of documentation that are useful. There is still documentation in an agile process, and that’s not a two-feet-high pile of paper, but something lighter, bound to the real code. When you ask, “does your system have this feature?” you don’t have a Word document that claims that something is done; you have something executable that proves that the system really does what you want. That’s real documentation.
  • Fred Brooks quote: In The Mythical Man-Month he wrote, “The hardest single part of building a software system is deciding precisely what to build.” Albert Einstein himself said that “the formulation of a problem is often more essential than its solution.”
  • We don’t really want to bother with estimating stories. If you start estimating stories, with Fibonacci numbers for example, you soon realize that anything eight or higher is too big to deliver in an iteration, so we’ll make it one, two, three, and five. Then you go to the next level and say five is really big. Now that everything is one, two, and three, they’re now really the same thing. We can just break that down into stories of that size and forget about that part of estimating, and then just measure the cycle time to when it is actually delivered.
  • Sometimes people still struggle with explaining what the value of a given feature would be (even when asked for an example). As a further step, I ask them to give an example and say what they would need to do differently (work around) if the system did not provide this feature. Usually this helps them express the value of the feature.
  • QA doesn’t write [acceptance] tests for developers; they work together. The QA person owns the specification, which is expressed through the test plan, and continues to own that until we ship the feature. Developers write the feature files [specifications] with the QA involved to advise what should be covered. QA finds the holes in the feature files, points out things that are not covered, and also produces test scripts for manual testing.
  • If we don’t have enough information to design good test cases, we definitely don’t have enough information to build the system.
  • Postponing automation is just a local optimization. You might get through the stories quicker from the initial development perspective, but they’ll come back for fixing down the road. David Evans often illustrates this with an analogy of a city bus: A bus can go a lot faster if it doesn’t have to stop to pick up passengers, but it isn’t really doing its job then.
  • Workflow and session rules can often be checked only against the user interface layer. But that doesn’t mean that the only option to automate those checks is to launch a browser. Instead of automating the specifications through a browser, several teams developing web applications saved a lot of time and effort going right below the skin of the application—to the HTTP layer. (A sketch of this approach follows this list.)
  • Automating executable specifications forces developers to experience what it’s like to use their own system, because they have to use the interfaces designed for clients. If executable specifications are hard to automate, this means that the client APIs aren’t easy to use, which means it’s time to start simplifying the APIs.
  • Automation itself isn’t a goal. It’s a tool to exercise the business processes.
  • Effective delivery with short iterations or in constant flow requires removing as many expected obstacles as possible so that unexpected issues can be addressed. Adam Geras puts this more eloquently: “Quality is about being prepared for the usual so you have time to tackle the unusual.” Living documentation simply makes common problems go away.
  • Find the most annoying thing and fix it, then something else will pop up, and after that something else will pop up. Eventually, if you keep doing this, you will create a stable system that will be really useful.
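
Since the HTTP-layer point is one of my favorite takeaways, here’s a hedged sketch of what such a below-the-skin check might look like in Python with the requests library; the base URL, endpoints, and credentials are all assumptions for illustration:

    # A sketch of checking workflow/session rules below the browser,
    # straight at the HTTP layer. Endpoints and payloads are hypothetical.
    import requests

    BASE_URL = "https://staging.example.com"  # assumed test environment

    def test_login_grants_access_to_dashboard():
        # A Session keeps cookies across requests, much like a browser.
        session = requests.Session()
        login = session.post(BASE_URL + "/login",
                             data={"username": "tester", "password": "secret"})
        assert login.status_code == 200

        # The session rule under check: once authenticated, the user
        # can reach the dashboard, no rendered UI required.
        dashboard = session.get(BASE_URL + "/dashboard")
        assert dashboard.status_code == 200

    def test_anonymous_user_is_redirected_to_login():
        # Without logging in, the same page should redirect us away.
        response = requests.get(BASE_URL + "/dashboard", allow_redirects=False)
        assert response.status_code in (301, 302, 303, 307)

Going below the skin this way tends to be faster and less flaky than driving a browser, though it cannot replace checks of the rendered UI itself.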

A Mismatch Between Expectations and Practices

We want performant, scalable, quality software. We wish to build and test applications that our customers profess their love for and share with their friends.

And yet:

  • We have few to no unit, performance, API, and integration tests
  • The organization does not closely monitor feature usage statistics
  • Some of us do not exactly feel the pains our customers face
  • We don’t have notifications for outdated dependencies, messy migration scripts, or other failures
  • Some are not curious about understanding how the apps they test and own actually work
  • We have not implemented continuous build tools
  • It is a pain to set up local versions of our applications, even for our own programmers
  • We do not write checks alongside development; we lack executable specifications
  • Some still think that testing and development happen in silos
  • It is difficult to get support for useful infrastructure, as well as recognition for good work
  • Many are comfortable with the status quo

It seems that our expectations are usually mismatched with our practices. We’re frequently eager to show off our projects but are in many instances less diligent about baking quality in, and therefore we fail more often than not. What we need are short feedback loops, continuous monitoring, and improved developer productivity, ownership, and happiness. The difficult thing is, it all starts with better communication and culture.

Contemplating In-Office Knowledge-Sharing Sessions

In the past I tended to prepare presentation slides whenever I wanted to share something with tester colleagues at work, often clippings of interesting articles which I felt could be useful for our knowledge-sharing sessions. It worked, but after some time the sessions felt monotonous and tedious, probably because I always had to explain in detail the ideas and lessons behind those clippings. I tried to make my presentations interesting, but after a while hearing the same voice over and over can get old.

These days I’m sharing videos instead. The videos are usually recorded conference talks or tutorials I have watched and learned from in recent years, and I have taken care to list the ones that are insightful, fun, and relatively short. It’s like I’m inviting officemates to watch a short movie for free. The big change: I don’t take a lot of time talking during the knowledge-sharing sessions anymore. There are of course still bits of discussion before, after, or during the showing of a video, whenever necessary, to explain why I have taken a liking to the talk or to ask them what they understood. We take turns telling stories about our experiences related to the ideas shared by the speaker, which is nice. And compared to the PowerPoint presentations I gave before, because the speaker is someone from outside, the ideas shared in the talks and tutorials feel more fresh and real than when I’m merely showing quoted paragraphs from blogs. That makes it easier for my colleagues to get curious and actually learn something, which is exactly the point of the activity.


Notes from Alister Scott’s “Pride and Paradev: A Collection of Agile Software Testing Contradictions”

I stumbled upon Alister Scott’s WatirMelon blog some years back, probably while looking for an answer to a particular question about automation, and found it to be a site worth following. There he talks about flaky tests, raising bugs you don’t know how to reproduce, junior QA professional development, the craziest bug he’s ever seen, writing code, and the classic Minesweeper game. He was part of the Watir project in the past, but is now an excellence wrangler over at Automattic (the company behind WordPress.com). He has also written an intriguing book, titled “Pride and Paradev”, which talks about several of the contradictions we have in the field of software testing. In a nutshell, it explains why there are no best practices, only practices that work well under a certain context.

Here are a number of takeaways from the book:

  • A paradev is anyone on a software team that doesn’t just do programming.
  • Agile software development is all about delivering business value sooner. That’s why we work in short iterations, seek regular business feedback, are accountable for our work and change course before it’s too hard.
  • Agile software development is all about breaking things down.
  • Agile software development is all about communication and flexibility. You must be extremely flexible to work well on an agile team. You can’t be hung up about your role’s title. Constantly delivering business value means doing what is needed, and a team of people with diverse skills thrives as they constantly adapt to get things done. Most importantly, flexibility means variety, which is fun!
  • Delivering software everyday is easy. Delivering working software everyday is hard. The only way an agile team can deliver working software daily is to have a solid suite of automated tests that tells us it’s still working. The only way to have reliable, up-to-date automated tests is to develop them alongside your software application and run them against every build.
  • You’re testing software day in and day out, so it makes sense to have an idea about the internals of how that software works. That requires a deep technical understanding of the application. The better your understanding of the application is, the better the bugs you raise will be.
  • Hiring testers with technical skills over having a testing mindset is a common mistake. A tester who primarily spends his/her time writing automated tests will spend more time getting his/her own code working instead of testing the functionality that your customers will use.
  • What technical skills a tester lacks can be made up for with intelligence and curiosity. Even if a tester has no deep underlying knowledge of a system, they can still be very effective at finding bugs through skilled exploratory and story testing. Often non-technical testers have better shoshin, a lack of preconceptions, when testing a system. A technical tester may take technical limitations into consideration, but a non-technical one can be better at questioning why things are the way they are and at rejecting technical complacency. Often non-technical testers will have a better understanding of the subject matter and be able to communicate with business representatives more effectively about issues.
  • You can be very effective as a non-technical tester, but it’s harder work and you’ll need to develop strong collaboration skills with the development team to provide support and guidance for more technical tasks such as automated testing and test data discovery or creation.
  • Whilst you may think you determine the quality of the system, it’s actually the development team as a whole that does. Programmers are the ones who write the good or poor quality code. Whilst you can provide information and suggestions about problems, the business can and should overrule you; it’s their product for their business that you’re building; you can’t always get what you consider to be important, as business decisions often trump technical ones.
  • A tester should never be measured on how many bugs they have raised. Doing so encourages testers to game the system by raising insignificant bugs and splitting bugs, which is a waste of everyone’s time, and it further widens the tester vs. programmer divide. Once a tester realizes their job isn’t to record bugs but instead to deliver bug-free stories, they will be a lot more comfortable not raising and tracking bugs. The only true measurement of the quality of testing performed is bugs missed, which aren’t recorded anyway.
  • Everything in life is contextual. What is okay in one context, makes no sense in another. I can swear to my mates, but never my Mum. Realizing the value of context will get you a long way.
  • Probably the best thing I have ever learned in life is that no matter what life throws at you, no matter what people do to you or how they treat you, the only thing you can truly control is your response.

About Selenium Conference 2016

I had time over the holidays to binge-watch last year’s Selenium Conference talks, which were as awesome as, if not more so than, the talks from the 2015 conference. Automation in testing has really come a long way, alongside the advancements in technology and software development, and this brings forth new challenges for all of us who test software. It’s not just about Selenium anymore. Mobile automation still proves to be challenging, and soon we’ll have to build repeatable test scenarios for the internet of things – homes, vehicles, stores, among others. Software testing can only get more interesting by the year.

Here are my picks for the best talks from the conference, if you’re curious: