Guard Up

Knowing how to automate things and building scheduled tests to monitor known application behavior do not guarantee bug-free apps.

They do help boost confidence in what we’ve built, and that makes us more comfortable releasing changes into the wild, because it means we’ve explored a fair amount of feature breadth. It’s not the only strategy I’ll use when given some software to test, but it’s something I’ll practice after every round of thoughtful exploration.

But, even with that tool to augment my testing, I have to keep my guard up. Every time, I should keep asking myself about the more valuable matter of the unknowns: What sort of things have I not considered yet? What scenarios might I still be missing? Have I really tested enough, or am I merely being overconfident just because I can automate?
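
To make the idea of scheduled checks concrete, here’s a minimal sketch in Python using pytest and requests. The URL and the shape of the response are hypothetical stand-ins; the point is that a check of known behavior, run on a timer by a CI job, can be this small.

```python
# Minimal sketch of a scheduled check of known application behavior.
# The base URL and the "status" field are hypothetical placeholders.
import requests

BASE_URL = "https://example-app.test"  # stand-in for the application under test


def test_status_endpoint_reports_ok():
    """Known behavior worth monitoring: the app is up and says so."""
    response = requests.get(f"{BASE_URL}/status", timeout=10)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"
```

A CI scheduler, say a nightly job, would run this alongside the rest of the suite and flag any drift from the behavior we already know.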


Choosing Variables

We consider a lot of things when we build and test software.

Who are our customers? Which browsers or platforms do we target for deploying our application? Does our software load fast enough for a considerable number of users? Are we vulnerable to SQL injection and cross-site scripting? What happens when two or more people use a specific feature at the same time? Is our API stable and structured well enough for its purpose? How easy is it to set up our apps from scratch? Do we handle rollbacks? What metrics should we monitor in production? Do we feel happy about our happy paths and other not-so-happy paths? What actual problem is our app trying to solve?

There’s a fair amount of room for making mistakes. Bugs can creep in where there are gaps. Some errors are likely to occur while we are building and testing things because there are just so many variables involved.

That’s how things are. There is not one moving part but several. It’s up to us to decide whether to be overwhelmed by the complexity or to get better at finding out which things to look out for, and then learn them.

The same is true in building and testing the life we choose to live.

Our Scrum Masters

Recently, I was asked by the Human Capital Management team at work for a list of specific requirements for our scrum master role. I obliged, and off the top of my head wrote the following:

A scrum master is someone who –

  • has great written and verbal communication skills
  • understands the software development process, has experience working with product managers and programmers; preferably with customers too
  • is well-versed in the practice of software testing, enjoys exploring systems, thinking from various perspectives, and putting on different sorts of hats
  • delights in shouldering a support role to the software development team
  • is a self-starter, regularly updates himself/herself on what’s happening in the software development and testing industry
  • takes pleasure in a bit of scripting/programming (Webdriver, Watir, Cypress), which is a plus

It’s not an exhaustive list, and I may have gotten some of the details wrong about what skills scrum masters are supposed to have based on the ideal definitions out there on the web, but that’s alright. These are just the things I initially thought would suffice, in the context of what my team and I do and experience most days. Our testers are scrum masters too, and I’m proud that so far we’ve been able to make things work on our end.

Scrum masters in other places probably need a different set of requirements, because those are what allow their systems and processes to be effective, and that’s just fine.

The Harajuku Moment

In a chapter of Timothy Ferriss’s book “The 4-Hour Body”, he talks about something significant called the Harajuku moment, described as:

… an epiphany that turns a nice-to-have into a must-have.

The expression came from a realization Chad Fowler (programmer, writer, co-organizer of RubyConf and RailsConf) had some time ago in Harajuku, while window shopping with friends and lamenting how unfashionable he was. He noticed the tone of helplessness in his own words as he talked about his obesity, and felt angry at himself for going with the flow and making excuses for many years.

After that defining moment, he turned things around and lost nearly 100 pounds.

Two years in a row, in my 2015 and 2016 annual physical exams, I was told that I have hypertension. In the years before that, I had also felt that I tired quickly, more so as the days went by. I’m just over 30 and skinny, and believed I should still be in my prime, but I was not. I wondered why things turned out the way they did, and eventually recognized that my existing habits did not help me become the healthy person I thought I was.

I’m now doing weight-lifting and body-weight exercises 4 times a week, and am in my best shape of the past 10 years or so. What’s interesting is that making the change was actually fun and somewhat easy, very unlike the grueling and exasperating experience I initially thought it would be. I plan to keep things up, gaining as much strength as I can while keeping my body fat percentage minimal.

What the Harajuku moment tells us is that, on most days, we have insufficient reason to take action. We only have nice-to-haves. We tell ourselves it would be nice to get fit, go on a date with that someone we really like, have well-refactored code, travel internationally, or learn a new skill. But nice-to-haves do not give us enough pain to move forward. That’s why we sometimes feel stuck in a rut.

Our nice-to-haves must first turn to must-haves before we can take advice and act.

A Design Problem

This weekend, while in the middle of a deep dive into the practice of discovery testing on a personal project, I realised that there are different ways of building software from the ground up. That’s fairly obvious, because there are various programming languages and tons of frameworks for quickly constructing apps and sites. It has never been easier to write software from scratch. But being simple to start doesn’t mean that what we create will be easy to maintain or extend, even if the team decides to write more tests or use more tools to monitor the sanity of the codebase. Even if user stories are clear and there’s enough capacity for testing, it can still be difficult to deliver changes without a hitch and on schedule when the code makes it hard to do so.

For many testers and product owners who have never looked at the code that runs the apps they test, it may be tough to see why simple feature requests take too long, and it can be mind-boggling why bugs continue to creep in. We tend to just accept things as they are. Finding out why is one reason I learned to read and write code myself, besides the desire to communicate better with the devs I work with and the ability to write my own tools to augment how I test. And after writing test code for a number of years now, I can say that translating ideas into new code can be easy, but revising existing code to accommodate new business rules is often troublesome. That means, at least for me, that I have a problem with the way I write code for the long haul, even if my app works at present. I can imagine other programmers having the same predicament.
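
To illustrate what I mean, here’s a small and entirely hypothetical Python sketch. The first version is easy to write the first time; every new business rule, though, means editing the same branching logic, while in the second version a new rule is just one more entry of data.

```python
# Hypothetical discount rules, written two ways.

# Version 1: quick to write, but each new customer type means
# revising this shared branching logic.
def discount_v1(customer_type: str) -> float:
    if customer_type == "regular":
        return 0.0
    elif customer_type == "member":
        return 0.05
    elif customer_type == "vip":
        return 0.10
    return 0.0


# Version 2: same behavior, but a new business rule is one new entry.
DISCOUNTS = {"regular": 0.0, "member": 0.05, "vip": 0.10}


def discount_v2(customer_type: str) -> float:
    return DISCOUNTS.get(customer_type, 0.0)
```

It’s a toy example, but the difference between the two is the kind of design decision that adds up over the life of a codebase.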

I now understand that writing long-term, maintainable software is essentially a design problem. And it seems that a lot of us need to up our skills in that area.

On 100% Coverage

Yes, we need to write tests because we think they will help us in the long term, even though they may be more work for us in the short run. If written with care and with the end in mind, tests serve as living documentation, living because they change as much as the application code changes, and they help us refer back to what a feature does and doesn’t do, in as much detail as we want. Tests let us know which areas of the application matter to us, and every time they run they remind us of where our bearings currently are.

Tests may be user journeys in the user interface, simulations of requests and responses through the app’s API, or small tests within the application’s discrete units, and most likely a combination of all these types of tests, perhaps more. What matters is that we find some value in whatever test we write, and that the value merits the cost of writing and maintaining it. What’s important is asking ourselves whether a test is actually significant enough to add to the suite.
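
For a rough picture of what those levels can look like, here’s a hypothetical sketch in Python with pytest and requests; the function, endpoint, and payload are all made up, but they show a small unit-level check next to an API-level check of the same rule.

```python
# Hypothetical tests of the same rule at two different levels.
import requests


def apply_discount(price: float, rate: float) -> float:
    """A made-up unit of the application under test."""
    return round(price * (1 - rate), 2)


def test_discount_is_applied_at_unit_level():
    # Small, fast check within a discrete unit of the application.
    assert apply_discount(100.0, 0.05) == 95.0


def test_order_total_reflects_discount_via_api():
    # Simulated request and response through the app's API; the URL is a placeholder.
    response = requests.post(
        "https://example-app.test/api/orders",
        json={"items": [{"price": 100.0}], "customer_type": "member"},
        timeout=10,
    )
    assert response.status_code == 201
    assert response.json()["total"] == 95.0
```

A user-journey test in the UI would sit above both, exercising the same rule through the browser.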

It is valuable to build a good-enough suite of tests. It makes sense to add more tests as we find more key scenarios to exercise. It also makes sense to remove tests that were necessary in the past but aren’t anymore. However, I don’t think it is particularly helpful to advocate for 100% test coverage, because that shifts the focus to a numbers game, similar to how measuring likes or stars isn’t really the point. I believe it is better when we discuss among ourselves, in the context we’re in, which tests are relevant and which are just diving into minutiae. If our test suite helps us deploy our apps with confidence, if our tests allow us to be effective in the performance of our testing, and if we are continuously able to serve our customers as best we can, then the numbers really don’t amount to much.

Doing Things Right, And Doing The Right Things

In testing software, automation is a tool that helps us re-run whatever repeatable checks we have on an application under test. We automate because we never have enough time to re-test everything by hand, and exploring the unknown parts of the apps we test is a far better use of our testing skills than following scripts. To automate is to do one thing right, within context, if it provides us the feedback we need. And the feedback we think we need from automation depends on which suites of tests are best repeated again and again, as well as which sorts of tests cost more than the value they give.
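
One lightweight way to decide which suites are worth repeating on every build and which are better run less often is to tag tests and run them by tag. The sketch below uses pytest markers; the marker names and the split between them are only an assumption about how a team might slice its suite.

```python
# Hypothetical split between checks repeated on every build and heavier ones.
# (The custom markers would be registered in pytest.ini or pyproject.toml.)
import pytest


@pytest.mark.smoke
def test_login_page_loads():
    # Cheap, high-value check worth repeating on every commit.
    ...


@pytest.mark.nightly
def test_full_report_generation():
    # Slower scenario whose feedback is still worth having, but less often.
    ...

# Run only the smoke checks on each commit:  pytest -m smoke
# Run the heavier checks on a schedule:      pytest -m nightly
```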

There are also tons of tools that help us build good-quality software. Even though we build more complex applications now than before, we also have frameworks, libraries, intelligent IDEs, and other tools to help us spin up apps on a whim, ready to be modified as we see fit. Choosing the proper tools for the job is doing another thing right. But before we write any code, we need to be sure about the actual problem we are solving for our customer.

Yes, we need to do things right, from the get-go if possible. Doing so helps us progress from one point to another faster than we otherwise would. However, I think it’s more important that we continuously take the necessary time to review whether we are doing the right things too; it’s more important to actually get feedback and solve problems than to merely add tests and features.