Five People and Their Thoughts (Part 4)

Many of the things I’ve learned about software testing in recent years I discovered through reading the books and blogs of actual software testers who are masters of their craft. Some of them record videos too, of tutorials and webinars and anything they find valuable for their readers and viewers, and I try to share the ones I’ve found engaging, five at a time.

Here are some interesting thoughts and notes to go over today, if you’d like:

Takeaways from Alan Richardson’s “Technical Web Testing 101” online course

Alan Richardson has a “Technical Web Testing 101” online course over at his CompendiumDev site. I think it’s been there for years, but I only found out about it recently. I took it and I’m really glad I did. Some of the content I was already familiar with, but I still learned a lot from the course. I recommend it to software testers who frequently test web applications, including APIs, since they’re the ones who will directly benefit from enrolling. It’s not free, but it’s not expensive either, and its value is well above the price.

Some takeaways:

  • Alan explains that technical testing is not the tools, or automation, or the techniques. Those things are only side effects of exploring and of learning from what we have explored. The goal of technical testing, from what I gather, is to have a deep understanding of the application under test, by learning the technology stack the app uses and studying those technologies so that we can find more effective ways of testing, augmenting our usual user interface tests.
  • There are ways to create presentation slides and PDF files simply from text files formatted using a markup language called Markdown. This is interesting, as it enables us to maintain a single source of information for various file formats.
  • We tend to forget about viewing page sources when testing web applications because we prefer the more powerful browser developer tools, but sometimes page sources are really useful too.
  • Burp Suite, Fiddler, and Zed Attack Proxy (ZAP) are great tools for learning in more detail what actually happens in the background when we use web apps: what HTTP requests the software passes to the server and how the server responds to those requests. They are especially useful for bypassing validations in the user interface and for replaying requests easily without going back through the application (a small sketch of replaying a request in code follows this list).
  • It pays to learn a bit of JavaScript for checking whether JavaScript functions actually do what they need to do, and many of the web applications we test these days contain some amount of JavaScript code. It can be fun too – we can build page-specific bots, save them as snippets, and call them anytime within the browser.
  • Wireshark can be used to view network traffic, even in public places.
  • As long as they’re on the same network, a desktop or laptop can view and interact with an Android or iOS phone’s web traffic through a proxy. Network traffic passing through a phone can even be recorded in Burp Suite, Fiddler, or ZAP.
  • Alan has reminded me that there are still a lot of books about software testing I haven’t read yet, and I should make it a point to go over them in the coming months. He has also recommended spending some time with books about psychotherapy.
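
Related to the point about proxies above, here is a rough Ruby sketch of replaying a request outside the browser. The endpoint and payload are made up for illustration; the point is that once we know what request the app sends, we can send it ourselves without going back through the user interface:

require 'net/http'
require 'json'
require 'uri'

# Hypothetical endpoint and payload, for illustration only.
uri = URI('https://example.com/api/reservations')
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.body = { room_id: 42, nights: 3 }.to_json

# Send the request and print whatever the server returns.
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

puts response.code
puts response.body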

Needless to say, the biggest takeaway from the course is understanding how Alan approaches software testing in general, which is something I advocate. That’s how I actually do my testing, although I don’t think I’m up to Alan’s standard of being technical yet. The process is difficult because it forces us to think so much more about the applications we test and the methods we use for testing, including getting better at analyzing risks and making decisions, but it is rewarding in itself because we come to understand a lot more about how things work, which in turn helps us appreciate the processes behind software development, and the people behind it too.

Practicing Awareness

It’s easy to learn something when you’re genuinely curious about it; there’s no need to look for any other external motivation. On the other hand, it’s terribly difficult (and often a waste of time) to teach something to someone who is not interested in the subject at that point in time.

A software tester and a programmer pairing on a bug fix or a new feature is supposed to be a fun experience for both parties. Either of them getting frustrated at what they’re doing means something is blocking them from performing well as a duo. Maybe they don’t know enough about the needs of the other person or how to fulfill those needs; maybe they don’t understand enough about the thing they’re building or how to build it in the first place.

If the people using the software have qualms about a feature update, maybe there was a miscommunication between clients and developers about what the customers really wanted built. Solving that communication problem matters, maybe more than writing feature code quickly, and definitely more than pointing fingers and blaming other people after the update has been released.

Testing with a deep understanding of why the application exists the way it does, and of how features actually work in both the user interface and the code, helps us test more effectively, especially under time constraints.

Pair testing happens more often than I initially thought, only not as explicitly as I imagined it happening.

For whatever you wish to be a master of, it pays to have mentors: people you follow, respect, and trust, and who give you feedback. If there aren’t any in your place of work, find them online.

We Refactor When We Learn Better Ways Of Doing

For the past month or so, inspired by the lessons I learned from Jeff Nyman’s test description language blog post series, Gerard Meszaros’ talk about test abstractions, and Codecademy’s online class about the Ruby programming language, I have been steadily refactoring good chunks of our existing cucumber-watir-ruby end-to-end functional checks, which included (but was not limited to) the following:

  • Cucumber
    • Removing information from cucumber steps that does not directly affect what is being checked in tests
    • Explicitly describing what the cucumber check is for, instead of using vague Then statements
    • Using step commands for re-using repeated cucumber steps
    • Renaming variables, methods, and classes used for assertions so that their names say what they actually mean
  • Ruby (a short sketch of these idioms appears after this list)
    • Applying double pipes (||=) to set default variable values when necessary
    • Using one-line ifs and unlesses whenever possible, for readability purposes
    • Converting simple multi-line if-else statements to one line whenever helpful using the ternary/conditional operator
    • Replacing + or << operators with string interpolation
    • Using symbols for hash keys instead of string names because they’re faster to process
  • Locators
    • Making XPath locators shorter and more readable whenever possible by using relative (//) paths
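
To make the Ruby items concrete, here is a short, self-contained sketch of the idioms mentioned above; the variable names, values, and the XPath are made up for illustration:

# Hypothetical values, for illustration only.
timeout = nil
timeout ||= 30 # double pipes: assign a default only when the variable is nil or false

booking_id = 'ABC123'
guest_name = 'Jane'
refundable = false

puts 'No booking found' if booking_id.nil? # one-line if/unless for simple guards

label = refundable ? 'Refundable' : 'Non-refundable' # ternary instead of a small if-else block

message = "Reservation #{booking_id} confirmed for #{guest_name}" # string interpolation instead of + or <<

rate_plan = { code: 'Public_Partial_LT', prepayment: :partial } # symbols as hash keys

# A relative XPath using // is usually shorter and easier to read than an absolute path.
policy_xpath = "//div[@class='prepayment-policy']"

puts [timeout, label, message, rate_plan, policy_xpath].inspect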

It’s all difficult work, possibly more strenuous than writing new checks, even though I’m basically just rewriting existing tests into a more readable and maintainable form. I’m breaking existing code apart because it’s not so easy to understand, revising it so that its intent is clearer in plain view, and finally making sure it still checks whatever it’s supposed to check, so that it acts as a better source of truth for how our applications behave and deliver value.

Notes from Jeff Nyman’s “Test Description Language” blog post series

What is a test description language? How do we use that language to write tests effectively? In a series of blog posts, Jeff Nyman talks about what a TDL is and how to write better tests with it. The posts are mostly written in the context of how Cucumber checks are used for automation, but the lessons can also be applied to writing test cases if there is ever a need to write them for clients.

Some notes from the series of posts:

  • It’s very easy to confuse the essence of what you’re doing with the tools that you use.
  • Just because I’m using TestLink, Quality Center, QTP, Cucumber, Selenium, and so forth: that’s not testing. Those are tools that are letting me express, manage, and execute some of my tests. The “testing” part is the thinking I did ahead of time in discerning what tests to think about, what tests to write about, and so forth.
  • Testable means that it specifically calls out what needs to be tested.
  • When examples are the prime drivers, you are using the technique of specification by example and thus example-driven testing. When those examples cover business workflows, you are using the technique of scenario-based testing.
  • Requirements, tests, examples — they all talk about the same thing. They talk about how a system will behave once it’s in the hands of users.
  • The goal of a TDL (test description language) is to put a little structure and a little rigor around effective test writing, wherein “effective test writing” means tests that communicate intent (which corresponds to your scenario title) and describe behavior (the steps of your scenario).
  • It is now, and always has been, imperative that we can express what we test in English (or whatever your common language is). This is a key skill of testers and this is a necessity regardless of the test tool solution that you use to manage and/or automate your testing.
  • So many tests, and thus so much development, falters due to a failure to do this simple thing: make sure the intent is clear and expressive for a very defined unit of functionality.
  • Software development teams are not used to the following ideas: The idea of testing as a design activity. The idea of test specifications that are drivers to development. The idea of effective unit tests that catch bugs at one of the most responsible times: when the code is being written.

Thoughts After a Recent Software Feature Release

Having a feature released to the customers does not mean I won’t have to test that feature anymore.

When a feature is released months later than initially estimated, it could mean that the project was simply complex. Or it could mean that we did not give it enough attention to break it up and release it in much smaller chunks.

It’s terribly challenging to find confidence in releasing a massive new piece of functionality that took months to finish, especially when the feedback you have about its quality is limited. I am reminded that part of testing is solving the problem of having good enough test coverage of product risks. What are those risks? What tests do I want to perform in order to cover those risks?

Just as meetings that take hours get extremely boring, it gets annoyingly tiring for teams to spend more than a month building a feature before publishing it to the public.

If we don’t spend some of our time now tinkering with systems that could help us accomplish things better in the future, we’ll end up doing the same things as before, and that shows in our ability to grow.

Just because a defect happens to quietly slip its way into production code does not mean the team did not work their asses off bringing the best work they could possibly do to the project, under the circumstances. And just because a team worked hard does not guarantee they will write code free of any bug. What matters is what people do when those flaws are found. And there will always be flaws.

Some people, even testers, are unnecessarily loud when they find problems in production after a feature release. This just means that they had no idea how that part of the feature was built in the first place and did not care to explore it before the release either. They simply assumed that everything ought to be working. They forgot that quality is a team responsibility.

It’s important that we understand why a problem exists in production code (and maybe who introduced it), but I’m not sure it’s particularly helpful to leave the fixing for later, to the person who made the defect, when we have the time and the know-how to fix it ourselves now. What’s the boy scout rule again for programmers? Yes, leave the code better than you found it.

Just because a shiny new feature is released to the public does not mean it has actual value to customers. And if we don’t find ways of getting user feedback, we’ll never find out whether what we built amounts to something or not.

Feature releases are always short-lived, like a physical store’s opening-day launch. There’s always a celebration, an attraction, but it’s not what really matters in the long run. What matters are the lessons learned while building the feature, what we intend to do with those lessons, the people we worked with, and building things that our customers love.

Learning How To Write Better Cucumber Scenarios

Some of the important lessons in writing automated checks are found, not in the actual implementation of the check itself, but rather in the specification. What does a green check mean? What are we really trying to find out when we’re running this check? Will somebody from another team understand why this test was written? Does this check say what’s necessary in a feature or does the check only state a procedure without context? Too often we concentrate on syntax, frameworks, and required steps in building automation, but not so much on clearly expressing what’s being checked and why the check is recorded in the first place. I’ve made that mistake and now I am trying to learn how to write better checks.

Take, for example, the cucumber checks below. Would you say that it is clear what’s being verified in the test? What would a product owner probably say about these checks when they see them for the first time?

Scenario Outline: Validating the Rate Plan Policies
  Given I am in the ShowRoomsPage for a "<hotel_type>" property from "<arrival>" to "<departure>" for a "<rateplan>" rate plan for a "confirmed" reservation
  When I view the policies for the rate plan
  Then I know that these are the correct policies
  Examples:
    | hotel_type | arrival         | departure       | rateplan            |
    | DWH        | 4 DAYS FROM NOW | 6 DAYS FROM NOW | Public_Partial_LT   |
    | DWH        | TOMORROW        | 3 DAYS FROM NOW | Public_Full_YesR_LT |

Some of the questions that would probably pop up in their minds include the following: How are the examples in the grid necessary for the test? What does Public_Partial_LT or Public_Full_YesR_LT mean? Do we really need to know the arrival and departure date settings for the checks? What does the Given statement mean? Most importantly, how do I know that the policies are actually correct for these tests?

This was how I wrote my checks when I first started studying how to write Cucumber-Watir-Ruby checks, and a lot of the checks in the existing test suite are still written this way. I am, however, learning to rewrite them in a better way, in terms of readability and conciseness, guided by the lessons learned so far from Jeff Nyman’s test description language blog posts, so that almost everybody on our team can recognize at a glance what a particular check does and why it is included in the feature.

Re-writing the example checks above, I now have these:

Scenario: ShowRooms Prepayment Policy, DWH Partial
  When guest views the policies for a DWH property for a partial rate plan
  Then a copy of "Only 10% prepayment is required to confirm your reservation" is displayed in the prepayment policy

Scenario: ShowRooms Prepayment Policy, DWH Full Non-Refundable
  When guest views the policies for a DWH property for a full nonrefundable rate plan
  Then a copy of "Full prepayment is required to confirm your reservation" is displayed in the prepayment policy

Not perfect, but I’d like to think that they are more effective than the ones I had before. The checks are more specific, unrelated information is not included in the test, and we understand what it means when the checks pass or fail.
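
Under the hood, a step like the Then above could be backed by something along these lines. This is only a sketch; the element locator, page structure, and @browser setup here are assumptions for illustration, not our actual code:

# A possible Watir-backed step definition; the div class and @browser setup are hypothetical.
Then(/^a copy of "([^"]*)" is displayed in the prepayment policy$/) do |expected_copy|
  policy_text = @browser.div(class: 'prepayment-policy').text
  expect(policy_text).to include(expected_copy)
end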

What do you think?