Help Yourself Perform Tasks With Personal Shortcut Commands

In some recent automation work I’ve been asked to take part in, running a single test safely (according to peers) generally follows a three-step process:

  1. Delete existing feature files:  rm -rf *.feature
  2. Create updated feature files:  PROJECT_DIRECTORY=../<dir> APP=<app> TEST=automated ruby create_tests.rb, where <app> is the name of the app you want to test
  3. Run the desired specific test:  cucumber --tags @<tag>

This means I have to re-run these three steps over and over for every single test I want to review. That gets tiring quickly, especially if I have to re-type the commands repeatedly, or even if I copy-paste them. Using the arrow keys in the terminal helps, but sometimes commands get lost in the history and it becomes a hassle to find them.

There should be a way for me to run an automated check I want to review, given just the app name and the test tag. It would be nice if I could run a specific test using a shorter command while still following the same three-step process.

I decided to set aside a little bit of time to write a personal script, and used a Makefile because I’ve had experience with that before.

My personal short command for running a test:  make test app=<app> tag=<tag>

And here’s what the script looks like: it runs the known commands step by step and displays some information about what’s currently happening.
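A minimal sketch of such a Makefile (not the exact script) could look like the following; it assumes the project directory shares the app’s name, so swap the PROJECT_DIRECTORY value for the real <dir> if it differs, and remember that recipe lines must be indented with tabs:

```makefile
# Usage: make test app=<app> tag=<tag>
test:
	@echo "Step 1: Deleting existing feature files..."
	rm -rf *.feature
	@echo "Step 2: Creating updated feature files for $(app)..."
	PROJECT_DIRECTORY=../$(app) APP=$(app) TEST=automated ruby create_tests.rb
	@echo "Step 3: Running tests tagged @$(tag)..."
	cucumber --tags @$(tag)
```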

Now I won’t have to worry about remembering what commands to type or their sequence. I can focus more on actually reviewing the test itself, which is what matters.


Trying Out Cypress with Circle CI

Circle CI has been around for a while but I’ve never tried their service before. There hasn’t been much of a reason to; at work I mostly stuck with a local Jenkins instance because it got the job done. But I recently had the urge to try it out for a private project of mine. I wanted to see if it plays well with Cypress, a test tool I’ve been using for some time.

A search tells me that there’s already a Cypress-Docker-CircleCI example on GitHub, from Cypress themselves. Cool, I just need to copy their settings into my own project and make changes accordingly.

Here’s what the package.json file looks like:

You’ll see a cy:circle-junit command in there, which does the same thing as cy:run (which runs tests in a terminal) but also builds a JUnit report.
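The exact file comes from the Cypress example project, but the relevant scripts section is roughly along these lines (a sketch; the JUnit reporter options below are my assumption and may differ from the real example):

```json
{
  "scripts": {
    "cy:run": "cypress run",
    "cy:circle-junit": "cypress run --reporter junit --reporter-options mochaFile=results/test-output.xml"
  }
}
```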

To run our tests on Circle CI, we’re going to need a circle.yml file inside our project, which contains the following:

From this file, Circle CI is supposed to check out the test code and run the cy:circle-junit command, which installs Cypress and other dependencies inside a Docker image, saves those dependencies in a cache (and restores that cache for every succeeding test run as long as the dependencies stay the same), runs our tests inside a container, creates a simple report, and stores test artifacts afterwards. Looks straightforward. 🙂
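As a rough sketch of what that means in config terms (using Circle CI’s 2.0 syntax; the actual file in the Cypress example may differ in details such as the image tag and cache paths):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: cypress/base:8        # Node image with the dependencies Cypress needs
    steps:
      - checkout
      # Restore cached dependencies so repeat runs skip the install
      # as long as package.json hasn't changed.
      - restore_cache:
          key: deps-{{ checksum "package.json" }}
      - run: npm install
      - save_cache:
          key: deps-{{ checksum "package.json" }}
          paths:
            - ~/.npm
            - ~/.cache               # where the Cypress binary is cached
      # Run the tests and build the JUnit report.
      - run: npm run cy:circle-junit
      # Publish the JUnit results and any artifacts Cypress saved.
      - store_test_results:
          path: results
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots
```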

Now let’s see if things work!

Connecting my Cypress project to Circle CI and pushing code changes to the remote repository automatically creates test runs that show up on a neat dashboard like so:

And we can zoom in on the details of every test run, which looks like this:

Awesome! Looks like I’ll continue using their service for this project, and maybe for other projects as well. I’m off to add proper tests to the suite!


Notes from Jonathan Bach’s “Session-Based Test Management”

Tracking exploratory testing work is difficult for test managers. We don’t want to micro-manage testers; we want them to explore to their hearts’ content when they test, but we won’t know how much progress there is if we don’t track the work. We also won’t know what sort of problems testers encounter during testing, unless they have the nerve to tell us immediately. Jonathan Bach’s “Session-Based Test Management” article has one suggestion: use sessions, uninterrupted blocks of reviewable and chartered test effort.

Here are a few favorite notes from the article:

  • Unlike traditional scripted testing, exploratory testing is an ad hoc process. Everything we do is optimized to find bugs fast, so we continually adjust our plans to re-focus on the most promising risk areas; we follow hunches; we minimize the time spent on documentation. That leaves us with some problems. For one thing, keeping track of each tester’s progress can be like herding snakes into a burlap bag.
  • The first thing we realized in our effort to reinvent exploratory test management was that testers do a lot of things during the day that aren’t testing. If we wanted to track testing, we needed a way to distinguish testing from everything else. Thus, “sessions” were born. In our practice of exploratory testing, a session, not a test case or bug report, is the basic testing work unit. What we call a session is an uninterrupted block of reviewable, chartered test effort. By “chartered,” we mean that each session is associated with a mission—what we are testing or what problems we are looking for. By “uninterrupted,” we mean no significant interruptions, no email, meetings, chatting or telephone calls. By “reviewable,” we mean a report, called a session sheet, is produced that can be examined by a third-party, such as the test manager, that provides information about what happened.
  • From a distance, exploratory testing can look like one big amorphous task. But it’s actually an aggregate of sub-tasks that appear and disappear like bubbles in a Jacuzzi. We’d like to know what tasks happen during a test session, but we don’t want the reporting to be too much of a burden. Collecting data about testing takes energy away from doing testing.
  • We separate test sessions into three kinds of tasks: test design and execution, bug investigation and reporting, and session setup. We call these the “TBS” metrics. We then ask the testers to estimate the relative proportion of time they spent on each kind of task. Test design and execution means scanning the product and looking for problems. Bug investigation and reporting is what happens once the tester stumbles into behavior that looks like it might be a problem. Session setup is anything else testers do that makes the first two tasks possible, including tasks such as configuring equipment, locating materials, reading manuals, or writing a session report.
  • We also ask testers to report the portion of their time they spend “on charter” versus “on opportunity”. Opportunity testing is any testing that doesn’t fit the charter of the session. Since we’re doing exploratory testing, we remind and encourage testers that it’s okay to divert from their charter if they stumble into an off-charter problem that looks important.
  • Although these metrics can provide better visibility and insight about what we’re doing in our test process, it’s important to realize that the session-based testing process and associated metrics could easily be distorted by a confused or biased test manager. A silver-tongued tester could bias the sheets and manipulate the debriefing in such a way as to fool the test manager about the work being done. Even if everyone is completely sober and honest, the numbers may be distorted by confusion over the reporting protocol, or the fact that some testers may be far more productive than other testers. Effective use of the session sheets and metrics requires continual awareness about the potential for these problems.
  • One colleague of mine, upon hearing me talk about this approach, expressed the concern that senior testers would balk at all the paperwork associated with the session sheets. All that structure, she felt, would just get in the way of what senior testers already know how to do. Although my first instinct was to argue with her, on second thought, she was giving me an important reality check. This approach does impose a structure that is not strictly necessary in order to achieve the mission of good testing. Segmenting complex and interwoven test tasks into distinct little sessions is not always easy or natural. Session-based test management is simply one way to bring more accountability to exploratory testing, for those situations where accountability is especially important.

Web Accessibility Visualization Tool: tota11y

Testing web sites and apps comes in many forms. Testers try their best to test everything, but obviously there’s only so much they can do within a schedule. Some forms of testing are prioritized over others, and that’s not inherently bad; a solo tester on a team usually tests in a way that covers more bases at the beginning.

Web accessibility testing is one of those forms that often takes a backseat, sometimes even forgotten. Web accessibility helps people with disabilities get a better browsing experience. Although websites are typically not built with accessibility in mind, it matters.

And tota11y is a tool from Khan Academy that we can leverage for testing accessibility. It is available as an easy-to-use bookmarklet. For whatever page we want to test, we just need to go there and click the bookmarklet, after which the tool appears in the bottom left corner of the page. Clicking the tool reveals several options, and using each one helps us spot common accessibility violations.

Here are some screenshots of using it on a page I test at work, checking headings, contrast, and link text:

Spotted: Nonconsecutive heading level use

Multiple insufficient contrast ratio violations

And unclear link texts

Looks like there’s room for improvement, although these violations are not necessarily errors.

Automating the Windows 10 Desktop Calculator using PyAutoGui

Last year, I tried automating the Windows 7 desktop calculator using a Java library called Winium. A year later, someone in the comments told me that the example didn’t work on Windows 10. I ignored the comment for a while because I didn’t have a Windows 10 machine to test on back then, but now that I’ve upgraded my home PC I decided to try it out.

What I found:

Running a Windows 7 desktop calculator automation example (using Winium) on a Windows 10 machine

The Windows 10 calculator opened up but the tests didn’t run properly. The error log told me that the program was unable to find the calculator elements it was supposed to click. Bummer. Maybe the names of the calculator elements were different on Windows 10 versus Windows 7? They weren’t; the element names were still the same according to UI Spy. Perhaps there was something in Winium that could point me to a clue? Oh, it seems the library hasn’t been updated in recent years.

If I can’t use Winium, how then can I automate the Windows 10 calculator? A Google search pointed me to PyAutoGui. Instead of Java, I’ll need Python (and pip) for this tool to work. And yes, this being Windows, I also need to set the environment variables properly so I can use the python and pip commands in a terminal.

Let’s install PyAutoGui:

Installing PyAutoGui
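That boils down to a single pip command in a terminal (assuming Python and pip are already on the PATH, as set up earlier):

```sh
pip install pyautogui
```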

And after writing some code, let’s see if we can automate the Windows 10 calculator with it:

Automating the Windows 10 desktop calculator with PyAutoGui. Click the image to view the GIF full-size in another browser tab 🙂

It works!

But here are some catches:

  • I had to rely on PyAutoGui’s keyboard control functions for performing the calculator actions (see the sketch after this list), instead of finding elements via the user interface. From what I’ve seen in the docs, the only way to locate a UI element is by matching screenshots of it. I tried that at first and it was very flaky, so I opted for the keyboard control functions instead.
  • The code introduces a wait for the calculator to appear on screen, plus a pause of a fraction of a second between each keyboard action so the actions don’t happen too fast for a person’s eye.
  • There are no assertions in the example code, because I couldn’t find any assertion functions in the PyAutoGui docs. It is not a tool built for testing, only for automating desktop apps.
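The gist of the code, then, is launching the calculator and driving it with key presses. Here’s a stripped-down sketch, not the exact code in the repository; the calculation and timings are made up for illustration:

```python
import subprocess
import time

import pyautogui

# Launch the Windows 10 calculator and give it a moment to appear on screen.
subprocess.Popen("calc.exe")
time.sleep(2)

# Pause briefly after every PyAutoGui call so the actions are visible
# to a person watching instead of flashing by instantly.
pyautogui.PAUSE = 0.5

# Type a simple calculation using keyboard control: 365 * 24, then Enter.
pyautogui.typewrite("365*24")
pyautogui.press("enter")

# PyAutoGui has no assertion helpers, so we simply close the calculator
# afterwards (Alt+F4) and check the result visually.
time.sleep(2)
pyautogui.hotkey("alt", "f4")
```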

Source code for this experiment can be found in the Win-Calculator-PyAutoGui repository.

Five People and Their Thoughts (Part 8)

Sharing a new batch of engaging videos from people I follow, which I hope you’ll come to like as much as I do:

  • Workarounds (by Alan Richardson, about workarounds, how you can use them in testing, using tools in custom ways, moving boilerplate to the appendix, bypassing processes that get in the way, understanding risks and value, technical skills, and taking control of your career)
  • 100% Coverage is Too High for Apps! (by Kent Dodds, on code coverage, what it tells you, as well as what it does not tell you)
  • Open Water Swimming (by Timothy Ferriss, about fears, habits, total immersion swimming, Terry Laughlin, compressing months of conventional training into just a few days, and the power of micro-successes)
  • How To Stop Hating Your Tests (by Justin Searls, representing Test Double, on doing only three things for tests, avoiding conditionals, consistency, apparent test purpose, redundant test coverage, optimizing feedback loops, false negatives, and building better workflows)
  • Meaning of Life (by Derek Sivers, about a classic unsolvable problem, using time wisely, making good choices, making memories, the growth mindset, inherent meaning, and a blank slate)

Set Cucumber to Retry Tests after a Failure

This is probably old news, but I only recently found out that there is a way for Cucumber tests running in Ruby to automatically retry when they fail.

Here’s a sample failing test:
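Something along these lines will do; the feature, tag, and step below are made up for illustration, since any step that raises an error works:

```gherkin
# features/sample.feature
@failing_test
Feature: Retry demonstration

  Scenario: A scenario that always fails
    Given a step that raises an error
```

```ruby
# features/step_definitions/sample_steps.rb
Given(/^a step that raises an error$/) do
  raise "intentional failure to demonstrate --retry"
end
```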

Apparently we can just add a --retry flag to set the test to retry if it fails, and we can set how many times we want the test to retry. If we set the failing test above to retry up to two times, with --retry 2, we’ll get the following output:

It did retry the test twice, after the initial failure. Neat! 🙂
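For reference, the full command with the hypothetical tag from the sketch above would look something like this:

```sh
cucumber --tags @failing_test --retry 2
```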

But what happens if we run the test, with the same retry flag, and the test passes? Does the test run three times? Let’s see:

Good, the retry flag gets ignored when the test passes.