Set Cucumber to Retry Tests after a Failure

This is probably old news, but I only recently found out that there is a way for Cucumber tests running in Ruby to automatically retry when they fail.

Here’s a sample failing test:
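A minimal reproduction might look like this (a hypothetical feature whose only step always fails):

    # features/flaky.feature
    Feature: Retry demo
      Scenario: A scenario that fails
        Given a step that fails

    # features/step_definitions/steps.rb
    Given("a step that fails") do
      raise "deliberate failure"
    end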

Apparently we can just add a --retry option to set the test to retry if it fails. And we can set how many times we want the test to retry.
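With the hypothetical feature file above, telling the failing test to retry two times looks like this:

    bundle exec cucumber features/flaky.feature --retry 2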

It did retry the test twice, after the initial failure. Neat! 🙂

But what happens if we run the test with the same retry option and the test passes? Does the test run three times? Let’s see:

Good, the retry option gets ignored when the test passes.


Cypress and Mochawesome

A week ago I was working on a quick automation project which called for an HTML report of the test results to go along with the scripts. It was a straightforward request, but it was something I don’t usually generate. Personally, I find test results displayed in a terminal to be enough, but for this task I needed a report generator. I had already decided to use Cypress for the job, so I needed something that plays well with it. In the Cypress docs I found a custom reporter called Mochawesome.

To install it, I updated the package.json file inside the project directory to include the reporter:
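Roughly like this (the version numbers are illustrative, and the test script is the one I refer to later in this post):

    {
      "scripts": {
        "test": "cypress run --reporter mochawesome"
      },
      "devDependencies": {
        "cypress": "^3.1.0",
        "mochawesome": "^3.1.0"
      }
    }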

The Cypress documentation on Reporters also said that for Mochawesome to properly work I should also install mocha as a dev dependency, so that’s what I did.

And then ran npm install in the terminal to actually install the reporter.

Before running tests, there’s a tiny change we need to make to the cypress.json file inside the project directory, which tells Cypress which reporter we want to use for generating test reports.
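It’s just a single key (a minimal cypress.json, assuming no other settings):

    {
      "reporter": "mochawesome"
    }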

And we’re all set after all that. 🙂

Run Cypress tests by running cypress run --reporter mochawesome. Or, if you specified a test script in the package.json file the same way I did above, just run npm test.

After running tests, we’ll find that a mochawesome-report directory has been added to our project directory, which houses both HTML and JSON reports of the tests.

A sample HTML test report looks something like this:

Looks nice and simple and ready for archiving.

Welcome Back, Selenium IDE!

I was browsing my Feedly news feed recently, as I routinely do once a week, and saw this particularly cheery bit of news:

The official announcement from Selenium HQ

Long live Selenium IDE! 🙂

Nostalgia hits. I remember the days when the only automation I knew was Selenium IDE’s record-and-playback. It was easy to install, relatively fast, and not difficult to pick up, as the Selenese commands were straightforward. It was possible to run a test individually as well as to run tests as a suite. There was a way to manually change test steps and, of course, save tests as needed. It only worked on Firefox at the time, but that was good enough for me back then. At least until I eventually craved the re-usability, maintenance, and customization advantages that programming has over record-and-playback.

This is what it looks like now:

And some delightful things about it (that I don’t remember – or perhaps forgot – having seen in the Legacy IDE):

  • Command dropdown options in English
  • Control flow commands baked in
  • An execute async script command
  • A run command to run tests headlessly! 😮
  • Google Chrome extension and Firefox add-on versions out of the box

Although I don’t see myself going back to record-and-playback tools for my automation needs, I’m still glad Selenium IDE is back for everyone to try. It’s still a great tool for recording quick automated browser scripts, for demo purposes or otherwise, as well as for learning automation in general.

It’s Alright to Make Mistakes

Looking at the screenshot above, it’s ironic that there’s a mismatch between the commit message and the actual pipeline result from the tests after pushing said commit. 😛 What happened was that after making the unit tests pass on my local machine, I fetched the latest changes from the remote repository, made a commit of my changes, and then pushed the commit. Standard procedure, right? After receiving the failure email I realized that I had forgotten to re-run the tests locally to double-check, which I should have done since there were changes from remote. Those pulled changes broke some of the tests. Honest mistake, lesson learned.
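In hindsight, the sequence that would have caught it is short (the test command here is a placeholder for whatever the project actually uses):

    git pull                # bring in the latest remote changes
    npm test                # the forgotten step: re-run the tests locally
    git add -A
    git commit -m "..."     # commit message elided
    git push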

I frequently make errors like this on all sorts of things, not just code, and I welcome them. They bewilder me in a good way, and remind me of how fallible I can be. They show me that it’s alright to make mistakes, and tell me that it’s all part of the journey to becoming better.

And yes, testers can also test on the unit level! 🙂

 

Guard Up

Knowing how to automate things and building scheduled tests to monitor known application behavior do not guarantee bug-free apps.

They do help boost confidence in what we’ve built, and that makes us more comfortable releasing changes into the wild, because it means we’ve explored a fair amount of feature breadth. It’s not the only strategy I’d use given some software to test, but it’s something I’ll practice after every round of thoughtful exploration.

But, even with that tool to augment my testing, I have to keep my guard up. Every time, I should keep asking myself about the more valuable matter of the unknowns. What sort of things have I not considered yet? What scenarios might I still be missing? Have I really tested enough or am I merely being overconfident, just because I can automate?

Docker: A Dockerfile to Fix the “Call to undefined function pg_connect()” Error when Using PHP with Postgres

When integrating a PHP application with a PostgreSQL database, both services running as Docker containers using the official PHP and Postgres images, you might encounter (as I have) an error that looks something like this:

    Uncaught Error: Call to undefined function pg_connect() in ...

It’s actually a simple error: it means PHP can’t find the pg_connect() function it needs to talk to the PostgreSQL database. But when I first stumbled on it I had a hard time finding out what I needed to do to fix it. There was definitely something missing from the Docker setup, but I did not know what it was until I sought help from a teammate.

Apparently the official PHP Docker image does not ship with the PDO and PGSQL drivers necessary for the connection to work. Silly me for assuming it does.
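For context, this is the kind of call that triggers it (the connection details here are hypothetical):

    <?php
    // Without the pgsql extension in the PHP image, this line throws
    // "Call to undefined function pg_connect()".
    $conn = pg_connect("host=postgres port=5432 dbname=app user=app password=secret");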

The fix is simple. We have to create a Dockerfile that updates our PHP image with the required drivers. It contains the following code:

FROM php:7.1-fpm

# Install the PDO and PGSQL drivers
RUN apt-get update \
  && apt-get install -y libpq-dev \
  && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
  && docker-php-ext-install pdo pdo_pgsql pgsql

Easy peasy. And to run this Dockerfile from a docker-compose.yml file, we’ll need the build key with its context and dockerfile options, replacing the single image line that does not use a Dockerfile:

version: '3'
services:
  php:
    build:
      context: ./directory/of/Dockerfile
      dockerfile: ./Dockerfile
    # All your other settings follow ...
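
To pick up the new image, rebuild and restart the service (named php, as in the snippet above):

    docker-compose build php
    docker-compose up -d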

And that should be all that you need to do! 🙂

Notes from David Bryant Copeland’s “Build Awesome Command-Line Applications in Ruby 2”

The experience of writing and running automated checks, as well as building some personal apps that run on the terminal, has over recent years given me a keen appreciation of how effective command-line applications can be as tools. I’ve grown fond of quick programming experiments (scraping data, playing with Excel files, among others), which are relatively easy to write, powerful, dependable, and maintainable. Tons of libraries online help them interface well with the myriad of programs on our desktop or out on the web.

Choosing to read “Build Awesome Command-Line Applications in Ruby 2” is choosing to go on an adventure in writing better CLI apps: finding out how options are designed and built, understanding how apps are configured and distributed, and learning how to actually test them.

Some notes from the book:

  • Graphical user interfaces (GUIs) are great for a lot of things; they are typically much kinder to newcomers than the stark glow of a cold, blinking cursor. This comes at a price: you can get only so proficient at a GUI before you have to learn its esoteric keyboard shortcuts. Even then, you will hit the limits of productivity and efficiency. GUIs are notoriously hard to script and automate, and when you can, your script tends not to be very portable.
  • An awesome command-line app has the following characteristics:
    • Easy to use. The command-line can be an unforgiving place to be, so the easier an app is to use, the better.
    • Helpful. Being easy to use isn’t enough; the user will need clear direction on how to use an app and how to fix things they might’ve done wrong.
    • Plays well with others. The more an app can interoperate with other apps and systems, the more useful it will be, and the fewer special customizations that will be needed.
    • Has sensible defaults but is configurable. Users appreciate apps that have a clear goal and opinion on how to do something. Apps that try to be all things to all people are confusing and difficult to master. Awesome apps, however, allow advanced users to tinker under the hood and use the app in ways not imagined by the author. Striking this balance is important.
    • Installs painlessly. Apps that can be installed with one command, on any environment, are more likely to be used.
    • Fails gracefully. Users will misuse apps, trying to make them do things they weren’t designed to do, in environments where they were never designed to run. Awesome apps take this in stride and give useful error messages without being destructive. This is because they’re developed with a comprehensive test suite.
    • Gets new features and bug fixes easily. Awesome command-line apps aren’t awesome just to use; they are awesome to hack on. An awesome app’s internal structure is geared around quickly fixing bugs and easily adding new features.
    • Delights users. Not all command-line apps have to output monochrome text. Color, formatting, and interactive input all have their place and can greatly contribute to the user experience of an awesome command-line app.
  • Three guiding principles for designing command-line applications:
    • Make common tasks easy to accomplish
    • Make uncommon tasks possible (but not easy)
    • Make default behavior nondestructive
  • Options come in two forms: short and long. Short-form options allow frequent users who use the app on the command line to quickly specify things without a lot of typing. Long-form options allow maintainers of systems that use our app to easily understand what the options do without having to go to the documentation. The existence of a short-form option signals to the user that that option is common and encouraged. The absence of a short-form option signals the opposite: that using it is unusual and possibly dangerous. You might think that unusual or dangerous options should simply be omitted, but we want our application to be as flexible as is reasonable. We want to guide our users to do things safely and correctly, but we also want to respect that they know what they’re doing if they want to do something unusual or dangerous. (There’s a small sketch of this convention after this list.)
  • Thinking about which behavior of an app is destructive is a great way to differentiate the common things from the uncommon things and thus drive some of your design decisions. Any feature that does something destructive shouldn’t be a feature we make easy to use, but we should make it possible.
  • The future of development won’t just be manipulating buttons and toolbars and dragging and dropping icons to create code; the efficiency and productivity inherent to a command-line interface will always have a place in a good developer’s tool chest.
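
As a concrete illustration of that short-form/long-form signal, here’s a minimal sketch using Ruby’s standard OptionParser (the backup app and its options are my own hypothetical example, not from the book):

    require 'optparse'

    options = { verbose: false, force: false }

    OptionParser.new do |opts|
      opts.banner = "Usage: backup [options]"

      # Common, encouraged option: gets both a short and a long form
      opts.on("-v", "--verbose", "Show detailed output") do
        options[:verbose] = true
      end

      # Unusual, destructive option: long form only, signaling caution
      opts.on("--force", "Overwrite existing backups without asking") do
        options[:force] = true
      end
    end.parse!

    puts "verbose=#{options[:verbose]}, force=#{options[:force]}"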