6 Curious New Tools to Try for Writing Automated Checks for Browser Apps

While I don’t find myself writing a lot of browser-based automated checks these days, I’m still on the lookout for interesting new tools in that space. The reason: a new tool might solve an existing problem I have with setting up such a test suite from scratch, or offer a solution for curious use cases I’ve never encountered before. Ruby and Watir are sufficient for the common browser tests I write, but a new tool could be a better fit for another project.

Here’s a list of such tools that popped up in my feed in recent months:

  • Cypress. What I like about Cypress, aside from the standalone package installation option and the built-in pretty test report page, is that the pre-defined browser tests the Cypress team runs on its own site are included out of the box. That made it easy for me to write custom tests: I just had to search for an example of what I wanted to do, copy it into my own test, and update the parts that needed changing. Tests are written in JavaScript. I have yet to try running the tests via the terminal though, which is important when running tests on a CI server. Their test runner is free for all projects; there is, however, a pricing plan for their dashboard service, which helps keep test recordings private.
  • Katalon Studio. This is a full-blown automation solution that is completely free, with a pricing plan for business support services. The record-and-playback feature built into the tool failed to impress me when I ran it through our legacy apps, but perhaps writing the actual test code through their GUI fares better (though that means a steep learning curve for people like me who prefer the CLI and personally configured IDEs).
  • Puppeteer. Built to control Google’s headless Chrome or Chromium browser over the DevTools protocol. Tests are written in JavaScript. Easy to try and get into using their web playground. Alister Scott has tried running it with Mocha and Circle CI on a demo project.
  • Chromeless. Similar to Puppeteer, but built to automate an army of Chrome browsers running in parallel. It gives us the option to run tests on AWS Lambda too. Again, tests are written in JavaScript, which we can try on their demo playground.
  • Laravel Dusk. This gives PHP developers familiar with Laravel the ability to write and run their own browser app tests, using a programming language they’re much accustomed to.
  • Appraise. Similar to BackstopJS, a tool for visually validating browser apps. Tests are written in Markdown.

Running Makefile Tasks On Windows OS

In an ongoing software development project we are using Makefile tasks to make long and repetitive commands as easy and as pleasant to run as possible. They’re like bash aliases: shortcuts for recurrent jobs we frequently have to do while writing new code or testing applications. For example, we could define a task that runs unit tests and code standard checks on an application running in a Docker container like so:

  test:
    echo "Checking application code with PSR2 standards ..."
    docker-compose exec -T php phpcs -v --standard=phpcs.xml ./app/src
    echo "Running unit tests ..."
    docker-compose exec -T php phpunit --colors=always --configuration ./app

and we would run the task with only the following command:

  make test

Cool, right? I don’t have to remember all the exact commands to do what I need to do. And even if I forget the right task name (in this case, make test), I can just run the make command in the CLI and be shown a list of the tasks that I can use for the project.
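That listing behavior isn’t built into make itself; running make with no arguments simply executes the first target defined in the Makefile, so a help target placed at the top of the file can serve that purpose. A minimal sketch of such a target (the task name and description below are just examples) could look like this:

  # The @ prefix keeps make from printing the echo commands themselves
  help:
    @echo "Available tasks:"
    @echo "  make test    - run code standard checks and unit tests"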

Now Makefile tasks will run on Unix terminals out of the box. For Windows however, we still have to do some setup before Makefile tasks can run. For my machine at work, I did the following:

  • Download and install GnuWin32
  • Go to the install folder C:\Program Files (x86)\GnuWin32\bin
  • Copy all files inside the bin folder to the root project directory (libiconv2.dll, libintl3.dll, make.exe)
  • Add the installation bin directory to the system Path environment variable

There are other tools we can use to get Makefile tasks running on Windows, but this is a quick and easy way to do it. After that we can run make.exe test on the default cmd CLI, while on Unix-like terminals such as the Docker Quickstart Terminal we can simply use make test.
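A quick way to confirm the setup works is to open a fresh terminal and ask make for its version:

  make --version

If it prints the GNU Make version details, the Makefile tasks should run as expected.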

Using a Git Pre-Commit Hook for Automatic Linting, Unit Testing, and Code Standards Checking of Application Code

Problem: I want to automatically run unit tests, lint the application code, and check its state against team standards every time I try to commit my changes to a project. It would be nice if the commit aborted whenever any of the existing tests fail or I did not follow a particular standard the team agrees to uphold, and went through if there are no errors. If possible, I don’t want to have to change anything in my software development workflow.

Solution: Use a Git pre-commit hook. Under the .git/hooks hidden folder in the project directory, create a new file called pre-commit (without any file extension) containing something like the following bash script (for testing PHP code):

#!/bin/bash

stagedFiles=$(git diff --cached --name-only --diff-filter=ACM | grep "\.php$");
errorMessage="Please correct the errors above. Commit aborted."

printf "Linting and checking code standards ...\n"
for file in $stagedFiles
do
  php -l "$file"
  LINTVAL=$?
  if [[ $LINTVAL != 0 ]]
  then
    printf "%s\n" "$errorMessage"
    exit 1
  fi
  php core/phpcs.phar --colors --standard=phpcs.xml "$file"
  STANDVAL=$?
  if [[ $STANDVAL != 0 ]]
  then
    printf "%s\n" "$errorMessage"
    exit 1
  fi
done

printf "Running unit tests ...\n"
core/vendor/bin/phpunit --colors="always" [TESTS_DIRECTORY]
TESTSVAL=$?
if [[ $TESTSVAL != 0 ]]
then
  printf "%s\n" "$errorMessage"
  exit 1
fi

where

  • linting and code standard checks run only for the files whose changes you want to commit
  • code standard checks are based on a certain phpcs.xml file
  • unit tests inside a particular TESTS_DIRECTORY will run
  • the commit will abort whenever any of the lint checks, code standard checks, or unit tests fail
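One more thing to remember: Git only runs the hook if the file is executable, so on Unix-like terminals we also have to mark it as such:

  chmod +x .git/hooks/pre-commit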

Favorite Talks from Sauce Conference 2017

The European Selenium Conference is underway in Berlin, but before the talks for that event get uploaded online I thought I should finish binge-watching this year’s Sauce Conference videos first. Many of the talks in SauceConf 2017 are actually similar in feel to those I’ve watched from previous Selenium Conferences, except that a number of them point out how awesome Sauce Labs‘ service has been for their cross-browser and cross-platform testing needs. Not that I mind or disagree, even if I have not used Sauce Labs extensively. Test automation in the web application space has been mostly stable in recent years, and that’s a good thing. Now we are left with discussing the consequences of the automated systems we have put in place and focusing on other areas which need further improvement.

Anyway, here are my favorite talks from the conference:

Basic API Testing with PHP’s HTTP Client Guzzle

I like writing test code in Ruby. It’s a preference; I feel I write easier-to-read and easier-to-maintain code in it than in Java, the programming language I started with when learning to write automated checks. We use PHP in building apps though. So even if I’m free to choose the programming language I work with, sometimes I think about how to replicate my existing test code in PHP, because maybe sometime in the future the developers will have an interest in doing what I do for themselves. If I know how to rewrite my test code in a programming language they are familiar with, then I can help them with that.

In today’s post, I’m sharing some notes about what I found works when building simple API tests with PHP’s HTTP client Guzzle.

To start with, we have to install the necessary dependencies. One way to do that for PHP projects is through Composer, for which we’ll have a composer.json file in the root directory. I have mine set up with the following:

{
     "require-dev": {
          "behat/behat": "2.5.5",
          "guzzlehttp/guzzle": "~6.0",
          "phpunit/phpunit": "^5.7"
     }
}
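With that file in place, installing the dependencies (including the dev-only ones above) takes a single command from the project root:

composer install

Composer downloads the packages into a vendor folder and generates an autoloader for them.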

Using Guzzle in code, often in combination with Behat, we’ll have something like this:

use Behat\Behat\Tester\Exception\PendingException;
use Behat\Behat\Context\Context;
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;
use GuzzleHttp\Client;

class FeatureContext extends PHPUnit_Framework_TestCase implements Context, SnippetAcceptingContext
{
    // test code here
}

where test steps will become functions inside the FeatureContext class. Our API tests will live inside such functions.

Here’s an example of a GET request, where we can check if some content is displayed on a page:

/**
* @Given a sample GET API test
*/
public function aSampleGET()
{
     $client = new Client();
     $response = $client->request('GET', 'page_URL');
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}

For making requests to a secure (HTTPS) site where we need to supply our own CA bundle, we’ll have to update the sample request method to:

request('GET', 'page_URL', ['verify' => 'cacert.pem']);

where cacert.pem is a certificate file in the project’s root directory. We can of course change the file location if we please.
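If we’re pointing the tests at a local or staging environment that uses a self-signed certificate, Guzzle also accepts false for the same option to skip verification entirely, which is best kept out of anything production-facing:

request('GET', 'page_URL', ['verify' => false]);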

Now here’s an example of a POST request, where we are submitting information to a page and verifying the application’s behavior afterwards:

/**
* @Given a sample POST API test
*/
public function aSamplePOST()
{
     $client = new Client(['cookies' => true]);
     $response = $client->request('POST', 'page_URL', ['form_params' => [
          'param_1' => 'value_1',
          'param_2' => 'value_2'
     ]]);
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}

This is a basic POST request. You may notice that I added a cookies parameter when initializing the Guzzle client this time. That’s because I wanted the cookies from the initial request to be reused in succeeding requests. We can remove that if we want to.

There’s a trickier kind of POST request, one where we need to upload a file (often an image or a document) as a parameter to the request. We can do that with a multipart request:

/**
* @Given a sample POST Multipart API test
*/
public function aSampleMultipartPOST()
{
     $client = new Client(['cookies' => true]);
     $response = $client->request('POST', 'page_URL', ['multipart' => [
          [
               'name' => 'param_1',
               'contents' => 'value_1'
          ],
          [
               'name' => 'param_2_file',
               'contents' => fopen('file_location', 'r')
          ]
     ]]);
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}

and use whatever document or image we have on our machine. We just need to specify the correct location of the file we want to upload.
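Checking the response body isn’t the only assertion we can make; the response object also exposes the HTTP status code, so a quick sketch of a status check (reusing the same placeholder page_URL) looks like this:

/**
* @Given a sample GET status code check
*/
public function aSampleStatusCodeCheck()
{
     $client = new Client();
     $response = $client->request('GET', 'page_URL');
     $this->assertEquals(200, $response->getStatusCode());
}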

Building a Docker Image of Existing Test Code, with Dependencies Automatically Installed

When I first tried Docker a couple of years back, I did not find it much different from using a virtual machine. Perhaps because I was experimenting with it on Windows, or perhaps it was still a relatively new app back then. I remember not having a pleasant experience installing and running it on my machine, and at the time it was just easier to run and debug Selenium tests on a VM.

I tried again recently.

Building a Docker image containing test code with dependencies automatically installed, with Docker Toolbox on Windows 7

And I was both surprised and delighted to be able to build a Docker image with existing test code and its dependencies automatically installed, right out of the box. This is very promising; I can now build development environments or tools that can run on any machine I own, or share them with teams. To use them we just need to install Docker and download the shared image. No more setup problems! Of course, there’s still a lot to test – we’ll probably want the image to be slim in size, to automatically update test code from a remote repository, among other cool things. I’ll try those next.

Here’s what the Dockerfile looks like:

FROM ruby:latest
RUN mkdir /usr/src/app
ADD . /usr/src/app/
WORKDIR /usr/src/app/
RUN gem install bundler
RUN bundle install

Short and easy to follow. Then we build the image by running the following command on the terminal (from the root project directory):

docker build -t [desired_image_name] .

To run and access the image as a container:

docker run -i -t [image_name]:[tag_name] /bin/bash

And from there we can run our Cucumber tests inside the container the same way we do on our local machine.
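For example, assuming the project’s Gemfile lists the cucumber gem (which the bundle install step took care of during the image build), the tests can be kicked off inside the container with:

bundle exec cucumber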

Questions for Recruiting Employers I

Having received a handful of calls these past few weeks, I’ve realized that I’m now thinking more about learning opportunities than only looking at an offer’s compensation package. Salary still matters of course, but there’s more to a day job than just allocating money for expenses and savings. Work wouldn’t be fun if there were only menial tasks to do. What would I learn if I took an offer in exchange for my services? Will I be doing something I’ve never done or tried before? Is this work meaningful, both for our customers and for me as an individual?

And some more questions for recruiting employers off the top of my head:

  • How does the existing software development and testing process work in the organization?
  • Are testers embedded into software teams? Or do they work in silos, away from the developers?
  • What does a software team look like, and what is it composed of?
  • What does the day-to-day work look like for the software tester?
  • How many testers are there currently in the organization? And what’s the existing ratio of developers to testers?
  • What technologies does the organization currently use for testing? Are there automated checks? Is there an existing CI system?
  • Does the organization use containers? Visual testing? Machine learning?
  • Who are the organization’s actual clients? Whose lives are we trying to improve on a daily basis?
  • How do the software teams get feedback from the people who use their applications?