6 Curious New Tools to Try for Writing Automated Checks for Browser Apps

While I don’t find myself writing a lot of browser-based automated checks these days, I’m still on the lookout for interesting new tools in that space. The reason: a new tool might solve an existing problem I have with setting up such a test suite from scratch, or offer a solution for curious use cases I’ve never encountered before. While Ruby and Watir together are sufficient for the browser tests I commonly write, a new tool could be a better fit for another project.

Here’s a list of such tools that popped up in my feed in recent months:

  • Cypress. What I like about Cypress, aside from the standalone package installation option and the built-in pretty test report page, is that the pre-defined browser tests the Cypress team runs on its own site are included out of the box. That made it easy for me to write custom tests; I just had to search for an example of what I wanted to do, copy it into my own test, and update the parts that needed changing. Tests are written in JavaScript. I have yet to try running the tests via the terminal though, which is important when running tests on a CI server (there’s a quick sketch of one way to do that after this list). Using their test runner is free for all projects; however, there is a pricing plan for their dashboard service, which helps keep test recordings private.
  • Katalon Studio. This is a full-blown automation solution that is completely free; there’s a pricing plan for business support services. The record-and-playback feature built into the tool failed to impress me when I ran it through our legacy apps, but perhaps writing the actual test code through their GUI fares better (though for people like me who prefer the CLI and a personally configured IDE, there will be a high learning curve).
  • Puppeteer. Built to control Google’s headless Chrome or Chromium browser over the DevTools protocol. Tests are written in JavaScript, and it’s easy to try out via their web playground. Alister Scott has tried running it with Mocha and Circle CI on a demo project.
  • Chromeless. Similar to Puppeteer, but built to automate an army of Chrome browsers running in parallel. It gives us the option to run tests on AWS Lambda too. Again, tests are written in JavaScript, and we can try it out on their demo playground.
  • Laravel Dusk. This gives PHP developers familiar with Laravel the ability to write and run their own browser app tests, using a programming language they’re already accustomed to.
  • Appraise. Similar to BackstopJS, a tool for visually validating browser apps. Tests are written in Markdown.
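
As for the Cypress-via-terminal question above, here’s what I’d likely try first (a sketch only, I haven’t actually run this yet): install the package with npm and call the CLI binary it ships with.

npm install --save-dev cypress

# open the interactive test runner locally
./node_modules/.bin/cypress open

# run the tests headlessly from the terminal, e.g. on a CI server
./node_modules/.bin/cypress run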

Using a Git Pre-Commit Hook for Automatic Linting, Unit Testing, and Code Standards Checking of Application Code

Problem: I want to automatically run unit tests, lint the application code, and check its state against team standards every time I try to commit my changes to a project. The commit should abort if any of the existing tests fails or if I did not follow a standard the team has agreed to uphold, and push through if there are no errors. If possible, I shouldn’t have to change anything in my software development workflow.

Solution: Use a Git pre-commit hook. Under the .git/hooks hidden folder in the project directory, create a new file called pre-commit (without any file extension) containing something like the following bash script (for testing PHP code):

#!/bin/bash

# Collect the staged PHP files so that only files included in this commit are checked
stagedFiles=$(git diff --cached --name-only --diff-filter=ACM | grep "\.php$")
errorMessage="Please correct the errors above. Commit aborted."

printf "Linting and checking code standards ...\n"
for file in $stagedFiles
do
  # Syntax-check (lint) the staged file
  php -l "$file"
  LINTVAL=$?
  if [[ $LINTVAL != 0 ]]
  then
    printf "%s\n" "$errorMessage"
    exit 1
  fi
  # Check the staged file against the team's coding standards
  php core/phpcs.phar --colors --standard=phpcs.xml "$file"
  STANDVAL=$?
  if [[ $STANDVAL != 0 ]]
  then
    printf "%s\n" "$errorMessage"
    exit 1
  fi
done

printf "Running unit tests ...\n"
core/vendor/bin/phpunit --colors="always" [TESTS_DIRECTORY]
TESTSVAL=$?
if [[ $TESTSVAL != 0 ]]
then
  printf "%s\n" "$errorMessage"
  exit 1
fi

where

  • linting and code standard checks run only for the PHP files staged for commit
  • code standard checks are based on the rules defined in a phpcs.xml file
  • unit tests inside a particular TESTS_DIRECTORY will run
  • the commit will abort whenever any lint, code standard check, or unit test fails
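
One more detail worth remembering: on Unix-like systems Git only runs hooks that are marked executable, so after creating the file we also need to run (from the project’s root directory):

chmod +x .git/hooks/pre-commit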

Favorite Talks from Sauce Conference 2017

The European Selenium Conference is underway in Berlin, but before the talks for that event get uploaded online I thought I should finish binge-watching this year’s Sauce Conference videos first. Many of the talks at SauceConf 2017 actually feel similar to those I’ve watched at previous Selenium Conferences, except that a number of them point out how awesome Sauce Labs’ service has been for their cross-browser and cross-platform testing needs. Not that I mind or disagree, even though I have not used Sauce Labs extensively. Test automation in the web application space has been mostly stable in recent years, and that’s a good thing. Now we are left with discussing the consequences of the automated systems we have put in place and focusing on other areas which need further improvement.

Anyway, here are my favorite talks from the conference:

Three Recommended Paid Software Testing Online Courses

Free online courses are nice, but some paid ones are just better at delivering value, especially when the instructors take great care to build the best possible content for their audience. Free is always an option, but the problem with free is that we don’t necessarily have to give anything back in return for the service; we can do nothing and that’s okay. With paid courses, however, I find myself more motivated to take in everything I can. I focus more, and I believe that helps me learn better.

Here are three recommended paid software testing courses I’ve taken recently, from which I’ve learned a great deal:

Basic API Testing with PHP’s HTTP Client Guzzle

I like writing test code in Ruby. It’s a preference; I feel I write easier-to-read and easier-to-maintain code in it than in Java, the programming language I started with when learning to write automated checks. We use PHP to build our apps though. So even if I can choose which programming language to work with, I sometimes think about how to replicate my existing test code in PHP, because maybe sometime in the future the team will have an interest in doing what I do for themselves. If I know how to rewrite my test code in a programming language they are familiar with, then I can help them with that.

In today’s post, I’m sharing some notes on what worked for me when building simple API tests with Guzzle, PHP’s HTTP client.

To start with, we have to install the necessary dependencies. One way to do that for PHP projects is through Composer, for which we’ll have a composer.json file in the project’s root directory. I have mine set up with the following:

{
     "require-dev": {
          "behat/behat": "2.5.5",
          "guzzlehttp/guzzle": "~6.0",
          "phpunit/phpunit": "^5.7"
     }
}
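
With that file in place, and assuming Composer itself is already installed on the machine, pulling in the dependencies (including the dev-only ones above) is a matter of running the following from the project’s root directory; Composer then creates a vendor folder holding the packages and their executables, such as phpunit and behat:

composer install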

Using Guzzle in code, often in combination with Behat, we’ll have something like this:

use Behat\Behat\Tester\Exception\PendingException;
use Behat\Behat\Context\Context;
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;
use GuzzleHttp\Client;

class FeatureContext extends PHPUnit_Framework_TestCase implements Context, SnippetAcceptingContext
{
     // test code here
}

where test steps will become functions inside the FeatureContext class. Our API tests will live inside such functions.

Here’s an example of a GET request, where we can check if some content is displayed on a page:

/**
 * @Given a sample GET API test
 */
public function aSampleGET()
{
     $client = new Client();
     $response = $client->request('GET', 'page_URL');
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}

For making requests to a secure site, we’ll have to update the sample request method to:

$response = $client->request('GET', 'page_URL', ['verify' => 'cacert.pem']);

where cacert.pem is a CA certificate bundle located in the project’s root directory. We can of course change the file location if we please.

Now here’s an example of a POST request, where we submit information to a page and verify the application’s behavior afterwards:

/**
 * @Given a sample POST API test
 */
public function aSamplePOST()
{
     $client = new Client(['cookies' => true]);
     $response = $client->request('POST', 'page_URL', ['form_params' => [
          'param_1' => 'value_1',
          'param_2' => 'value_2'
     ]]);
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}

This is a basic POST request. You may notice that I added a cookies parameter when initializing the Guzzle client this time. That’s because I wanted the cookies from the initial request to be reused in succeeding requests. We can remove that if we want to.

There’s a trickier kind of POST request, one where we need to upload a file (often an image or a document) as a parameter of the request. We can do that like so:

/**
 * @Given a sample POST Multipart API test
 */
public function aSampleMultipartPOST()
{
     $client = new Client(['cookies' => true]);
     $response = $client->request('POST', 'page_URL', ['multipart' => [
          [
               'name' => 'param_1',
               'contents' => 'value_1'
          ],
          [
               'name' => 'param_2_file',
               'contents' => fopen('file_location', 'r')
          ]
     ]]);
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}

and use whatever document or image we have on our machine. We just need to specify the correct location of the file we want to upload.
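
As for running these checks, here’s a minimal sketch assuming Composer’s default vendor directory and a standard Behat setup (a behat.yml configuration plus feature files whose steps map to the functions above); we simply call the Behat executable from the project root:

vendor/bin/behat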

Building a Docker Image of Existing Test Code, with Dependencies Automatically Installed

When I first tried Docker a couple of years back, I did not find it much different from using a virtual machine. Perhaps that was because I was experimenting with it on Windows, or perhaps it was still a relatively new tool back then. I remember not having a pleasant experience installing and running it on my machine, and at the time it was just easier to run and debug Selenium tests on a VM.

I tried again recently.

Building a Docker image containing test code with dependencies automatically installed, with Docker Toolbox on Windows 7

And I was both surprised and delighted to be able to build a Docker image with existing test code and its dependencies automatically installed, right out of the box. This is very promising; I can now build development environments or tools which can run on any machine I own, or for teams. To use them, we just need to install Docker and download the shared image. No more setup problems! Of course, there’s still a lot to test – we’ll probably want an image that’s slim in size, that automatically updates test code from a remote repository, among other cool things. I’ll try those next.

Here’s what the Dockerfile looks like:

FROM ruby:latest
RUN mkdir /usr/src/app
ADD . /usr/src/app/
WORKDIR /usr/src/app/
RUN gem install bundler
RUN bundle install

Short and easy to follow. Then we build the image by running the following command in the terminal (from the project’s root directory):

docker build -t [desired_image_name] .

To run and access the image as a container:

docker run -i -t [image_name]:[tag_name] /bin/bash

And from there we can run our Cucumber tests inside the container the same way as we do on our local machine.
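
We could also skip the interactive shell and execute the tests in one go, which would be handy on a CI server, since the image’s WORKDIR already points at the test code. A minimal sketch, assuming the suite is run through Bundler (the --rm flag simply cleans up the container after the run finishes):

docker run --rm [image_name]:[tag_name] bundle exec cucumber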

There Are Many Interesting Things I’d Like to Try

There are a lot of interesting things in the software development and testing space, lots of stuff to study and learn. The past few years I spent learning to build automated checks in Java and Ruby have been a fun ride, and I’ve learned a great deal about the concepts, benefits, and pitfalls of automation. Now I’m at that point where I’m thinking about what other areas I’d like to study next. There’s JavaScript – which I’ve recently started learning through freeCodeCamp’s ‘Beau teaches JavaScript’ playlist and James Shore’s ‘Let’s Code Test-Driven JavaScript’ subscription course – because with it I can quickly write small tools that run in the browser. After digesting the basics, I’ll likely spend more months understanding the TDD process and building something, hopefully becoming proficient before the year ends.

But there are some other intriguing experiments I’d like to try, some of which are:

Maybe you’d like to test them too. 🙂