Cypress: Stubbing Network Requests with cy.route()

I very much enjoy testing web apps by simulating their network requests in code. It lets me visit pages, log in, and replicate functionality, all without a browser, which is a slightly different sort of testing than many testers are accustomed to. I love exploring what’s happening under the hood when we click elements and submit forms, playing with cookies and payloads, and finding out the bare minimum of data I need to pass through HTTP requests to recreate a particular user behavior. People often do this kind of testing with Postman, but I’ve grown accustomed to implementing such tests in Ruby with the rest-client gem. Recently, though, I looked at how Cypress plays with network requests, curious about how it takes things further with its built-in request stubbing feature, cy.route(), because I had never tried stubbing before.

First, some context on HTTP requests:

  • GET requests often simulate visiting (or redirecting to) a web page or retrieving a resource (like an image or another file)
  • POST requests often simulate a form submission, like logins or payments, and as such deal with passing the user’s input in order to proceed to the next application state
  • There are other types of HTTP requests, but mastering how these two work is enough at the start

And here is a sketch of how Cypress can perform such a GET request through cy.request(), with a made-up URL:
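cy.request({
  method: 'GET',
  url: 'https://example.com/profile'  // hypothetical endpoint to retrieve
}).then((response) => {
  // a simple check that the page or resource came back successfully
  expect(response.status).to.eq(200)
})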

and a similar sketch for a POST request, with made-up credentials and headers:
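cy.request({
  method: 'POST',
  url: 'https://example.com/login',     // hypothetical endpoint
  body: {
    username: 'some_user',              // the inputs we want to submit
    password: 'some_password'
  },
  headers: {
    'Content-Type': 'application/json'  // plus any session-related headers we need
  }
})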

Pretty straightforward and easy to follow. Notice that POST requests carry more information than GET requests, since we’re passing data: the body field holds the user’s inputs, while the headers field carries things like the user session. Of course, both requests need a url field, some place to send the request to.

And when we send a request, we receive a response. That response tells us how the web application behaved after the request: was the user redirected to another page? was the user able to log in? did an expected web element get displayed or hidden? were we sent to an error page, perhaps?

Cypress takes network requests further by introducing routing to testing. Here’s a sketch of an example, using cy.server() and cy.route() against a made-up login form:
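cy.server()

// stub any POST request to /login and force a failed response
cy.route({
  method: 'POST',
  url: '/login',
  response: { success: false }
}).as('login')

// trigger the request through the UI, then wait for the stubbed route
cy.get('#submitButton').click()
cy.wait('@login')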

What the above code says is that we want to use Cypress as a server and wait and listen for a POST request going to the /login URL after a submit button (with an id of #submitButton) is clicked, and then respond with a { success: false } result. This means that the actual response from our application at that URL is taken over by a fake response we designed ourselves. This is what stubbing a network request looks like.

Now why would we want to do this? Some reasons:

  • We want to check how an application behaves for scenarios where a request fails to reach the application server. Do we redirect the user? Or do we show an error popup? Or does the application also work offline? To do this without stubbing, we would need some help from a programmer to shut down the app server at the right time after we perform the scenario.
  • We want to speed up tests by stubbing the responses of some requests with less data than the actual responses deliver. We can even have the response data be empty, if we don’t need that specific data for a test.
  • We want to see what happens to the application when it receives an incorrect response value from a request.

This is one thing I am loving about Cypress. Out of the box, it lets me play with network requests alongside testing the user interface, and invites me to tinker with them some more.


Dockerizing Our Legacy Apps: Some Notes

I spent the recent weeks of January tinkering with Docker on both Windows 7 and Mac OS. I played with it a lot because I thought it would be useful for a grand new project we have at work, and I figured that integrating our legacy application code with it would help me learn it more. And the exercise did help me understand the tool better, including some nuances around application performance and database connections. I was able to dockerize our legacy apps too! 🙂

Some notes to remember related to the exercise:

  • Windows 7 Docker Toolbox and Docker for Mac both have a performance issue with volume mounts through docker-compose. Legacy apps composed of a large number of files (especially with dependency directories) will run, but they will be painfully slow out of the box. Fortunately for Docker for Mac users, docker-sync offers an effective workaround for this problem: it runs an in-sync container for the application code, separate from the docker-compose file. Unfortunately, I have not found any workaround for this performance issue for Windows 7 (and perhaps Windows 8) Docker Toolbox users.
  • Often we have to update the host file of our machine so that we can run applications locally using a distinct, easy-to-remember URL in a browser. This means we need to add extra hosts to the necessary Docker containers too if we dockerize our apps. We can do this with the extra_hosts option in docker-compose, as shown in the sketch after this list.
  • The official PHP Docker image does not include the pdo, pdo_pgsql, and pgsql extensions, which handle the connection between the application and the PostgreSQL database. To install those extensions, we need a custom Dockerfile and build it from the docker-compose file with the build and context options, also sketched after this list.
  • Sometimes we need to copy the PostgreSQL DB files from a running container in order to set up a proper volume mount of database data from host to container. We can copy that data with the convenient docker cp <source> <destination> command. This worked for me in Docker for Mac. However, for Windows 7 Docker Toolbox users, a Docker container is unable to use such copied data, perhaps because of the difference in OS between host and container, so I had to resort to restoring and backing up data every time I start and stop my application containers.
  • As a tester, what Docker gives me is a convenient way to test all sorts of interesting application configurations on a single machine, to see whether the apps break if I change some service config, and to find out which configurations work and which don’t. I can add or remove a service, or update an existing service to a new version, like bumping PHP from 5.6 to 7.1, and immediately see what impact that has on the apps themselves. These kinds of tests are often left to operations engineers, but I’m glad there is now a way to run them on my own machine, before application changes even reach a dedicated testing server.
  • Even if Docker makes it easy to set up an application development environment from scratch with docker-compose and Dockerfiles, it is still important to maintain a wiki of the machine configurations a programmer needs in order to reset or build the apps with only a command or two: subtle things like custom docker-compose files, .env and php.ini files, host files, Nginx configs, and shell scripts or make commands that turn long docker commands into shortcuts.
  • Makefile tasks can help put specific scripts into an easy-to-remember command with context. I assume Rakefile does the same thing in Ruby, or Jakefile in JavaScript.
  • Dockerizing our legacy apps pushed me into a discussion with programmers about the ways they run said applications on their machines. Most of them actually just test code changes directly on Staging or another available development server. That says a lot about one of our habits as a development team, and is likely the reason why our apps are a pain to set up locally.
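As mentioned in the notes above, here is a rough sketch of the docker-compose and Dockerfile pieces involved; the service name, hostname, IP address, and PHP version here are made up for illustration.

version: '2'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    extra_hosts:
      # hypothetical local URL mapped to the Docker Toolbox VM address
      - "myapp.local:192.168.99.100"

And the Dockerfile that the build section points to, which bakes the PostgreSQL drivers into the official PHP image:

FROM php:7.1-fpm
# libpq-dev is needed to compile the pgsql extensions
RUN apt-get update && apt-get install -y libpq-dev \
    && docker-php-ext-install pdo pdo_pgsql pgsql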

6 Curious New Tools to Try for Writing Automated Checks for Browser Apps

While I don’t find myself writing a lot of browser-based automated checks these days, I am still on the lookout for interesting new tools in that space. The reason: a new tool might solve an existing problem I have with setting up such a testing suite from scratch, or offer a solution for curious use cases I’ve never encountered before. While Ruby and Watir together are sufficient for the common browser tests I write, a new tool could be a better fit for another project.

Here’s a list of such tools that popped up in my feed in recent months:

  • Cypress. What I like about Cypress, aside from the standalone package installation option and the built-in pretty test report page, is that the pre-defined browser tests the Cypress team runs on its own site are included out of the box. That made it easy for me to write custom tests; I just had to search for an example of what I wanted to do, copy-paste it into my own test, and update the parts that needed changing. Tests are written in JavaScript. I have yet to try running the tests via the terminal though, which is important when running tests on a CI server. Their test runner is free for all projects, but there is a pricing plan for the dashboard service, which helps keep test recordings private.
  • Katalon Studio. This is a full-blown automation solution that is completely free, with a pricing plan for business support services. The record-and-playback feature built into the tool failed to impress me when I ran it against our legacy apps, but perhaps writing the actual test code through their GUI fares better (though that has a high learning curve for people like me who prefer the CLI and personally configured IDEs).
  • Puppeteer. Built to control Google’s headless Chrome or Chromium browser over the DevTools protocol. Tests are written in JavaScript. Easy to try and get into using their web playground. Alister Scott has tried running it with Mocha and CircleCI on a demo project.
  • Chromeless. Similar to Puppeteer, but built to automate an army of Chrome browsers running in parallel. It gives us the option to run tests on AWS Lambda too. Again, tests are written in JavaScript, and we can try it on their demo playground.
  • Laravel Dusk. This gives PHP developers familiar with Laravel the ability to write and run their own browser app tests, using a programming language they’re much accustomed to.
  • Appraise. Similar to BackstopJS, a tool for visually validating browser apps. Tests are written in Markdown.

Running Makefile Tasks On Windows OS

In an ongoing software development project we are using Makefile tasks to make long and repetitive commands as easy and as fun as possible to run. They’re like bash aliases: shortcuts for the recurring jobs we frequently have to do while writing new code or testing applications. For example, we could define a task that runs unit tests and code standard checks on an application running in a Docker container like so:

  test:
    echo "Checking application code with PSR2 standards ..."
    docker-compose exec -T php phpcs -v --standard=phpcs.xml ./app/src
    echo "Running unit tests ..."
    docker-compose exec -T php phpunit --colors=always --configuration ./app

and we would run the task with only the following command:

  make test

Cool, right? I don’t have to remember all the exact commands to do what I need to do. And even if I forget the right task name (in this case, make test), I can just run the make command in the CLI and be shown a list of the tasks I can use for the project.
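Listing the tasks like that is not something make does on its own; it usually comes from a help-style default target in the Makefile. A minimal sketch of one possible version (the target name and the grep pattern are assumptions, not part of our actual Makefile):

  .DEFAULT_GOAL := help

  help:
    @echo "Available tasks:"
    @grep -E '^[a-zA-Z_-]+:' Makefile | cut -d ':' -f 1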

Now Makefile tasks will run on Unix terminals out of the box. For Windows however, we still have to do some setup before Makefile tasks can run. For my machine at work, I did the following:

  • Download and install GnuWin32
  • Go to the install folder C:\Program Files (x86)\GnuWin32\bin
  • Copy all files inside the bin folder to the root project directory (libiconv2.dll, libintl3.dll, make.exe)
  • Add the installation bin directory to the Path system environment variable

There are other tools we can use to get Makefile tasks running on Windows, but this is a quick and easy way to do it. After that we can run make.exe test on the default cmd CLI, while on Unix-like terminals such as the Docker Quickstart Terminal we can use make test as usual.

Using a Git Pre-Commit Hook for Automatic Linting, Unit Testing, and Code Standards Checking of Application Code

Problem: I want to automatically run unit tests, lint the application code, and check its state against team standards every time I try to commit my changes to a project. It would be nice if the commit aborts when any of the existing tests fail or when I did not follow a particular standard the team agreed to uphold, and pushes through if there are no errors. If possible, I don’t want to change anything in my software development workflow.

Solution: Use a Git pre-commit hook. Under the .git/hooks hidden folder in the project directory, create a new file called pre-commit (without any file extension, and made executable with chmod +x .git/hooks/pre-commit) containing something like the following bash script (for testing PHP code):

#!/bin/bash

# Collect the staged PHP files so we only lint and check what is about to be committed
stagedFiles=$(git diff --cached --name-only --diff-filter=ACM | grep "\.php$");
errorMessage="Please correct the errors above. Commit aborted."

printf "Linting and checking code standards ...\n"
for file in $stagedFiles
do
  # Syntax-check each staged file with PHP's built-in linter
  php -l "$file"
  LINTVAL=$?
  if [[ $LINTVAL != 0 ]]
  then
    printf "%s\n" "$errorMessage"
    exit 1
  fi
  # Check the file against the team's coding standard
  php core/phpcs.phar --colors --standard=phpcs.xml "$file"
  STANDVAL=$?
  if [[ $STANDVAL != 0 ]]
  then
    printf "%s\n" "$errorMessage"
    exit 1
  fi
done

printf "Running unit tests ...\n"
core/vendor/bin/phpunit --colors="always" [TESTS_DIRECTORY]
TESTSVAL=$?
if [[ $TESTSVAL != 0 ]]
then
  printf "%s\n" "$errorMessage"
  exit 1
fi

where

  • linting and code standard checks only run for the files whose changes you want to commit
  • code standard checks are based on a certain phpcs.xml file
  • unit tests inside a particular TESTS_DIRECTORY will run
  • the commit will abort whenever any of the lints, code standard checks, or unit tests fail

Basic API Testing with PHP’s HTTP Client Guzzle

I like writing test code in Ruby. It’s a preference; I feel I write easier-to-read and easier-to-maintain code in it than in Java, the programming language I started with when learning to write automated checks. We build our apps in PHP though. So even if I can choose which programming language to work with, sometimes I think about how to replicate my existing test code in PHP, because maybe sometime in the future our developers will want to do what I do for themselves. If I know how to rewrite my test code in a programming language they are familiar with, then I can help them with that.

In today’s post, I’m sharing some notes on what I found that works when building simple API tests with PHP’s HTTP client Guzzle.

To start, we have to install the necessary dependencies. One way to do that for PHP projects is through Composer, which means having a composer.json file in the root directory. I have mine set up with the following:

{
     "require-dev": {
          "behat/behat": "2.5.5",
          "guzzlehttp/guzzle": "~6.0",
          "phpunit/phpunit": "^5.7"
     }
}
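
With that file in place, we can pull the dependencies down into a vendor directory by running Composer from the project root:

composer install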

Using Guzzle in code, often in combination with Behat, we’ll have something like this:

use Behat\Behat\Tester\Exception\PendingException;
use Behat\Behat\Context\Context;
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;
use GuzzleHttp\Client;

class FeatureContext extends PHPUnit_Framework_TestCase implements Context, SnippetAcceptingContext
{
    // test code here
}

where test steps will become functions inside the FeatureContext class. Our API tests will live inside such functions.

Here’s an example of a GET request, where we can check if some content is displayed on a page:

/**
* @Given a sample GET API test
*/
public function aSampleGET()
{
     $client = new Client();
     $response = $client -> request('GET', 'page_URL');
     $contents = (string) $response -> getBody();
     $this -> assertContains('content_to_check', $contents);
}

For making requests to a secure site, we’ll have to update the sample request call to:

$response = $client -> request('GET', 'page_URL', ['verify' => 'cacert.pem']);

where cacert.pem is a certificate file in the project’s root directory. We can of course change the file location if we please.

Now here’s an example of a POST request, where we submit information to a page and verify the application’s behavior afterwards:

/**
* @Given a sample POST API test
*/
public function aSamplePOST()
{
     $client = new Client(['cookies' => true]);
     $response = $client -> request('POST', 'page_URL', ['form_params' => [
          'param_1' => 'value_1',
          'param_2' => 'value_2'
     ]]);
     $contents = (string) $response -> getBody();
     $this -> assertContains('content_to_check', $contents);
}

This is a basic POST request. You may notice that I added a cookies parameter when initializing the Guzzle client this time. That’s because I wanted the cookies from the initial request to be reused in succeeding requests. We can remove that if we want to.

There’s a trickier kind of POST request, one where we need to upload a file (often an image or a document) as a parameter to the request. We can do that like so:

/**
* @Given a sample POST Multipart API test
*/
public function aSampleMultipartPOST()
{
     $client = new Client(['cookies' => true]);
     $response = $client -> request('POST', 'page_URL', ['multipart' => [
          [
               'name' => 'param_1',
               'contents' => 'value_1'
          ],
          [
               'name' => 'param_2_file',
               'contents' => fopen('file_location', 'r')
          ]
     ]]);
     $contents = (string) $response -> getBody();
     $this -> assertContains('content_to_check', $contents);
}

and use whatever document or image we have on our machine. We just need to specify the correct location of the file we want to upload.

Building a Docker Image of Existing Test Code, with Dependencies Automatically Installed

When I first tried Docker a couple of years back, I did not find it much different from using a virtual machine. Perhaps that was because I was experimenting with it on Windows, or perhaps it was still a relatively new tool back then. I remember not having a pleasant experience installing and running it on my machine, and at the time it was just easier to run and debug Selenium tests on a VM.

I tried again recently.

Building a Docker image containing test code with dependencies automatically installed, with Docker Toolbox on Windows 7

And I was both surprised and delighted to be able to build a Docker image with existing test code and its dependencies automatically installed, right out of the box. This is very promising; I can now build development environments or tools that run on any machine I own, or for teams. To use them, we just need to install Docker and download the shared image. No more setup problems! Of course, there’s still a lot to test: we’ll probably want the image to be slim in size, to automatically update the test code from a remote repository, among other cool things. I’ll try those next.

Here’s what the Dockerfile looks like:

# start from the latest official Ruby image
FROM ruby:latest
# copy the test code into the image
RUN mkdir /usr/src/app
ADD . /usr/src/app/
WORKDIR /usr/src/app/
# install the project's gem dependencies
RUN gem install bundler
RUN bundle install

Short and easy to follow. Then we build the image by running the following command on the terminal (on the root project directory):

docker build -t [desired_image_name] .

To run and access the image as a container:

docker run -i -t [image_name]:[tag_name] /bin/bash

And from there we can run our Cucumber tests inside the container the same way we do on our local machine, for example:
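
For a suite managed with Bundler, that can be as simple as the following (assuming the usual features directory for the Cucumber tests):

bundle exec cucumber features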