Basic API Testing with PHP’s HTTP Client Guzzle

I like writing test code in Ruby. It’s a preference; I feel I write easier-to-read and easier-to-maintain code in it than in Java, the programming language I started with when learning to write automated checks. We use PHP to build our apps, though. So even if I can choose which programming language to work with, I sometimes think about how to replicate my existing test code in PHP, because maybe sometime in the future my teammates will have an interest in doing what I do for themselves. If I know how to rewrite my test code in a programming language they are familiar with, then I can help them with that.

In today’s post, I’m sharing some notes on what worked for me when building simple API tests with PHP’s HTTP client, Guzzle.

To start, we have to install the necessary dependencies. For PHP projects, one way is through Composer, with a composer.json file in the project’s root directory. I have mine set up with the following:

{
     "require-dev": {
          "behat/behat": "^3.0",
          "guzzlehttp/guzzle": "~6.0",
          "phpunit/phpunit": "^5.7"
     }
}

Using Guzzle in code, often in combination with Behat, we’ll have something like this:

use Behat\Behat\Tester\Exception\PendingException;
use Behat\Behat\Context\Context;
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;
use GuzzleHttp\Client;

class FeatureContext extends PHPUnit_Framework_TestCase implements Context, SnippetAcceptingContext
{
    // test code here
}

where test steps become methods inside the FeatureContext class. Our API tests will live inside such methods.

Here’s an example of a GET request, where we can check if some content is displayed on a page:

/**
 * @Given a sample GET API test
 */
public function aSampleGET()
{
     $client = new Client();
     $response = $client->request('GET', 'page_URL');
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}
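Since I usually write these checks in Ruby first, here’s a rough Ruby twin of that GET check, sketched with the standard library’s net/http. The URL, expected content, and the `body_contains?` helper name are all placeholders of my own, not part of any framework:

```ruby
require "net/http"
require "uri"

# Fetch a page and report whether its body contains some expected content,
# mirroring the Guzzle GET example above.
# The URL and expected content passed in are placeholders, not real endpoints.
def body_contains?(url, expected_content)
  response = Net::HTTP.get_response(URI.parse(url))
  response.body.include?(expected_content)
end

# Usage (placeholder values):
# body_contains?("https://example.com/page_URL", "content_to_check")
```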

To make requests to a secure (HTTPS) site, we’ll have to update the sample request method to:

request('GET', 'page_URL', ['verify' => 'cacert.pem']);

where cacert.pem is a CA certificate bundle in the project’s root directory. We can of course change the file location if we please.

Now here’s an example of a POST request, where we submit information to a page and verify the application’s behavior afterwards:

/**
 * @Given a sample POST API test
 */
public function aSamplePOST()
{
     $client = new Client(['cookies' => true]);
     $response = $client->request('POST', 'page_URL', ['form_params' => [
          'param_1' => 'value_1',
          'param_2' => 'value_2'
     ]]);
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}

This is a basic POST request. You may notice that I added a cookies parameter when initializing the Guzzle client this time. That’s because I wanted the cookies from the initial request to be reused in succeeding requests; setting 'cookies' => true tells Guzzle to keep them in a shared cookie jar for that client instance. We can remove that if we want to.

There’s a trickier kind of POST request, one where we need to upload a file (often an image or a document) as a parameter of the request. We can do that with a multipart request:

/**
 * @Given a sample POST Multipart API test
 */
public function aSampleMultipartPOST()
{
     $client = new Client(['cookies' => true]);
     $response = $client->request('POST', 'page_URL', ['multipart' => [
          [
               'name' => 'param_1',
               'contents' => 'value_1'
          ],
          [
               'name' => 'param_2_file',
               'contents' => fopen('file_location', 'r')
          ]
     ]]);
     $contents = (string) $response->getBody();
     $this->assertContains('content_to_check', $contents);
}

and use whatever document or image we have on our machine. We just need to specify the correct location of the file we want to upload.


Building a Docker Image of Existing Test Code, with Dependencies Automatically Installed

When I first tried Docker a couple of years back, I did not find it much different from using a virtual machine. Perhaps because I was experimenting with it on Windows, or perhaps it was still a relatively new app back then. I remember not having a pleasant experience installing and running it on my machine, and at the time it was just easier to run and debug Selenium tests on a VM.

I tried again recently.

Building a Docker image containing test code with dependencies automatically installed, with Docker Toolbox on Windows 7

And I was both surprised and delighted to be able to build a Docker image with existing test code and its dependencies automatically installed, right out of the box. This is very promising; I can now build development environments or tools that can run on any machine I own, or for teams. To use them, we just need to install Docker and download the shared image. No more setup problems! Of course, there’s still a lot to test: we’ll probably want the image to be slim in size, to automatically update test code from a remote repository, among other cool things. I’ll try those next.

Here’s what the Dockerfile looks like:

FROM ruby:latest
# Copy the test code into the image
RUN mkdir /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app/
# Install the gems the test code depends on
RUN gem install bundler
RUN bundle install

Short and easy to follow. Then we build the image by running the following command in the terminal (from the project’s root directory):

docker build -t [desired_image_name] .

To run and access the image as a container:

docker run -i -t [image_name]:[tag_name] /bin/bash

And from there we can run our Cucumber tests inside the container the same way we do on our local machine.

Questions for Recruiting Employers I

Having received a handful of calls these past few weeks, I’ve realized that I’m now thinking more about learning opportunities than just an offer’s compensation package. Salary still matters, of course, but there’s more to a day job than allocating money for expenses and savings. Work wouldn’t be fun if there were only menial tasks to do. What would I learn if I took an offer in exchange for my services? Will I be doing something I’ve never done or tried before? Is this work meaningful, both for our customers and for me as an individual?

And some more questions for recruiting employers off the top of my head:

  • How does the existing software development and testing process work in the organization?
  • Are testers embedded into software teams? Or do they work in silos, away from the developers?
  • What does a software team look like, and what is it composed of?
  • What does the day-to-day work look like for the software tester?
  • How many testers are there currently in the organization? And what’s the existing ratio of developers to testers?
  • What technologies does the organization currently use for testing? Are there automated checks? Is there an existing CI system?
  • Does the organization use containers? Visual testing? Machine learning?
  • Who are the organization’s actual clients? Which lives are we trying to improve on a daily basis?
  • How do the software teams get feedback from the people who use their applications?

Favorite Talks from Selenium Conference Austin 2017

It’s amazing that the recently concluded Selenium Conference in Austin, Texas continues to live up to expectations, building on the previous conferences and delivering quality talks on automation and testing. What’s more interesting is knowing how they’ve kept up with everything with help from the Software Freedom Conservancy and the testing community. There’s even a European Selenium Conference happening on October 9–10, which I’m very much looking forward to.

Meanwhile, here are some of my favorite talks from the Austin conference:

  • Automate Windows and Mac Apps with the WebDriver Protocol (by Dan Cuellar, Yosef Durr, and Stuart Russell, about easily automating Windows and Mac apps using Appium)
  • Automating Restaurant POS with Selenium – A Case Study (by Jeffrey Payne, about automating a point-of-sale system for testing, including credit card readers, printers, cash drawers, and caller IDs)
  • Selenium State of the Union (by Simon Stewart, an overview of the W3C spec process and thoughts on naming things in the Selenium project, how APIs should be correct and how people won’t write their own, how not everyone is a sophisticated developer, and how testing is under-resourced)
  • Leverage your Load Tests with JMeter and Selenium Grid (by Christina Thalayasingam, on adding load tests for your system using JMeter and Selenium Grid in combination)
  • Selenium and the Software Freedom Conservancy (by Karen Sandler, on the safety and efficacy of proprietary medical devices and how open-source software is often more likely to be safer and get better over time, and about what kind of organization the Software Freedom Conservancy is and how it helps open-source projects like Selenium live on for the long term)
  • Stop Inspecting, Start Glancing (by Dan Gilkerson, on automating web apps without looking at the DOM structure)
  • Transformative Culture (by Ashley Hunsberger, about moving from QA to Engineering Productivity and the culture changes necessary for getting better at testing)

Playing with Excel Files in Ruby Using Roo and Write_XLSX

Because of a small tool pitch I gave to our technical writer, I had a chance to play with Excel files through code for several hours last week. I had not done that before, but I felt it was an easy enough challenge, as I had worked with CSV files in the past and thought the experience would be similar. I knew I just had to find out what existing libraries I could take advantage of in order to build what I thought the program should be capable of doing: basically reading two Excel files, comparing them, and creating a new file with updated data based on both.

Some googling brought me to the Ruby gems Roo and write_xlsx.

Roo helped with reading Excel files, and we can do this by:

xlsx = Roo::Spreadsheet.open(FILE_PATH)
spreadsheet = xlsx.sheet(SHEET_NUMBER)

Roo::Spreadsheet.open already returns the proper spreadsheet object based on the file extension (a Roo::Excelx instance for .xlsx files), so there is no need to wrap it again.

Once we have access to a desired Excel file, we can get whatever data we need from it:

spreadsheet.each(column_data_1: COLUMN_NAME_1) do |info|
    # do something with the row information
end

After that, it’s just a matter of what information we want to retrieve and manipulate from the desired files. I mostly like to use arrays and hashes from here on.
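For instance, here’s a sketch in plain Ruby (with made-up column names and values, standing in for rows read with Roo) of how I collect rows into arrays and hashes so they’re easy to look up and compare later:

```ruby
# Hypothetical rows as they might come out of a spreadsheet:
# the first row is the header, the rest are data.
rows = [
  ["Title", "Author"],
  ["Cucumber & Cheese", "Jeff Morgan"],
  ["The RSpec Book", "David Chelimsky"]
]

header, *data = rows

# Build one hash per row, keyed by the header columns
records = data.map { |row| header.zip(row).to_h }

# A lookup hash keyed by one column makes comparing two files straightforward
by_title = records.each_with_object({}) { |r, acc| acc[r["Title"]] = r }

puts by_title["Cucumber & Cheese"]["Author"]  # prints "Jeff Morgan"
```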

As for creating new Excel files, we can use write_xlsx:

workbook = WriteXLSX.new(FILE_PATH)
worksheet = workbook.add_worksheet
worksheet.write(ROW_NUMBER, COLUMN_NUMBER, INFORMATION)
workbook.close

These two libraries can actually do more. Roo has interesting read/write access to Google spreadsheets through roo-google. write_xlsx, on the other hand, has formatting features we can leverage for better-looking Excel outputs. I didn’t have to use those functionalities for the current project, though, so I’ll leave them for another time.

Being Reminded of All the Phases I’ve So Far Had in Writing Automated Checks

I’m currently in the midst of a test code overhaul, a rewriting project of sorts. It started about a week ago, and so far I’ve made considerable progress on what I’ve wanted to achieve with the rewrite: cleaner and more maintainable code, mostly in the sense of test data management and test description language. The number of tests running every day in our Jenkins system has grown noticeably, and it’s been difficult to add certain tests because of how I structured the test data in the past, which I have not upgraded since then. The two possible avenues for running tests (on the UI and HTTP layers) also add a bit of complexity, and it’d be nice if I could integrate the two smoothly. It’s an interesting development because I did not plan on any rewriting anytime soon, but I guess at the back of my mind I knew it would happen eventually. And so I decided to take a step back from writing more tests and do some cleanup before it gets tougher to change things. I plan to finish everything in about a month or so.

At the moment, I’m reminded of the phases I’ve gone through in learning to code and writing automated checks in the past few years:

  • Early 2014. It all began with Selenium IDE, giving myself some time to study the basic Selenese commands for writing automated checks and (more importantly) understanding how to properly retrieve the page elements I wanted to manipulate.
  • Mid 2014. Test management in Selenium IDE became difficult as the number of tests grew, hence the decision to switch to Selenium WebDriver. The only programming background I had back then was C++, limited to functions and logical/conditional operators, so I chose Java to work with to lessen the learning curve.
  • Late 2014. I familiarized myself with Git, which hooked me on making daily commits and appreciating version control. Along the way I learned the concepts of classes and objects.
  • All of 2015 up to early 2016. I was in a trance, writing code daily and pushing myself to create all the automated checks I wanted to run for our apps before every release. Tests ran in the Eclipse IDE using TestNG, and I was happy with what I had, except that those end-to-end tests were really slow. Running everything took all night, which was okay for my employer but annoying for me personally.
  • Mid 2016. Rewriting existing tests in Ruby with Cucumber integration started off (when I found Jeff Morgan’s “Cucumber & Cheese” book online) as a side project, for fun and for testing my skill level in programming. And I did have buckets of fun! The experiment told me that there was still a lot I needed to practice if I wanted to write better code, and also that I could be more productive if I switched programming languages. There’s a bit less code to type in Ruby than in Java, and I liked that, plus all the interesting libraries I could use. I switched to Sublime Text and used both Jenkins and the command-line interface more extensively too.
  • Late 2016. As I was looking for ways to speed up the total execution time of the end-to-end tests, which by then took about four hours to complete, I ended up exploring testing apps in the HTTP layer instead of the UI. That took a lot of studying of how our apps actually behave under the hood: what data is passed around, how images are actually sent, how to view pages without a browser, how redirections work, among other things. After years of testing apps via the user interface, this was such a refreshing and valuable period, and I completely wondered why I never knew such a thing existed until then. Perhaps it isn’t taught extensively to testers because it all depends on how the app is structured to run through an API.

And these phases bring me to now, where there’s a healthy dose of API and UI layer tests all checking major app features. It’s all good, just several pieces needing a cleanup, a little parallelization, better test description language, and great documentation. It’s all good, because the lessons in both programming and testing keep piling up. The two practices differ in mindset, but I think they complement each other, and there’s no reason anyone can’t do both.

When It’s a Burden to Update a Test (or System) Code

If it feels difficult, exhausting, or inconvenient to update existing test (or system) code, it may be that you don’t completely understand what the code does, or it may be that the code really is written badly. Either way, both situations present a learning opportunity: the former, to fully grasp the business rules behind the particular code you want to change; the latter, to review the purpose of the code and rewrite it in a way that’s helpful to the team.

Complaining doesn’t help; it only prolongs the agony. Care instead: discuss with the team whether the change is actually necessary, and, if you think it is, leave the code better than you found it.