About failing Automated Watir checks on Firefox v.48 and successfully running them on Google Chrome v.52

The recent updates to the Mozilla Firefox browser, from version 46 onwards, broke my automated end-to-end checks. The browser loads when started but does not go to a test page or do anything until it eventually fails, because the latest versions of Firefox supposedly no longer use FirefoxDriver for automation and instead make use of a new driver implementation called Marionette. To get my checks running again in Firefox, I had to resort (as many others in the community did) to using the Extended Support Release version of Firefox v. 45, a temporary measure until I finally get my Cucumber-Watir checks running properly on the latest Firefox version.

At the moment I am stumped by an error when running automated checks on Firefox v. 48.0.1 using Marionette. It seems to be a problem with permissions on my local Windows machine rather than with Watir or Selenium, although port 4444 is also the port my Selenium grid listens on, and since the Marionette driver service binds to port 4444 by default as well, a port clash may be the culprit:

Permission denied - bind(2) for "::1" port 4444 (Errno::EACCES)
C:/Ruby/lib/ruby/gems/2.2.0/gems/selenium-webdriver-2.53.4/lib/selenium/webdriver/firefox/service.rb:103:in `stop_process': undefined method `poll_for_exit' for nil:NilClass (NoMethodError)
from C:/Ruby/lib/ruby/gems/2.2.0/gems/selenium-webdriver-2.53.4/lib/selenium/webdriver/firefox/service.rb:83:in `stop'
from C:/Ruby/lib/ruby/gems/2.2.0/gems/selenium-webdriver-2.53.4/lib/selenium/webdriver/firefox/service.rb:64:in `block in start'
from C:/Ruby/lib/ruby/gems/2.2.0/gems/selenium-webdriver-2.53.4/lib/selenium/webdriver/common/platform.rb:161:in `block in exit_hook'
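
For context, this is roughly how a Marionette-backed Firefox session gets started with selenium-webdriver 2.53, a sketch assuming the new driver executable is available on the PATH (the marionette capability is what opts in to the new implementation):

caps = Selenium::WebDriver::Remote::Capabilities.firefox(:marionette => true)
browser = Watir::Browser.new :firefox, :desired_capabilities => caps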

So, while I’m trying to sort out that problem (anybody experienced this?), I decided to move my checks to run on the latest version of Google Chrome by default instead of running them on an old version of Firefox. To do that, I needed to update my browser capabilities:

if browser_type == :chrome
    arguments = "--ignore-certificate-errors" # Add more desired arguments
    capabilities = Selenium::WebDriver::Remote::Capabilities.chrome "chromeOptions" => {"args" => [ arguments ]}
elsif browser_type == :firefox
    capabilities = Selenium::WebDriver::Remote::Capabilities.firefox
end
browser = Watir::Browser.new :remote, :url => url, :desired_capabilities => capabilities

And all that is left is to see how the checks behave on Google Chrome.

Some findings:

  • Button or link elements (especially span or heading elements that are functionally used as buttons) that are not immediately visible in the Chrome viewport fail to be clicked properly. To fix this, focus on the element first (or alternatively scroll the page via JavaScript to a position where the element becomes visible in the viewport) before running the click step.
  • Some wait methods that didn't have to be written when the checks ran in Firefox need to be explicitly stated when running checks on Chrome. A sketch of both fixes follows this list.
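
Here is a minimal sketch of those fixes, assuming an existing Watir browser instance; the span and its id are hypothetical stand-ins for whatever element misbehaves:

element = browser.span(:id => "book-now") # a span functionally used as a button
element.focus # focus first so the element scrolls into the viewport
# or scroll to it explicitly via javascript:
browser.execute_script("arguments[0].scrollIntoView();", element)
element.when_present.click # explicit wait before the click, which Chrome tends to need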

Takeaways from Jeff Nyman’s “Modern Testing” blog post series

Jeff Nyman recently wrote a series of blog posts about modern testing, describing what he thinks it could be while also introducing his idea of a lucid approach to testing. The posts themselves are not easy to read, but they contain well-formed thoughts about artifacts, acceptance criteria, testing, and software engineering, among other things, which deserve a look through. Thinking about his ideas on manual tests, acceptance criteria, user interfaces, software development constraints, and sources of truth was refreshing and insightful, and I wonder how he would implement these ideas in a specific tool.

Quoting some of the words that I liked from the series of posts:

  • Tickets – in systems like JIRA or whatever you use – are entirely transient. They are there simply to allow us to track and measure various work (tasks, bugs, etc). They are not there to store acceptance criteria or to frame epics that have numerous tickets associated to them. Acceptance criteria should live as close as possible to the code, be versioned with the code, capable of being elaborated upon by the code pushing out relevant information. The elaboration of that acceptance criteria should be the code.
  • Each development project is a journey of discovery. And the real constraint isn’t really time or budget or some metric of man-hours for programmers or testers. The real constraint is the lack of knowledge about what needs to be built and how it should be built.
  • The fact is that a manual test and an automated test are really just checks. What differs is what is doing the execution: human or machine. The testing part comes in when we have the conversations and collaborations to decide what in fact should ultimately be checked. Test tooling comes in with how that information is encoded as code and what kind of framework will support that encoding. Tests should be reflective of the work that was done by development. This means that the testing framework – whatever that happens to be – must be good enough to quickly encode the business understanding and push the business intent.
  • The closer the tests sit to the code, the closer the tests are aligned to the business criteria, the less artifacts you have to use to manage everything. When you have less artifacts, being lucid becomes quite a bit easier because your sight lines of what quality means are not obfuscated by an intermediating set of artifacts stored in a variety of tools, only some of which are directly executable.
  • Take testing out of the acceptance criteria writing business. Take out the focus on the right way to generate acceptance criteria. Don’t (necessarily) focus on Given/When/Then for acceptance criteria.
  • You want to enable fast and useful feedback which in turn allow you to fail-fast. When you can fail fast, it means you have safe-to-fail changes. When your changes are safe-to-fail, that means you can worry less about failure and more about experimenting with ideas, creating working solutions quickly, and even abandoning work that is going down an unfruitful path.
  • Going fast is not possible unless the quality is under control at all times, so we need a testing strategy that says testing is a design activity and automation is a core part of the strategy. Going fast in the long term is not possible unless the quality of the design and of the understanding of the business domain is under control at all times, so we need an executable source of truth strategy so that it’s possible to reason about the system at all times.
  • Any application is the end product at any given moment in time. That end product is a reflection of a set of features working together. The value of the application is delivered through its behavior, as perceived through its user interface. Behavior and user interface are critical there. That’s what your users are going to base their view of quality in. And, to be sure, user interface can mean browser, mobile app, desktop app, service (i.e., API), and so on. It’s whatever someone can use to consume some aspect of behavior of the application. So everything comes down to the behavior and the interfaces and the consumers of those interfaces.
  • Any act of testing acts as a means to aid communications, to drive a clean, simple design, to understand the domain, to focus on and deliver the business value, and to give everyone involved clarity and confidence that the value is being delivered.

A Basic Guide to Creating XML API Automated Checks

Checking out the Agoda YCS5 API

API testing is interesting. API checks are very different from functional user interface checks because they do not require any UI, although programmers can provide testers a user interface for running them if needed. They’re pretty straightforward to perform, relatively easy to code, and they provide fast feedback, unlike UI tests, which often break easily, are complicated to code, and are generally slow. I’ve been focusing on writing automated API checks for some apps in recent weeks and found that the process is simple.

And here’s a basic guide to what that process looks like, in three steps:

  • Build the request content
    A basic XML request document source looks like this:

    <?xml version="1.0"?>
    <request>
       <element attribute="value">
       possibly a chain of more elements will show here
       </element>
    </request>

    In creating checks, this request source is just treated as a String, and then converted to XML later if necessary. I usually split the source into three parts, namely header, body, and footer, and then just combine the parts. Usually it’s only the body part of the request that changes between scenario checks. What the actual request looks like, of course, is found in the documentation of the API you want to test.

    For our example:

    header = '<?xml version="1.0"?><request>'
    body = '<element attribute="value"></element>'
    footer = '</request>'

    The body can be further split into more parts if desired, since some requests contain multiple elements and sub-elements.

    In building a request, I just create a method that I can call anytime:

    def build_request(request_type)
       return get_header(request_type) + get_body(request_type) + get_footer(request_type)
    end
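
    The get_header, get_body, and get_footer helpers are just my own convention; for our example request they could be sketched like this (keying them by a hypothetical request type):

    def get_header(request_type)
       return '<?xml version="1.0"?><request>'
    end

    def get_body(request_type)
       return '<element attribute="value"></element>'
    end

    def get_footer(request_type)
       return '</request>'
    end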

  • Send the request to the API endpoint, and retrieve the response
    In API testing, what we check is the response of the API when we send it a particular type of request. To generate a valid response from the API, we need to send our request to a valid endpoint. Sometimes that endpoint needs access credentials; sometimes it is the requests themselves that contain those credentials. These details are often requested from the people who manage the API, or are sometimes found in your account information if the API requires you to create an account.

    As for how automation can send a request and generate a response, I often use the rest-client gem this way inside a method:

    require 'rest-client'

    def get_response(endpoint, request)
       begin
         # POST the XML request to the given endpoint
         response = RestClient.post endpoint, request, {:content_type => :xml}
       rescue => e
         # rest-client raises on error responses; keep them so checks can inspect failures
         response = e.response
       end
       return response
    end
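
    With those pieces in place, a hypothetical usage looks like this (the endpoint URL and the :example request type are placeholders, not a real API):

    endpoint = "https://api.example.com/xml"
    request = build_request(:example)
    response = get_response(endpoint, request)
    puts response.code # HTTP status code of the response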

  • Validate the API response
    Checks are of course not complete if there is no actual comparison between actual and expected results. This comparison defines whether the check passes or fails, and by default any check will pass if there are no conditions about response failure (except for failure of the test to reach the API endpoint).

    For the example request, this might be one of the ways I want to validate the API response using the nokogiri and rspec gems (assuming that a successful response contains a result element with a count attribute of value 1 inside it):

    require 'nokogiri'

    def validate_response(response)
       xml = Nokogiri::XML(response) # parse the raw response string into an XML document
       expect(xml.xpath('//result').attr('count').to_s).to eq('1'), "Result count should be 1"
    end

    In the end, how responses are properly validated varies. There are scenarios where tests need to check for error elements in the response, and there are tests where data in the response also needs to be verified.
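
    For the error-element scenario, a minimal sketch might look like this (the error element and its code attribute are hypothetical placeholders, not from any specific API):

    def validate_error_response(response, expected_code)
       xml = Nokogiri::XML(response)
       expect(xml.xpath('//error').attr('code').to_s).to eq(expected_code), "Error code should be #{expected_code}"
    end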

Questions For The Software Tester Applicant

  • What is software testing to you?
  • What are your favorite testing tools?
  • How do you test?
  • Do the words agile and scrum master have meaning to you? What do they signify?
  • Can you write code?
  • Do you keep yourself up-to-date with the software testing industry? How?
  • When was the last time you learned a new skill?
  • Who are your customers? What sort of meaning do you want to create for them?
  • What software testing best practices do you know?
  • Why are you moving to us? What makes you say that our company will enable your professional growth?
  • What problems do you think you can solve for us? Do you have an idea of how to pursue those solutions?
  • Do you have questions for us, questions which will let you know whether we are fit for you or not?
  • How can we help empower you?

Notes from Rob Lambert’s “Remaining Relevant and Employable in a Changing World”

Here’s one thing about software testing: it is a specialization that’s tough to pursue as a long-term career. It is difficult to know if you’ve become someone remarkable at it. There are no degrees in software testing in school, people seldom discuss what it means (even among software development teams), and it’s hard to find mentors. It is common for software testers to start their profession in testing only by chance, like how I stumbled into the work myself, taking my first job because I wanted to work with computers (despite having no experience in either programming or building computer networks) and because I needed some way of earning money. It was fairly easy to get into, but after being in the industry for quite some time I know how perplexing it is to build from the basics, how hard it is to find out where to go next.

Fortunately there are people like Rob Lambert, previously known as the Social Tester, who are concerned about helping other software testers in the industry. He wrote the Remaining Relevant and Employable in a Changing World ebook for software testers who really enjoy what they do and want to stand out from their peers. It’s a great read, and reading and thinking deeply about other people’s ideas plays a huge role in learning the many things connected to software testing, as it does in other fields of work, including how to get better at what we do.

Some lessons from the book:

  • Ensure that each and every day you are shipping something that pushes you towards the end goal.
  • Learning and building your skills should be a core fundamental aspect to your life as a software tester. Learn about technology, industries, people, the product under test, yourself or your co-workers.
  • Testing isn’t about conforming to standards. It’s about helping to deliver great software. It’s about more than test techniques and approaches. It’s about working with people, communicating clearly, understanding market conditions, embracing technology, understanding end user needs, influencing design, and a whole lot more besides.
  • As a minimum, do no less than one hour of learning per day, ideally two.
  • It isn’t the company you work for who are in charge of your career. It’s you.

Jenkins User Interface Tweaks

Last week, I found myself tinkering with Jenkins again for several reasons. I wanted to schedule the Cucumber-Watir functional checks I’ve written in the past two months so I don’t have to run them manually in the terminal every time. I also wanted a record of the results of the latest checks so I can look back at them when I need to. More importantly, I wanted a place where anyone from the team can view the said checks and see whether our Staging environment is stable according to the smoke tests.

Of course, I also wanted this place to be somewhat nice-looking, which the default Jenkins page somewhat lacks.

And this is what I ended up with. :)

And here are the tweaks I did to make the Jenkins user interface a little better:

[Screenshot: Upload a Jenkins Plugin]

[Screenshot: Set the Jenkins CSS theme to Material]

[Screenshot: Update jenkins.xml to allow CSS content]
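
One way to allow CSS content through jenkins.xml is to relax Jenkins’ Content-Security-Policy system property; a hedged sketch of the arguments line in the Windows service wrapper (your existing flags may differ, and emptying the CSP has security trade-offs):

<arguments>-Xrs -Xmx256m -Dhudson.model.DirectoryBrowserSupport.CSP="" -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>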

  • Allow HTML content in view and job descriptions. To do this:
    • Go to Manage Jenkins -> Configure Global Security
    • In the Markup Formatter field, select Raw HTML or Safe HTML
    • Click Save
[Screenshot: Set View and Job descriptions to display HTML content]

  • Use the AnsiColor Plugin to properly display colors in the job’s console output. To do this:
    • Download the AnsiColor Plugin
    • Install the downloaded plugin to Jenkins (Manage Jenkins -> Manage Plugins -> Advanced -> Upload plugin)
    • Configure the desired job’s Build Environment to use the Color ANSI Console Output
[Screenshot: Set a desired Jenkins job to use colored outputs in the console]

[Screenshot: Sample Jenkins colored console output]

Mid-Year Questions To Ponder

Who is in charge of the testing that the team needs to do?

Who do I want to connect with today?

How do I need to think when I’m testing?

Did I help someone yesterday?

Why do I say that a product is of good quality?

What do I really do?

How much documentation is necessary?

Where did I fail this week? Did I fail enough, or was I playing safe?

What stories do we need to tell? To whom do we tell our stories?

What am I afraid of? Why am I afraid?

What am I supposed to do next?