There Are Many Interesting Things I

There are a lot of interesting things in the software development and testing space, lots of stuff to study and learn. The past few years I’ve spent learning to build automated checks in Java and Ruby have been a fun ride, and I’ve learned a great deal about the concepts, benefits, and pitfalls of automation. Now I’m at the point where I’m thinking about which areas I’d like to study next. There’s JavaScript – which I’ve recently started teaching myself through freeCodeCamp’s ‘Beau teaches JavaScript’ playlist and James Shore’s ‘Let’s Code Test-Driven JavaScript’ subscription course – because with it I can quickly write small tools that run in the browser. After digesting the basics, I’ll likely spend more months understanding the TDD process and building something, hopefully becoming proficient in it before the year ends.

But there are some other intriguing experiments I’d like to try, some of which I share below.

Maybe you’d like to test them too. 🙂

Playing with Excel Files in Ruby Using Roo and Write_XLSX

Because of a small tool pitch I gave to our technical writer, I had a chance to play with Excel files through code for several hours last week. I had not done that before, but it felt like an easy enough challenge: I have worked with CSV files in the past and figured the experience would be similar. I knew I just had to find out which existing libraries I could take advantage of in order to build what I thought the program should be capable of doing, which is basically reading two Excel files, comparing them, and creating a new file with updated data based on both.

Some googling brought me to the ruby gems Roo and Write_XLSX.

Roo helped with reading Excel files, which we can do like so:

xlsx = Roo::Excelx.new(FILE_PATH)
# or, letting Roo detect the file type for us:
xlsx = Roo::Spreadsheet.open(FILE_PATH)
spreadsheet = xlsx.sheet(SHEET_NUMBER)

Once we have access to a desired Excel file, we can get whatever data we need from it:

spreadsheet.each(column_data_1: COLUMN_NAME_1) do |info|
    # do something with the information, e.g. info[:column_data_1]
end

After that, it’s just a matter of which information we want to retrieve and manipulate from the desired files. I mostly like to use arrays and hashes from here on.
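As a sketch of that hash-based compare-and-merge step: suppose each spreadsheet has already been read into a hash keyed by some ID column. The column names, sample rows, and merge rule below are hypothetical, just to illustrate the approach:

```ruby
# Hypothetical data: imagine these hashes were built by iterating
# two spreadsheets with Roo, keyed by an ID column.
old_rows = {
  'A1' => { name: 'Widget', stock: 10 },
  'A2' => { name: 'Gadget', stock: 5 }
}
new_rows = {
  'A2' => { name: 'Gadget', stock: 8 },
  'A3' => { name: 'Gizmo',  stock: 2 }
}

# Merge rule (an assumption for this sketch): rows from the newer file
# overwrite matching rows in the older one; unmatched rows are kept.
merged = old_rows.merge(new_rows)

merged.each { |id, row| puts "#{id}: #{row[:name]} (#{row[:stock]})" }
```

The resulting hash would then be written out row by row to a new file.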

As for creating new Excel files, we can use write_xlsx:

workbook = WriteXLSX.new(FILE_PATH)
worksheet = workbook.add_worksheet
worksheet.write(ROW, COLUMN, DATA)
workbook.close

These two libraries can actually do more. Roo has interesting read/write access to Google spreadsheets via roo-google, while Write_XLSX has formatting features we can leverage for better-looking Excel output. I didn’t have to use those functionalities for the current test project, though, so I’ll leave them for another time.

Automating the Windows Desktop Calculator’s UI using Winium

Ever since I learned how to use Selenium to automate browsers and web applications a few years back, I’ve wondered from time to time whether I could use the same process or technology to automate Windows desktop applications via their user interfaces. There’s no particular use case at work, considering our apps all live in the web, but I challenged myself to find an answer purely for curiosity’s sake. It didn’t turn out to be a thought-provoking experiment, but at least it was somewhat amusing.

Here’s what I got from the quick study:

[GIF: the Winium tests driving the Windows desktop Calculator]

And a few notes:

  • We can’t use Selenium to automate Windows desktop applications.
  • A quick search about automating Windows apps tells us that there are actually a number of tools we can use for this purpose, some paid, others free. Other than Winium, I found out that we can also use Sikuli (or its successor SikuliX), WinTask, AutoIt, TestStack.White, Automa, Macro Scheduler, Cobra, or Pywinauto. There are probably more tools out there. I just chose Winium because writing tests and automating the app with it is similar to how Selenium works, which means very little learning curve for me.
  • Source code for this short experiment can be found here: Win-Calculator. It uses Maven for handling project dependencies, while the test code is written in Java. The tests run locally; I’m not sure whether there’s a way to manipulate a Windows app on another machine using Winium. I have not researched that far.
  • Winium will actually take control of the machine’s mouse, moving it to the element locations set in the tests. This means that moving the mouse while the tests are running will likely fail them. This is annoying, and I’m not sure whether the other Windows app automation tools behave the same way or not.

About Selenium Conference 2016

I had time over the holidays to binge-watch last year’s Selenium Conference talks, which were as awesome as, if not more so than, the talks during the 2015 conference. Automation in testing has really come a long way, alongside the advancements in technology and software development, and this brings forth new challenges for all of us who test software. It’s not just about Selenium anymore. Mobile automation still proves to be challenging, and soon we’ll have to build repeatable test scenarios for the internet of things – homes, vehicles, stores, among others. Software testing can only get more interesting by the year.

Here are my picks for the best talks from the conference, if you’re curious.

The Lessons I Learned This Year About Software Testing

Last year I told myself I would take a step back from automation because I had already built tests with a working knowledge of Java and TestNG. Apparently, with the way things went this year, I was not satisfied after all with what I had. I went on to study Ruby with Watir and Cucumber for about half of 2016, and after that I learned how to run checks against web applications without using browsers. For a second year in a row I feel I overdid the studying, and maybe I’m getting too comfortable writing test code. In any case, programming has become such a useful tool in my workflow – performing checks automatically that would otherwise have eaten up a chunk of my time, and enabling me to test other things.

Some other realizations during the year:

  • Re-writing test code using Ruby with Watir and Cucumber and other useful ruby gems pointed out how messy the checks I wrote last year in Java were. It helps that the Ruby language has less boilerplate syntax, but my earlier mistakes were more about refactoring and design than about the programming language itself. I still need more deliberate practice at that.
  • I’ve found out that I actually do security testing in my normal day-to-day exploratory work, but I’m not especially great at it because nobody has taught me how to perform good web security testing before. It seems that getting comfortable reading and manipulating API calls is a step towards becoming better at it.
  • Developing software with agility does not necessarily mean implementing Scrum or Kanban. If there’s good communication and teamwork, teams can actually forego many of the rituals and focus on the actual work.
  • Writing automated checks with healthy test description language is an important skill.
  • People only learn to solve problems when they feel that those problems are worth solving. And problems that are worth solving for me aren’t necessarily worth solving for other people, so I should never stop considering other perspectives.
  • We can leverage REST calls in web automation to make tests run a lot faster and more stably.
  • Exploratory testing is a prerequisite to automation.
  • Building a great team is difficult. There are a lot of factors to consider – team composition, empathy, attitude, member skill set, strengths and weaknesses, to name a few. It takes some time too.
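On the point about leveraging REST calls in web automation: the idea is to set up or fetch test data through the API so the browser only has to verify it, rather than create it through slow UI steps. A minimal sketch in Ruby, where the endpoint and payload are hypothetical:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Hypothetical endpoint of the app under test; in a real suite we'd send
# this request before the UI steps so the browser doesn't have to create
# the data itself.
uri = URI('http://localhost:3000/api/users')

request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.body = { name: 'Test User', role: 'admin' }.to_json

# Actually sending it requires a running server, so it's skipped here:
# response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }

puts request.body
```

The UI check then only needs to assert that ‘Test User’ shows up where expected, which is both faster and less flaky than driving the whole creation flow through the browser.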

Lessons from James Bach and Michael Bolton’s “A Context-Driven Approach To Automation In Testing”

Automation has gained traction in recent years in the testing community because the idea of automating tests has been sold as a solution to many of the common problems of traditional testing. And the advertisements worked: many software businesses now rely heavily on automated checks to monitor and check for known system bugs, and even to run performance and security tests. It’s not surprising anymore that automation has been accepted by the general public as one of the cornerstones of effective testing today, especially since releases to production have become more frequent and fast-paced for most businesses. I use such tools at work myself, because they complement the exploratory testing that I do.

James Bach and Michael Bolton, two of the finest software testers known to many in the industry, however, remind us in their whitepaper titled ‘A Context-Driven Approach to Automation in Testing’ to be vigilant about the use of these tools, as well as the use of the terms ‘automation’ and (especially) ‘test automation’ when speaking about the testing work that we do, because they can carry dangerous connotations about what testing really is. They remind us that testing is so much more than automation, much more than using tools, and that to be remarkable software testers we need to keep thinking and improving our craft, including the ways we perform and explain testing to others.

Some takeaway quotes from the whitepaper:

  • If you need good testing, then good tool support will be part of the picture, and that means you must learn how and why we can go wrong with tools.
  • The word ‘automation’ is misleading. We cannot automate users. We automate some actions they perform, but users do so much more than that. Output checking is interesting and can be automated, but testers and tools do so much more than that. Although certain user and tester actions can be simulated, users and testers themselves cannot be replicated in software. Failure to understand this simple truth will trivialize testing, and will allow many bugs to escape our notice.
  • To test is to seek the true status of a product, which in complex products is hidden from casual view. Testers do this to discover trouble. A tester, working with limited resources, must sniff out the trouble before it’s too late. This requires careful attention to subtle clues in the behavior of a product within a rapid and ongoing learning process. Testers engage in sensemaking, critical thinking, and experimentation, none of which can be done by mechanical means.
  • Much of what informs a human tester’s behavior is tacit knowledge.
  • Everyone knows programming cannot be automated. Although many early programming languages were called ‘autocodes’ and early computers were called ‘autocoders,’ that way of speaking peaked around 1965. The term ‘compiler’ became far more popular. In other words, when software started coding, they changed the name of that activity to compiling, assembling, or interpreting. That way the programmer is someone who always sits on top of all the technology and no manager is saying ‘when can we automate all this programming?’
  • The common terms ‘manual testers’ and ‘automated testers’ to distinguish testers are misleading, because all competent testers use tools.
  • Some testers also make tools – writing code and creating utilities and instruments that aid in testing. We suggest calling such technical testers ‘toolsmiths.’
  • We emphasize experimentation because good tests are literally experiments in the scientific sense of the word. What scientists mean by experiment is precisely what we mean by test. Testing is necessarily a process of incremental, speculative, self-directed search.
  • Although routine output-checking is part of our work, we continually re-focus on non-routine, novel observations. Our attitude must be one of seeking to find trouble, not verifying the absence of trouble – otherwise we will test in shallow ways and blind ourselves to the true nature of the product.
  • Read the situation around you, discover the factors that matter, generate options, weigh your options and build a defensible case for choosing a particular option over all others. Then put that option to practice and take responsibility for what happens next. Learn and get better.
  • Testing is necessarily a human process. Only humans can learn. Only humans can determine value. Value is a social judgment, and different people value things differently.
  • Call them test tools, not ‘test automation.’

Takeaways from Alan Richardson’s “Dear Evil Tester”

Alan Richardson is the Evil Tester. He talks and writes about evil testing – about how to become the best software testers we aspire to be. He gives talks at conferences, offers consultancy services and training over at Compendium Development, and writes about Selenium on Selenium Simplified. He’s been in the testing industry for many years and certainly knows its ins and outs, and he continuously tries to keep himself and other software testers in the field updated and valuable. His book, ‘Dear Evil Tester’, is his way of providing us with an approach to testing founded on responsibility and laughter.

Some lessons from the book:

  • The point is we don’t need permission. We should do whatever it takes.
  • If you short change yourself then that isn’t self-preservation. It is allowing your skills and integrity to slowly rot, wither and die. It is condemning yourself to victimhood as a response to other people’s actions. Don’t do that to yourself. Always work to the highest level that you can be proud of. That is an act of self-preservation.
  • I try very hard to get in the habit of evaluating myself. Not in terms of the actions of others, or in terms of their expectations of me, or in terms of a ‘generic’ tester. I try to evaluate myself in terms of my expectations of me. And I try to continually raise my expectations.
  • Stop doing the same things you did on the first day.
  • Test with intent, with minimal up-front planning, taking responsibility for your path and the communication of your journey.
  • Anyone can test. And you need to be able to test better than anyone off the street. And be capable of demonstrating that you can test better than anyone off the street.
  • I don’t believe that systems ‘regress’. I believe systems ‘are’. I believe systems can exhibit behavior that we don’t want. I believe systems can exhibit behavior that we don’t want and which we have seen in the system before. But I don’t call that regression.
  • The best test tool to help me test is my brain. I then use that to help me find other tools that help me observe, interrogate, and manipulate the system at the different technological levels I’ve found the system to be composed of.
  • I don’t think I do test the quality of a product. I think I test ‘qualities’ of a product. I test observations of those ‘qualities’ against models that I have made of the ‘qualities’ independently of the product. And then I put the product into different states, allowing me to observe the product in different ways. And when I see a difference, I can then offer commentary upon the differences between my observation and its comparison to my model.
  • The hard part is not finding tools, and making lists of tools, and learning how to use tools. The hard part is figuring out what you need to do, to add value to your test approach.
  • I look at my model of the system and I see parts I’m not observing, or manipulating, or interrogating. I try and work out what risks I’m not testing for, because of that. Then I work out which of those risks I want to target. And I go looking for a tool to help.
  • Good and Evil are relative terms. If you accept that, then words hold less power over you. Choose your actions well. Stop labelling them ‘Good’ and ‘Evil’. Take responsibility. Build your own models. Do what you need to. Do what you have to, to be the change you want to see.
  • Not all testers are evil, just the good ones.