Automating the Windows Desktop Calculator’s UI using Winium

Ever since I learned how to use Selenium to automate browsers and web applications a few years back, I’ve wondered from time to time whether I could use the same approach to automate Windows desktop applications through their user interfaces. There’s no particular use case at work, considering our apps all live on the web, but I challenged myself to find an answer purely out of curiosity. It didn’t turn out to be a thought-provoking experiment, but it was at least somewhat amusing.

Here’s what I got from the quick study:

[Animated GIF: Winium driving the Windows Calculator]

And a few notes:

  • We can’t use Selenium to automate Windows desktop applications.
  • A quick search about automating Windows apps tells us that there are actually a number of tools we can use for this purpose, some paid, others free. Other than Winium, I found that we can also use Sikuli (or its successor SikuliX), WinTask, AutoIt, TestStack.White, Automa, Macro Scheduler, Cobra, or Pywinauto. There are probably more tools out there. I chose Winium because writing tests and automating the app with it is similar to how Selenium works, which meant a very small learning curve for me.
  • Source code for this short experiment can be found here: Win-Calculator. It uses Maven to handle installation of project dependencies, while the test code is written in Java. The tests run locally; I’m not sure whether there’s a way to manipulate a Windows app on another machine using Winium. I have not researched that far.
  • Winium actually takes control of the machine’s mouse, moving it to the element locations set in the tests. This means that moving the mouse yourself while the tests are running will likely make them fail. This is annoying, and I’m not sure whether the other Windows app automation tools behave the same way.
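The Java tests in the Win-Calculator repo follow a Selenium-style flow, and since Winium speaks the WebDriver wire protocol, the same idea can be sketched with Ruby’s selenium-webdriver gem as the client. Everything below is a hedged sketch: the server URL, the capability format, and the button names are assumptions to verify against your own setup (a UI inspection tool such as Inspect.exe shows the real element names).

```ruby
# Map each digit to the Name its Calculator button exposes in the
# UI Automation tree (assumed names -- verify on your machine).
DIGIT_NAMES = {
  "0" => "Zero", "1" => "One", "2" => "Two", "3" => "Three", "4" => "Four",
  "5" => "Five", "6" => "Six", "7" => "Seven", "8" => "Eight", "9" => "Nine"
}.freeze

# Pure helper: turn "23" into the ordered button names to click.
def button_names_for(number)
  number.chars.map { |digit| DIGIT_NAMES.fetch(digit) }
end

# Drives the actual clicks. Requires the selenium-webdriver gem and a
# Winium.Desktop server already running on localhost:9999, so it is only
# called explicitly, never at load time.
def add_on_calculator(a, b)
  require "selenium-webdriver" # gem install selenium-webdriver

  driver = Selenium::WebDriver.for(
    :remote,
    url: "http://localhost:9999",
    capabilities: { app: "C:\\Windows\\System32\\calc.exe" }
  )
  (button_names_for(a) + ["Add"] + button_names_for(b) + ["Equals"]).each do |name|
    driver.find_element(name: name).click
  end
ensure
  driver&.quit
end
```

Calling `add_on_calculator("23", "9")` would then click Two, Three, Add, Nine, Equals in order, which is essentially what the Java version in the repo does through Winium’s own client classes.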

About Selenium Conference 2016

I had time over the holidays to binge-watch last year’s Selenium Conference talks, which were as awesome as, if not more so than, the talks from the 2015 conference. Automation in testing has really come a long way alongside the advances in technology and software development, and this brings new challenges for all of us who test software. It’s not just about Selenium anymore. Mobile automation still proves challenging, and soon we’ll have to build repeatable test scenarios for the Internet of Things – homes, vehicles, stores, among others. Software testing can only get more interesting by the year.

Here are my picks for the best talks from the conference, if you’re curious:

The Lessons I Learned This Year About Software Testing

Last year I told myself I would take a step back from automation because I had already built tests with a working knowledge of Java and TestNG. Apparently, with the way things went this year, I was not satisfied after all with what I had. I went on to study Ruby with Watir and Cucumber for about half of 2016, and after that I learned how to test web applications without driving a browser. For the second year in a row I feel I overdid the studying, and maybe I’m getting too comfortable with writing test code. In any case, programming has become such a useful tool in my workflow – automatically performing checks that would otherwise have eaten up a chunk of my time, and freeing me to test other things.

Some other realizations during the year:

  • Re-writing test code in Ruby with Watir, Cucumber, and other useful Ruby gems showed me how messy the checks I wrote last year in Java were. It helps that Ruby has less boilerplate syntax, but my earlier mistakes were more about refactoring and design than about the programming language itself. I still need more deliberate practice at that.
  • I’ve found out that I actually do security testing in my normal day-to-day exploratory work, but I’m not especially great at it because nobody has taught me how to perform good web security testing before. It seems that getting comfortable reading and manipulating API calls is a step towards becoming better at it.
  • Developing software with agility does not necessarily mean implementing Scrum or Kanban. If there’s good communication and teamwork, teams can actually forgo many of the rituals and focus on the actual work.
  • Writing automated checks with healthy test description language is an important skill.
  • People only learn to solve problems when they feel those problems are worth solving. And a problem that’s worth solving for me isn’t necessarily worth solving for other people, so I should never stop seeking out other perspectives.
  • We can leverage REST calls in our web automation to make suites run faster and more reliably.
  • Exploratory testing is a prerequisite to automation.
  • Building a great team is difficult. There are a lot of factors to consider – team composition, empathy, attitude, member skill set, strengths and weaknesses, to name a few. It takes some time too.
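On the REST-calls point above: a common example is logging in through an API instead of clicking through the login form in every scenario. A minimal sketch in Ruby, where the `/api/login` endpoint and its JSON payload are hypothetical stand-ins for whatever your application actually exposes:

```ruby
require "json"
require "net/http"
require "uri"

# Log in over HTTP instead of through the UI. The /api/login endpoint and
# its payload are hypothetical -- substitute your app's real login API.
def session_cookie_for(base_url, user, password)
  uri = URI.join(base_url, "/api/login")
  response = Net::HTTP.post(
    uri,
    { user: user, password: password }.to_json,
    "Content-Type" => "application/json"
  )
  # Hand this cookie to the browser before the scenario starts.
  response["Set-Cookie"]
end
```

The returned cookie can then be injected into a Watir browser (for instance via `browser.cookies.add`) so each scenario begins already logged in, skipping the slowest and often flakiest part of the UI flow.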

Lessons from James Bach and Michael Bolton’s “A Context-Driven Approach To Automation In Testing”

Automation has gained traction in the testing community in recent years because the idea of automating tests has been sold as a solution to many of the common problems of traditional testing. And the advertising worked: many software businesses now rely heavily on automated checks to monitor known system bugs, and even to run performance and security tests. It’s no longer surprising that automation is widely accepted as one of the cornerstones of effective testing today, especially since releases to production have become more frequent and fast-paced for most businesses. I use such tools at work myself, because they complement the exploratory testing that I do.

James Bach and Michael Bolton, two of the finest software testers known to many in the industry, remind us, however, in their whitepaper titled ‘A Context-Driven Approach to Automation in Testing’, to be vigilant about the use of these tools, as well as the use of the terms ‘automation’ and (especially) ‘test automation’ when speaking about the testing work that we do, because those terms carry dangerous connotations about what testing really is. They remind us that testing is so much more than automation, much more than using tools, and that to be a remarkable software tester we need to keep thinking and improving our craft, including the ways we perform and explain testing to others.

Some takeaway quotes from the whitepaper:

  • If you need good testing, then good tool support will be part of the picture, and that means you must learn how and why we can go wrong with tools.
  • The word ‘automation’ is misleading. We cannot automate users. We automate some actions they perform, but users do so much more than that. Output checking is interesting and can be automated, but testers and tools do so much more than that. Although certain user and tester actions can be simulated, users and testers themselves cannot be replicated in software. Failure to understand this simple truth will trivialize testing, and will allow many bugs to escape our notice.
  • To test is to seek the true status of a product, which in complex products is hidden from casual view. Testers do this to discover trouble. A tester, working with limited resources, must sniff out the trouble before it’s too late. This requires careful attention to subtle clues in the behavior of a product within a rapid and ongoing learning process. Testers engage in sensemaking, critical thinking, and experimentation, none of which can be done by mechanical means.
  • Much of what informs a human tester’s behavior is tacit knowledge.
  • Everyone knows programming cannot be automated. Although many early programming languages were called ‘autocodes’ and early computers were called ‘autocoders,’ that way of speaking peaked around 1965. The term ‘compiler’ became far more popular. In other words, when software started coding, they changed the name of that activity to compiling, assembling, or interpreting. That way the programmer is someone who always sits on top of all the technology and no manager is saying ‘when can we automate all this programming?’
  • The common terms ‘manual testers’ and ‘automated testers’ to distinguish testers are misleading, because all competent testers use tools.
  • Some testers also make tools – writing code and creating utilities and instruments that aid in testing. We suggest calling such technical testers ‘toolsmiths.’
  • We emphasize experimentation because good tests are literally experiments in the scientific sense of the word. What scientists mean by experiment is precisely what we mean by test. Testing is necessarily a process of incremental, speculative, self-directed search.
  • Although routine output-checking is part of our work, we continually re-focus on non-routine, novel observations. Our attitude must be one of seeking to find trouble, not verifying the absence of trouble – otherwise we will test in shallow ways and blind ourselves to the true nature of the product.
  • Read the situation around you, discover the factors that matter, generate options, weigh your options and build a defensible case for choosing a particular option over all others. Then put that option to practice and take responsibility for what happens next. Learn and get better.
  • Testing is necessarily a human process. Only humans can learn. Only humans can determine value. Value is a social judgment, and different people value things differently.
  • Call them test tools, not ‘test automation.’

Takeaways from Alan Richardson’s “Dear Evil Tester”

Alan Richardson is the Evil Tester. He talks and writes about evil testing – about becoming the best software testers we can be. He gives talks at conferences, offers consultancy services and training over at Compendium Development, and writes about Selenium on Selenium Simplified. He’s been in the testing industry for many years and certainly knows its ins and outs, and he continuously works to keep himself and other software testers in the field updated and valuable. His book, “Dear Evil Tester”, is his way of providing us with an approach to testing founded on responsibility and laughter.

Some lessons from the book:

  • The point is we don’t need permission. We should do whatever it takes.
  • If you short change yourself then that isn’t self-preservation. It is allowing your skills and integrity to slowly rot, wither and die. It is condemning yourself to victimhood as a response to other people’s actions. Don’t do that to yourself. Always work to the highest level that you can be proud of. That is an act of self-preservation.
  • I try very hard to get in the habit of evaluating myself. Not in terms of the actions of others, or in terms of their expectations of me, or in terms of a ‘generic’ tester. I try to evaluate myself in terms of my expectations of me. And I try to continually raise my expectations.
  • Stop doing the same things you did on the first day.
  • Test with intent, with minimal up-front planning, taking responsibility for your path and the communication of your journey.
  • Anyone can test. And you need to be able to test better than anyone off the street. And be capable of demonstrating that you can test better than anyone off the street.
  • I don’t believe that systems ‘regress’. I believe systems ‘are’. I believe systems can exhibit behavior that we don’t want. I believe systems can exhibit behavior that we don’t want and which we have seen in the system before. But I don’t call that regression.
  • The best test tool to help me test is my brain. I then use that to help me find other tools that help me observe, interrogate, and manipulate the system at the different technological levels I’ve found the system to be composed of.
  • I don’t think I do test the quality of a product. I think I test ‘qualities’ of a product. I test observations of those ‘qualities’ against models that I have made of the ‘qualities’ independently of the product. And then I put the product into different states, allowing me to observe the product in different ways. And when I see a difference, I can then offer commentary upon the differences between my observation and its comparison to my model.
  • The hard part is not finding tools, and making lists of tools, and learning how to use tools. The hard part is figuring out what you need to do, to add value to your test approach.
  • I look at my model of the system and I see parts I’m not observing, or manipulating, or interrogating. I try and work out what risks I’m not testing for, because of that. Then I work out which of those risks I want to target. And I go looking for a tool to help.
  • Good and Evil are relative terms. If you accept that, then words hold less power over you. Choose your actions well. Stop labelling them ‘Good’ and ‘Evil’. Take responsibility. Build your own models. Do what you need to. Do what you have to, to be the change you want to see.
  • Not all testers are evil, just the good ones.

Steps In Starting A Cucumber-Watir-Ruby ATDD Automation Project using TestGen (Windows)

I said that I was going to lay low on automation this year. However, I was still curious about implementing a Watir-Ruby counterpart of my existing Java-WebDriver automation framework, with acceptance-test-driven development (ATDD) or behavior-driven development (BDD) functionality built in, just so I could understand from my own experience how different the two frameworks are. Long story short, I learned how to do it, with help from Jeff Morgan’s “Cucumber & Cheese” ebook, and was surprised at how easy it was to get started and, eventually, write code.

If you ever want to try building a Cucumber-Watir-Ruby project from scratch on a Windows machine, try this:

  1. Download the latest version of Ruby for your machine
  2. Run the installer, ticking the following options when prompted
    • Install Tcl/Tk support
    • Add Ruby executables to your PATH
    • Associate .rb and .rbw files with this Ruby installation
  3. Open the command prompt – a shortcut is to press the Windows and R keys at the same time, type in cmd, and then press Enter
  4. Test the Ruby installation by running ruby -v
  5. Close the command prompt if the Ruby version is displayed on the screen. If not, try re-installing Ruby or checking whether Ruby is set in your machine’s Path under system environment variables
  6. Download the Ruby Development Kit for your version of Ruby
  7. Create a new folder under C:\ directory called devkit
  8. Extract the Development Kit files to the newly created directory
  9. Run the command prompt again
  10. In the command prompt, navigate to the devkit directory by running cd C:\devkit
  11. Once inside the directory, type in ruby dk.rb init and then press Enter
  12. After that, type in ruby dk.rb install and press Enter
  13. Then let’s install some gems by running this: gem install cucumber testgen rake bundler yard watir-webdriver page-object fig_newton
  14. When the gems finish installing, run cd C:\Users\Your_Machine_Name in the command prompt, or navigate to the directory where you want to house your automation project
  15. Still in the command prompt, run testgen project Project_Name --pageobject-driver=watir. This should create a new directory structure inside your project folder which Cucumber will use to run your automated checks.
  16. Create a test_filename.feature file (without anything in it) inside your project directory
  17. Test if the project works by going to the project directory and then running cucumber test_filename.feature in the command prompt
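The page-object gem pulled in at Step 13 encourages wrapping each page behind a class so that step definitions never touch raw selectors. The pattern can be sketched in plain Ruby – the `FakeBrowser` below is a hypothetical stand-in for a real Watir browser, so nothing here needs a driver:

```ruby
# A page object exposes intention-revealing actions and hides locators,
# so Cucumber step definitions read almost like the feature file itself.
class LoginPage
  def initialize(browser)
    @browser = browser
  end

  def login_as(username, password)
    @browser.set_text(:id, "username", username)
    @browser.set_text(:id, "password", password)
    @browser.click(:id, "submit")
  end
end

# Hypothetical stand-in for a Watir browser; it simply records the
# interactions it receives, which also makes the page object testable.
class FakeBrowser
  attr_reader :actions

  def initialize
    @actions = []
  end

  def set_text(how, what, value)
    @actions << [:set_text, how, what, value]
  end

  def click(how, what)
    @actions << [:click, how, what]
  end
end

browser = FakeBrowser.new
LoginPage.new(browser).login_as("me", "secret")
```

With the real gem, `LoginPage` would instead `include PageObject` and declare elements such as `text_field(:username, id: 'username')`, but the division of responsibilities – locators in the page class, intent in the steps – stays the same.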

If you encounter problems with a ffi_c file, such as a “cannot load such file — ffi_c (LoadError)” error, doing the following may help:

  1. Run gem uninstall ffi
  2. Then run gem install ffi --platform ruby

If the ffi_c issue persists, uninstalling the latest version of Ruby, installing an older Ruby version (say v2.1.8), and redoing Steps 3-17 may fix the problem.

Note: For a Mac or an Ubuntu machine, the Ruby and Devkit installations may differ.

Underrated Software Testing Tools

  • Pen and paper
  • Whiteboard and markers
  • Notepads
  • Flow charts
  • Mind maps

There’s a lot of hype now around test management and automated checking tools, and I understand why. They’re powerful; they make test management, documentation, and output validation faster and easier for software testers – if used properly, if integrated into the testing process by testers who fully comprehend the complexities of testing as well as of the tools themselves. But that big if matters. Such tools often don’t help beginners much, and can even be counterproductive. When trainees focus on automation and test scripting, they can forget what testing is really about, and concentrating on syntax and assertions instead of test ideas can hinder the practice of testing skills that matter more. Problem solving, collaboration, empathy, storytelling, and thinking are those skills, and the tools listed above help us master them.