An Encounter with the ‘cucumber.yml was found, but could not be parsed’ Error

One day last week I merged a colleague’s new feature tests into the existing test repository. Routinely after that, I logged in to our dedicated test server, updated the running test suite by retrieving the latest merged code, and ran a ‘bundle update’ because the updated code required the latest versions of its dependencies. The update finished without a hitch after a minute or so. Task done; I just needed to run a sample test to be certain everything was well and good.

An error blows up in my face:

cucumber.yml was found, but could not be parsed. Please refer to cucumber’s documentation on correct profile usage.

Before all this, I had run a simulation of the gem updates on my local machine, including running all tests, to make sure there would be no surprises when updating the test server. But I was still caught off guard; I didn’t see that coming.

What? That’s always the first question. I squinted at the screen and read the error message again; it says something about the cucumber.yml file being unreadable by the machine, as if it were corrupted. I opened the file to check (it contains the various cucumber profile configurations set up for running tests on different test environments), was able to view its contents in the text editor, and inspected it for unfamiliar code. Found nothing. It looks okay, but the machine says otherwise. Why?

I tried an initial Google search for the error message and found a fix written five years ago, saying that the error occurs because of a particular ‘rerun.txt’ file. The post tells me the problem will go away if I delete that file.

Except that I don’t have that file in my code repository. What now?

Maybe there’s really something going on with the test code. Let’s see what happens if I delete a particular profile in the cucumber.yml file. Done. No change in behavior; the error still exists. What if I delete everything? Done. The error didn’t go away. Hmm, that’s odd. It seems the file contents are not the problem, the contents are valid YAML too according to online checkers, and the file is not damaged in any way I can see.
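It’s also worth asking Ruby itself to parse the file instead of trusting only an online validator, since Ruby’s parser is the one cucumber actually uses. A minimal sketch, assuming the file sits in the current directory:

    require 'yaml'

    # Parse cucumber.yml with the same YAML engine cucumber uses; a healthy
    # file prints a plain Ruby hash of the configured profiles. (Cucumber also
    # expands ERB tags first, so a file using <%= %> needs that extra step.)
    puts YAML.load_file('cucumber.yml').inspect

On a healthy machine this prints the profiles as a hash; on our test server, as it turned out much later, even this simple check might have surfaced the real error directly.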

I went back to looking at the search results for possible solutions. People keep telling me it’s about the rerun.txt file. Others say I need to edit a cucumber library file in order to see what test code specifically causes the error for cucumber’s runner. No other clues. Now this is difficult.

I kept researching plausible fixes online for a few hours, thinking there might still be something helpful that I had missed. No such luck. Okay, let’s try editing that library file and see what happens. It was my first time viewing library code, because I’d never had a reason to before, and I told myself that maybe I should check it out more often.

I found the command-line interface profile_loader file and the particular code which loads the cucumber.yml file in question:

Found you, cucumber’s profile loader!
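The gist of that code, paraphrased from what I remember of the cucumber source (not an exact copy; cucumber_file and YmlLoadError are names from the loader itself), goes along these lines:

    require 'erb'
    require 'yaml'

    # Inside cucumber's profile loader: cucumber.yml is run through ERB first,
    # then the result is parsed as YAML
    begin
      @cucumber_yml = YAML.load(ERB.new(IO.read(cucumber_file)).result)
    rescue StandardError
      # Whatever actually failed is swallowed and re-raised as the generic message
      raise(YmlLoadError, 'cucumber.yml was found, but could not be parsed. ' \
        'Please refer to cucumber documentation on correct profile usage.')
    end

That rescue is why the error message is so unhelpful: the original exception is discarded, and every possible failure gets reported with the same generic complaint.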

Commented out a few lines of code as suggested:

Now let’s see what your problem really is
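Concretely, that meant disabling the rescue so the real exception could bubble up, something like this (again a paraphrase, not cucumber’s exact source):

    begin
      @cucumber_yml = YAML.load(ERB.new(IO.read(cucumber_file)).result)
    # rescue StandardError
    #   raise(YmlLoadError, 'cucumber.yml was found, but could not be parsed. ...')
    end

A begin block without a rescue is still valid Ruby, so the loader now crashes with whatever error the YAML parsing actually raises.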

Then ran the sample cucumber test again:

A problem with the Psych module! What’s that? :O

Okay. It says that a private method named ‘load’ is being called on some Psych module. No wonder cucumber is failing. Bummer, I don’t have the faintest idea what this Psych module that cucumber runs is. All I can do is another Google search for the new error message and maybe find a workaround.
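What I learned later, for context: Psych is simply Ruby’s standard YAML engine, the parser doing the real work behind the YAML constant. A quick check makes the relationship visible (assuming any reasonably recent Ruby):

    require 'yaml'

    puts YAML                        # prints "Psych": the YAML constant is an alias for Psych
    puts Psych.load('a: 1').inspect  # prints {"a"=>1}, the same call cucumber makes underneath

So when something in the environment breaks Psych, every YAML parse in the process breaks with it, cucumber’s profile loader included.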

I am reminded that problems in building systems for automated test suites are not limited to the test code and the application under test. Just like any other software, they can sometimes break in areas we know nothing about.

Eventually I found this enlightening post on a GitHub repository:

Interestingly, a very short solution. It was a rubygems bug after all, and all I needed to do was run a ‘gem update --system’ command to get our cucumber tests back up and running.
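For anyone retracing this on their own machine, a minimal way to confirm the update took (the exact fixed rubygems version is whatever the bug report names; this only illustrates the check):

    require 'rubygems'
    require 'yaml'

    puts Gem::VERSION                # should report the freshly updated rubygems
    puts YAML.load('a: 1').inspect   # should parse cleanly again instead of raising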


Contemplating In-Office Knowledge-Sharing Sessions

In the past I tended to prepare presentation slides when I wanted to share something with tester colleagues at work, often clippings of interesting articles which I felt could be useful for our knowledge-sharing sessions. It worked, but after some time the sessions felt monotonous and tedious, probably because I always had to explain in detail the ideas and the lessons behind those clippings. I tried to make my presentations interesting, but after a while hearing the same voice over and over can get old.

These days I’m sharing videos instead. The videos are usually recorded conference talks or tutorials I have watched and learned from in recent years, and I have taken care to list the ones that are insightful, fun, and relatively short. It’s like I’m inviting officemates to watch a short movie for free. The big change: I don’t spend a lot of time talking during the knowledge-sharing anymore. There are of course still bits of discussion before, during, or after the showing of a video, whenever necessary, to explain why I have taken a liking to the talk or to ask them what they understood. We take turns telling stories about our experiences related to the ideas shared by the speaker, which is nice. And compared to the powerpoint presentations I did before, because the speaker is someone from outside, the ideas shared in the talks and tutorials feel more fresh and real than when I’m merely showing quoted paragraphs from blogs. That makes it easier for my colleagues to get curious and actually learn something, which is exactly the point of the activity.


Interesting Talks from Google Test Automation Conference 2016

There’s a lot of stuff going on in the software testing community at the moment, specifically in the field of automation, because of how software is now being deployed on various platforms besides personal computers. Google needs to worry about testing their eyeglasses, virtual reality headsets, and cars. Others care about testing robots and televisions. This is why it is fun to watch talks from conferences like the Selenium Conference or the recently concluded Google Test Automation Conference: I get to find out what problems they’re facing and see how they try to solve them, and maybe learn a thing or two. Sometimes I get to pick up a new tool to try for my own testing too, a great bonus.

Some favorite talks from the conference are:

Notes from Alister Scott’s “Pride and Paradev: A Collection of Agile Software Testing Contradictions”

I stumbled on Alister Scott’s WatirMelon blog some years back, probably while looking for an answer to a particular question about automation, and found it to be a site worth following. There he talks about flaky tests, raising bugs you don’t know how to reproduce, junior QA professional development, the craziest bug he’s ever seen, writing code, and the classic minesweeper game. He was part of the Watir project in the past, but is now an excellence wrangler over at Automattic (the company behind WordPress). He has also written an intriguing book, titled “Pride and Paradev”, which talks about several of the contradictions we hold in the field of software testing. In a nutshell, it explains why there are no best practices, only practices that work well in a certain context.

Here are a number of takeaways from the book:

  • A paradev is anyone on a software team that doesn’t just do programming.
  • Agile software development is all about delivering business value sooner. That’s why we work in short iterations, seek regular business feedback, are accountable for our work and change course before it’s too hard.
  • Agile software development is all about breaking things down.
  • Agile software development is all about communication and flexibility. You must be extremely flexible to work well on an agile team. You can’t be hung up about your role’s title. Constantly delivering business value means doing what is needed, and a team of people with diverse skills thrives as they constantly adapt to get things done. Most importantly, flexibility means variety, which is fun!
  • Delivering software every day is easy. Delivering working software every day is hard. The only way an agile team can deliver working software daily is to have a solid suite of automated tests that tells us it’s still working. The only way to have reliable, up-to-date automated tests is to develop them alongside your software application and run them against every build.
  • You’re testing software day in and day out, so it makes sense to have an idea about the internals of how that software works. That requires a deep technical understanding of the application. The better your understanding of the application is, the better the bugs you raise will be.
  • Hiring testers with technical skills over having a testing mindset is a common mistake. A tester who primarily spends his/her time writing automated tests will spend more time getting his/her own code working instead of testing the functionality that your customers will use.
  • What technical skills a tester lacks can be made up for with intelligence and curiosity. Even if a tester has no deep underlying knowledge of a system, they can still be very effective at finding bugs through skilled exploratory and story testing. Often non-technical testers have better shoshin (a lack of preconceptions) when testing a system. A technical tester may take technical limitations into consideration, but a non-technical tester can be better at questioning why things are the way they are and at rejecting technical complacency. Often non-technical testers will have a better understanding of the subject matter and be able to communicate with business representatives more effectively about issues.
  • You can be very effective as a non-technical tester, but it’s harder work and you’ll need to develop strong collaboration skills with the development team to provide support and guidance for more technical tasks such as automated testing and test data discovery or creation.
  • Whilst you may think you determine the quality of the system, it’s actually the development team as a whole that does that. Programmers are the ones who write the good or poor quality code. Whilst you can provide information and suggestions about problems, the business can and should overrule you; it’s their product for their business that you’re building, and you can’t always get what you consider to be important, as business decisions often trump technical ones.
  • A tester should never be measured on how many bugs they have raised. Doing so encourages testers to game the system by raising insignificant bugs and splitting bugs, which is a waste of everyone’s time, and it further widens the tester vs programmer divide. Once a tester realizes their job isn’t to record bugs but instead to deliver bug-free stories, they will be a lot more comfortable not raising and tracking bugs. The only true measurement of the quality of testing performed is bugs missed, which aren’t recorded anyway.
  • Everything in life is contextual. What is okay in one context makes no sense in another. I can swear to my mates, but never to my Mum. Realizing the value of context will get you a long way.
  • Probably the best thing I have ever learned in life is that no matter what life throws at you, no matter what people do to you or how they treat you, the only thing you can truly control is your response.

About Selenium Conference 2016

I had time over the holidays to binge-watch last year’s Selenium Conference talks, which were as awesome as, if not more so than, the talks from the 2015 conference. Automation in testing has really come a long way, alongside the advancements in technology and software development, and this brings forth new challenges for all of us who test software. It’s not just about Selenium anymore. Mobile automation still proves to be challenging, and soon we’ll have to build repeatable test scenarios for the internet of things: homes, vehicles, stores, among others. Software testing can only get more interesting by the year.

Here are my picks for the best talks from the conference, if you’re curious:

Thank You, 2016!

Some thank you’s to start 2017 right:

Firstly, thank you to Mom and Dad, for always being the loving parents that they are. And to Aris and his growing family, a constant wonder at home.

Thank you to my colleagues over at DirectWithHotels, for the magical experience of being part of the software development team. To Onchie, for his thoughts and programming expertise, which help shape the test suites I write into what they’re capable of doing. To Jefferson, Cedrick, Jonan, and Ryan, for their display of remarkable teamwork that habitually motivates. To my team of junior testers, Andie, Russel, Elaine, Van, and Ron, for keeping up with the testing work we face day in, day out. And to my supervisor, Bobby, for his trust in my experiments and work ethic.

Thank you to the software testers around the world who keep sharing their adventures with the testing community, for showing me how wonderful the work is and continues to be. I gained a lot from Jeff Morgan’s, Alan Richardson’s, and Jeff Nyman’s pieces last year, and I’m sure there’s more to learn because our industry is so diverse. And with that, I’ll do my part to help.

Thank you to Junnell and Nikka, for always being a reservoir of inspiration, for unfailingly being such dear friends.

Thank you to Kleng, for her love, affection, and lessons learned.

And lastly, thank you, dear readers, for the gift of your precious time and attention. I hope to keep my writing rolling on schedule this year, and I hope these posts will be of some value to you.

Testing Goals for 2017

2016 has been a surprising year. There have been some considerable changes at work, many for the better: people came and went, several small process improvements took hold, and automation became more transparent, now with a dedicated testing server. There’s still some work to do though, and that will spill over into the coming year; most likely I’ll keep on adding valuable tests to the team’s daily running suite on the cloud.

Aside from that, I hope to share what I’ve learned this year with the testing team, especially that bit about REST calls, because that’s something they can actually use for testing any web application.
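To make that concrete, here’s a minimal sketch of the kind of REST check I have in mind, in plain Ruby; the endpoint URL is a hypothetical stand-in, not one of our real services:

    require 'net/http'
    require 'json'
    require 'uri'

    # Call an HTTP endpoint directly and assert on the response, no browser needed
    uri = URI('https://example.com/api/health')   # hypothetical endpoint
    response = Net::HTTP.get_response(uri)

    raise "unexpected status: #{response.code}" unless response.code == '200'
    payload = JSON.parse(response.body)           # most REST APIs answer in JSON
    puts payload.inspect

Checks like this run in milliseconds and sit comfortably alongside browser-level cucumber scenarios.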

And I plan to read more next year. Or at least the following books: Specification by Example, Pride and Paradev, The Cartoon Tester, The Psychology of Software Testing, There’s Always a Duck, Rails 4 Test Prescriptions, Creating Great Teams, Explore It!, Willful Blindness, Tools of Titans, and The 4-Hour Workweek. Maybe I’ll slip some fiction into the reading list too.

There’s some coding experiments I’d like to try as well:

  • building a Ruby on Rails application
  • Geb + Groovy + Spock automation framework
  • Java + Serenity automation framework

Overall I don’t think there will be many changes from the things that I currently do, only continuous learning and refinement of the existing workflow. But I could be wrong. We’ll see. 🙂