Team Questions I

What does our team actually want to accomplish this particular sprint?

Do I know my teammates well enough?

Does everyone on the team feel safe working with each other?

Can I rely on everybody in the team?

What does a user story mean to us?

Do we even need tickets in order to perform our best work?

Do we truly need this specific document to move forward?

How can I help a teammate feel good about the work they’re doing?

What’s the minimum amount of input we need in order to start?

Which 20% of our activities contribute to 80% of the outputs we desire?

Do we believe in what we are building?

Which rules, in reality, help us become the best versions of ourselves? Which don’t?

Being Reminded of All the Phases I’ve So Far Had in Writing Automated Checks

I’m currently in the midst of a test code overhaul, a re-writing project of sorts. It started about a week ago, and so far I’ve made considerable progress on what I wanted to achieve with the rewrite: cleaner and more maintainable code, mostly in terms of test data management and test description language. The number of tests running every day in our Jenkins system has grown noticeably, and I’ve felt it’s been difficult to add certain tests because of how I structured the test data in the past, which I have not upgraded since. The two possible avenues for running tests, on the UI and HTTP layers, also add a bit of complexity, and it’d be nice if I could integrate the two smoothly. It’s an interesting development because I did not plan on doing any re-writing anytime soon, but I guess at the back of my mind I knew it’d happen eventually. So I decided to take a step back from writing more tests and do some cleanup before it gets tougher to change things. I plan to finish everything in about a month or so.
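
To give a flavor of what “cleaner test data management” means here, below is a minimal sketch, with hypothetical names, of the direction I’m taking: pulling hard-coded values into one place that individual checks can override, instead of scattering literals across test files.

```ruby
# Hypothetical sketch: default test data lives in one builder, and each
# check only spells out the detail it actually cares about.
module TestData
  DEFAULT_USER = { name: "Jane Tester", role: "admin", active: true }.freeze

  def self.user(overrides = {})
    DEFAULT_USER.merge(overrides)
  end
end

# A check about inactive users overrides a single field:
inactive_user = TestData.user(active: false)
puts inactive_user.inspect
# => {:name=>"Jane Tester", :role=>"admin", :active=>false}
```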

At the moment, I’m reminded of the phases I’ve gone through in learning to code and writing automated checks in the past few years:

  • Early 2014. It all began with Selenium IDE: giving myself some time to study the basic Selenese commands for writing automated checks and (more importantly) to understand how to properly retrieve the page elements I wanted to manipulate.
  • Mid 2014. Test management in Selenium IDE became difficult as the number of tests grew, hence the decision to switch to Selenium WebDriver. The only programming language background I had back then was C++, limited to functions and logical/conditional operators, so I chose Java to lessen the learning curve.
  • Late 2014. Familiarized myself with Git, which hooked me on making daily commits and appreciating version control. Along the way I learned the concepts of classes and objects.
  • All of 2015 up to Early 2016. I was in a trance, writing code daily and pushing myself to create all the automated checks that I wanted to run for our apps before every release. Tests ran in the Eclipse IDE using TestNG, and I was happy with what I had, except that those end-to-end tests were really slow. Running everything took all night to finish, which was okay for my employer but annoying for me personally.
  • Mid 2016. Re-writing existing tests in Ruby with Cucumber integration started off (when I found Jeff Morgan’s “Cucumber & Cheese” book online) as a side project, for fun and for testing my skill level in programming. And I did have buckets of fun! The experiment told me that there was still a lot I needed to practice if I wanted to write better code, and also that I could be more productive if I switched programming languages. There’s a bit less code to type in Ruby than in Java, and I liked that, plus all the interesting libraries I could use. I switched to Sublime Text and used both Jenkins and the command-line interface more extensively too.
  • Late 2016. As I was looking for ways to speed up the total execution time of our end-to-end tests, which by then took about four hours to complete, I ended up exploring testing apps on the HTTP layer instead of through the UI. That took a lot of studying of how our apps actually behave under the hood: what data gets passed around, how images are actually sent, how to view pages without a browser, how redirections work, among other things. After years of testing apps via the user interface, this was such a refreshing and valuable period, and I wondered why I never knew such a thing existed until then. It isn’t taught extensively to testers, perhaps because it all depends on how an app is structured to run through an API. (A small sketch of such a check follows this list.)
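
Here is what such a check can look like, as a minimal sketch using Ruby’s standard Net::HTTP; the URL and the expected redirect are made up for illustration.

```ruby
require "net/http"
require "uri"

# Hypothetical HTTP-layer check: an unauthenticated request to a
# protected page should redirect to the login page. No browser involved.
uri = URI("https://example.com/account")
response = Net::HTTP.get_response(uri)

unless response.is_a?(Net::HTTPRedirection)
  raise "Expected a redirect, got HTTP #{response.code}"
end
unless response["location"].to_s.include?("/login")
  raise "Unexpected redirect target: #{response['location']}"
end

puts "Passed: #{uri} redirects to #{response['location']}"
```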

And these phases bring me to now, where there’s a healthy dose of API- and UI-layer tests checking all the major app features. It’s all good, just several pieces needing a cleanup, a little parallelization, a better test description language, and great documentation. It’s all good, because the lessons in both programming and testing keep piling up. The two practices differ in mindset, but I think they complement each other, and there’s no reason anyone can’t do both.

Favorite Talks from Agile Testing Days 2016

One of the great ways to stay updated with what’s happening in the testing community is to attend software testing conferences, talk to the speakers and attendees, and ask questions. But if you don’t have the budget to fly over to where the conference is (like me), the next best thing is to wait for the conference recordings to be uploaded online. That’s how I’ve kept up with the latest news in test automation, following both the Selenium Conference and Google’s Test Automation Conference. All these recordings on YouTube paint a picture of what experiences and opportunities are currently out there for software testers.

And recently, I decided to free up some time to follow Agile Testing Days and see what went on over at Potsdam, Germany. It was my way of checking out what’s happening in the agile testing community someplace far away from where I am. Today I’d like to share some of my favorite talks from that event:

  • Designed to Learn (by Melissa Perri, about testing product ideas, experiments and safe spaces for them, and learning what customers want and need)
  • Snow White and the 777.777.777 Dwarfs (by Gojko Adzic, about factors that may likely change testing policies and practices in the coming years as a result of computing power getting cheaper, such as third party platforms, per-request payments, multi-versioning, machine learning, mutation and approval testing, testing after deployment and failure budgets)
  • Continuous Delivery Anti-Patterns (by Jeff ‘Cheezy’ Morgan, on eliminating branches, test data management, stable environments, and keeping code quality high)
  • NoEstimates (by Vasco Duarte, about leaving out estimation in software development projects)
  • From Waterfall to Agile, The Advantage Is Clear (by Michael ‘The Wanz’ Wansley, on software testers being gatekeepers of quality, growing up in a waterfall system, and the wonderful experience that is software testing)

An Experiment on Executable Specifications

What: Create executable specifications for an ongoing project sprint. Executable specifications are written examples of a business need that can be run anytime and act as a source of truth for how applications behave. They are living documentation of what our software does, and they help us focus on solving the unusual and noteworthy problems instead of wasting time on the ones we know we shouldn’t have been worrying about.
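
As a concrete illustration, here is a minimal sketch of what one such specification can look like in Cucumber, the tool I’ve been working with; the feature, the business rule, and the Cart class below are all hypothetical.

```gherkin
# features/free_shipping.feature -- a made-up business rule as an example
Feature: Free shipping
  Scenario: Order total reaches the free-shipping threshold
    Given a cart with items totaling 100 USD
    When the customer checks out
    Then the shipping fee is 0 USD
```

```ruby
# features/step_definitions/free_shipping_steps.rb
# Glue code that makes the example above executable.
Given(/^a cart with items totaling (\d+) USD$/) do |total|
  @cart = Cart.new(total: total.to_i) # Cart is an assumed domain class
end

When(/^the customer checks out$/) do
  @order = @cart.checkout
end

Then(/^the shipping fee is (\d+) USD$/) do |fee|
  raise "Expected #{fee}, got #{@order.shipping_fee}" unless @order.shipping_fee == fee.to_i
end
```

Running `cucumber` executes the scenario against the application; wired into Jenkins, the same file doubles as documentation that is verified on every run.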

How:

  1. Join a software development team for one sprint duration.
  2. Discuss project tasks for that sprint with everyone.
  3. Ask about requirements examples and create executable specifications for those tasks as the application is being written.
  4. Refine the specifications when necessary.
  5. Continuously update the specifications until they pass.
  6. Keep the specifications running on a preferred schedule.

Why: To see if writing executable specifications alongside development is feasible during the duration of a sprint.

Limitations: The tester writes the executable specifications; programmers work as they normally do.

Experiment Realizations:

  • Writing executable specifications can be done during actual app development, provided the tester is experienced with the tools for implementing them and understands well why such specifications are needed.
  • It is of course more beneficial if everyone on the team learns how executable specifications work, writes and runs the specifications, and understands why they’re worth implementing.
  • It will take quite a while before executable specifications become the norm in an existing software development process, if ever. This is a function of whether everyone believes such specifications are actually useful in practice, and then of building the skills and habit of including them in the team’s definition of done.

Lessons from Gojko Adzic’s “Specification By Example”

Automated checking is not a new concept. Gojko Adzic, however, provides us a way to integrate it better into our software development processes. In his book titled “Specification by Example”, he talks about executable specifications that double as living documentation: examples that continuously exercise business rules, help teams collaborate, and, along with the software code, serve as the source of truth for understanding how our applications work. He builds a strong case for the benefits of writing specifications by example by presenting case studies and testimonials from teams who have actually used them in their projects, and I think it is a great way of moving forward, of baking quality in.

Some favorite takeaways from the book:

  • Tests are specifications; specifications are tests.
  • “If I cannot have the documentation in an automated fashion, I don’t trust it. It’s not exercised.” -Tim Andersen
  • Beginners think that there is no documentation in agile, which is not true. It’s about choosing the types of documentation that are useful. There is still documentation in an agile process, and that’s not a two-feet-high pile of paper, but something lighter, bound to the real code. When you ask, “does your system have this feature?” you don’t have a Word document that claims that something is done; you have something executable that proves that the system really does what you want. That’s real documentation.
  • In The Mythical Man-Month, Fred Brooks wrote, “The hardest single part of building a software system is deciding precisely what to build.” Albert Einstein himself said that “the formulation of a problem is often more essential than its solution.”
  • We don’t really want to bother with estimating stories. If you start estimating stories, with Fibonacci numbers for example, you soon realize that anything eight or higher is too big to deliver in an iteration, so we’ll make it one, two, three, and five. Then you go to the next level and say five is really big. Now that everything is one, two, and three, they’re now really the same thing. We can just break that down into stories of that size and forget about that part of estimating, and then just measure the cycle time to when it is actually delivered.
  • Sometimes people still struggle with explaining what the value of a given feature would be (even when asked for an example). As a further step, I ask them to give an example and say what they would need to do differently (work around) if the system did not provide this feature. Usually this helps them express the value of the feature.
  • QA doesn’t write [acceptance] tests for developers; they work together. The QA person owns the specification, which is expressed through the test plan, and continues to own that until we ship the feature. Developers write the feature files [specifications] with the QA involved to advise what should be covered. QA finds the holes in the feature files, points out things that are not covered, and also produces test scripts for manual testing.
  • If we don’t have enough information to design good test cases, we definitely don’t have enough information to build the system.
  • Postponing automation is just a local optimization. You might get through the stories quicker from the initial development perspective, but they’ll come back for fixing down the road. David Evans often illustrates this with an analogy of a city bus: A bus can go a lot faster if it doesn’t have to stop to pick up passengers, but it isn’t really doing its job then.
  • Workflow and session rules can often be checked only against the user interface layer. But that doesn’t mean that the only option to automate those checks is to launch a browser. Instead of automating the specifications through a browser, several teams developing web applications saved a lot of time and effort going right below the skin of the application—to the HTTP layer.
  • Automating executable specifications forces developers to experience what it’s like to use their own system, because they have to use the interfaces designed for clients. If executable specifications are hard to automate, this means that the client APIs aren’t easy to use, which means it’s time to start simplifying the APIs.
  • Automation itself isn’t a goal. It’s a tool to exercise the business processes.
  • Effective delivery with short iterations or in constant flow requires removing as many expected obstacles as possible so that unexpected issues can be addressed. Adam Geras puts this more eloquently: “Quality is about being prepared for the usual so you have time to tackle the unusual.” Living documentation simply makes common problems go away.
  • Find the most annoying thing and fix it, then something else will pop up, and after that something else will pop up. Eventually, if you keep doing this, you will create a stable system that will be really useful.

Notes from Alister Scott’s “Pride and Paradev: A Collection of Agile Software Testing Contradictions”

I stumbled upon Alister Scott’s WatirMelon blog some years back, probably while looking for an answer to a particular question about automation, and found it to be a site worth following. There he talks about flaky tests, raising bugs you don’t know how to reproduce, junior QA professional development, the craziest bug he’s ever seen, writing code, and the classic minesweeper game. He was part of the Watir project in the past, and is now an excellence wrangler over at Automattic (the company behind WordPress). He has also written an intriguing book, titled “Pride and Paradev”, which talks about several of the contradictions we have in the field of software testing. In a nutshell, it explains why there are no best practices, only practices that work well under a certain context.

Here are a number of takeaways from the book:

  • A paradev is anyone on a software team that doesn’t just do programming.
  • Agile software development is all about delivering business value sooner. That’s why we work in short iterations, seek regular business feedback, are accountable for our work and change course before it’s too hard.
  • Agile software development is all about breaking things down.
  • Agile software development is all about communication and flexibility. You must be extremely flexible to work well on an agile team. You can’t be hung up on your role’s title. Constantly delivering business value means doing what is needed, and a team of people with diverse skills thrives as they constantly adapt to get things done. Most importantly, flexibility means variety, which is fun!
  • Delivering software every day is easy. Delivering working software every day is hard. The only way an agile team can deliver working software daily is to have a solid suite of automated tests that tells us it’s still working. The only way to have reliable, up-to-date automated tests is to develop them alongside your software application and run them against every build.
  • You’re testing software day in and day out, so it makes sense to have an idea about the internals of how that software works. That requires a deep technical understanding of the application. The better your understanding of the application is, the better the bugs you raise will be.
  • Hiring testers for technical skills over a testing mindset is a common mistake. A tester who primarily spends their time writing automated tests will spend more time getting their own code working than testing the functionality your customers will use.
  • What technical skills a tester lacks can be made up for with intelligence and curiosity. Even if a tester has no deep underlying knowledge of a system, they can still be very effective at finding bugs through skilled exploratory and story testing. Often non-technical testers have better shoshin, a lack of preconceptions, when testing a system. A technical tester may take technical limitations into consideration, but a non-technical one can be better at questioning why things are the way they are and at rejecting technical complacency. Often non-technical testers will have a better understanding of the subject matter and be able to communicate with business representatives more effectively about issues.
  • You can be very effective as a non-technical tester, but it’s harder work, and you’ll need to develop strong collaboration skills with the development team so they can provide support and guidance for the more technical tasks such as automated testing and test data discovery or creation.
  • Whilst you may think you determine the quality of the system, it’s actually the development team as a whole that does. Programmers are the ones who write the good or poor quality code. Whilst you can provide information and suggestions about problems, the business can and should overrule you; it’s their product, for their business, that you’re building, and you can’t always get what you consider important, as business decisions often trump technical ones.
  • A tester should never be measured on how many bugs they have raised. Doing so encourages testers to game the system by raising insignificant bugs and splitting bugs, which is a waste of everyone’s time, and it further widens the tester-versus-programmer divide. Once a tester realizes their job isn’t to record bugs but to deliver bug-free stories, they will be a lot more comfortable not raising and tracking bugs. The only true measurement of the quality of testing performed is bugs missed, which aren’t recorded anyway.
  • Everything in life is contextual. What is okay in one context, makes no sense in another. I can swear to my mates, but never my Mum. Realizing the value of context will get you a long way.
  • Probably the best thing I have ever learned in life is that no matter what life throws at you, no matter what people do to you or how they treat you, the only thing you can truly control is your response.

Notes from Jeff Nyman’s “Test Description Language” blog post series

What is a test description language? How do we use that language to effectively write tests? In a series of blog posts, Jeff Nyman talks about what a TDL is and how to write better tests with one. The posts are mostly written in terms of how Cucumber checks are used to write automated tests, but the lessons can also be applied to writing test cases if ever there is a need to write them for clients.

Some notes from the series of posts:

  • It’s very easy to confuse the essence of what you’re doing with the tools that you use.
  • Just because I’m using TestLink, Quality Center, QTP, Cucumber, Selenium, and so forth: that’s not testing. Those are tools that are letting me express, manage, and execute some of my tests. The “testing” part is the thinking I did ahead of time in discerning what tests to think about, what tests to write about, and so forth.
  • Testable means that it specifically calls out what needs to be tested.
  • When examples are the prime drivers, you are using the technique of specification by example and thus example-driven testing. When those examples cover business workflows, you are using the technique of scenario-based testing.
  • Requirements, tests, examples — they all talk about the same thing. They talk about how a system will behave once it’s in the hands of users.
  • The goal of a TDL (test description language) is to put a little structure and a little rigor around effective test writing, where “effective test writing” means tests that communicate intent (which corresponds to your scenario title) and describe behavior (the steps of your scenario).
  • It is now, and always has been, imperative that we can express what we test in English (or whatever your common language is). This is a key skill of testers and this is a necessity regardless of the test tool solution that you use to manage and/or automate your testing.
  • So many tests, and thus so much development, falters due to a failure to do this simple thing: make sure the intent is clear and expressive for a very defined unit of functionality.
  • Software development teams are not used to the following ideas: The idea of testing as a design activity. The idea of test specifications that are drivers to development. The idea of effective unit tests that catch bugs at one of the most responsible times: when the code is being written.
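
To make those ideas concrete, here is a hedged before-and-after sketch, with a hypothetical scenario, of the kind of rewrite these posts point toward: same functionality, but the second version communicates intent in its title and describes behavior rather than UI mechanics in its steps.

```gherkin
# Weak: the title names an action, the steps describe UI mechanics.
Scenario: Test login page
  Given I open "/login"
  When I enter "jdoe" into "#username" and "secret" into "#password"
  And I click "#submit"
  Then I should see element "#dashboard"

# Better: the title communicates intent, the steps describe behavior.
Scenario: Registered user reaches her dashboard after signing in
  Given a registered user
  When she signs in with valid credentials
  Then she sees her dashboard
```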