Favorite Talks from Agile Testing Days 2016

One of the best ways to keep up with what’s happening in the testing community is to attend software testing conferences, talk to the speakers and attendees, and ask questions. But if you don’t have the budget to fly to wherever the conference is (like me), the next best thing is to wait for the conference recordings to be uploaded online. That’s how I’ve kept up with the latest news in test automation, following both the Selenium Conference and Google’s Test Automation Conference. All these recordings on YouTube paint a picture of what experiences and opportunities are currently out there for software testers.

And recently, I decided to free up some time to follow Agile Testing Days and see what went on over at Potsdam, Germany. It was my way of checking out what’s happening in the agile testing community someplace far from where I am. Today I’d like to share some of my favorite talks from that event:

  • Designed to Learn (by Melissa Perri, about testing product ideas, experiments and safe spaces for them, and learning what customers want and need)
  • Snow White and the 777.777.777 Dwarfs (by Gojko Adzic, about factors likely to change testing policies and practices in the coming years as computing power gets cheaper, such as third-party platforms, per-request payments, multi-versioning, machine learning, mutation and approval testing, testing after deployment, and failure budgets)
  • Continuous Delivery Anti-Patterns (by Jeff ‘Cheezy’ Morgan, on eliminating branches, test data management, stable environments, and keeping code quality high)
  • NoEstimates (by Vasco Duarte, about leaving out estimation in software development projects)
  • From Waterfall to Agile, The Advantage Is Clear (by Michael ‘The Wanz’ Wansley, on software testers being gatekeepers of quality, growing up in a waterfall system, and the wonderful experience that is software testing)

Five People and Their Thoughts (Part 4)

Many of the things I’ve learned about software testing in recent years I discovered through reading the books and blogs of actual software testers who are masters of their craft. Some of them record videos too, of tutorials and webinars and anything else they find valuable for their readers and viewers, and I try to share those I’ve found engaging, five at a time.

Here are some interesting recordings to view today, if you’d like:

About Estimation

Recent impromptu meetings with the software team about our existing development process had me rethinking our way of estimating user stories and tasks. Personally, over the many months, I’ve grown to dislike estimating user stories by complexity points and tasks by man-hours, for two reasons: one, sprint planning sessions take too long and become boring because of them, and two, they do not help the team develop better software. They actually delay us a bit, because we can’t start right away. I’ve recently wanted to remove the practice completely on those grounds.

I vividly remember Gojko Adzic making a point in one blog post about burn-down charts and velocity being negative metrics (like high blood pressure) and warning us against using them to measure software development success. I also clearly remember Ron Jeffries calling estimation evil in an article. If I look solely at the ideas pointed out by these two sharp minds in those posts, I don’t see why we should keep these practices.

Ron, in another article about estimation (two months after his ‘Estimation is Evil’ post), gives me one compelling reason: responsibility. He tells me that it is the team’s responsibility to guide the project well in all the ways we can, including estimation. He tells me that our stakeholders are asking us for help with a business decision, and it is important that we provide them with the best educated guess we can offer. Okay, that is a wonderful counter-argument.

So, we keep estimating. But we must find ways of doing it that help both the stakeholders and ourselves. If the idea of estimation is not the problem, maybe it is the way we estimate that’s annoying us. Maybe it’s because we estimate by points, by man-hours, by numbers. Maybe it’s because daily re-estimations distract the team from simply doing the work. Maybe what the team needs is to simplify how we estimate: use non-numeric estimates (to speed up the process) and provide an estimated timeline (which might be what stakeholders really want to see) instead of documenting day-to-day computed numeric estimates.
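As a minimal sketch of what that simplified estimation could look like, here’s a throughput-based timeline projection; the story counts and sprint length below are made-up values for illustration, not our team’s real figures:

```python
# A minimal sketch: project a delivery timeline from past throughput
# instead of re-estimating every task in man-hours.
# All numbers below are hypothetical, for illustration only.

from math import ceil

finished_per_iteration = [6, 4, 7, 5]  # stories completed in recent sprints
remaining_stories = 23                 # stories left for the release
iteration_length_weeks = 2

# Running average throughput over the last few iterations
average_throughput = sum(finished_per_iteration) / len(finished_per_iteration)

# Iterations (and weeks) we can expect to need, as a rough forecast
iterations_needed = ceil(remaining_stories / average_throughput)
weeks_needed = iterations_needed * iteration_length_weeks

print(f"Average throughput: {average_throughput:.1f} stories per iteration")
print(f"Rough timeline: ~{iterations_needed} iterations (~{weeks_needed} weeks)")
```

A projection like this answers the stakeholder question (‘when?’) without the team having to re-estimate individual tasks every day.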

Takeaways From Gojko Adzic & David Evans’s “50 Quick Ideas To Improve Your User Stories”

Actively seeking out and reading interesting blogs related to the work I do leads me to interesting places. This time around, I ended up purchasing a copy of Gojko Adzic and David Evans’s 50 Quick Ideas To Improve Your User Stories e-book because I was looking for ideas for implementing better software development at the office. I knew there were ways we could be more effective at shipping software (we’re good, but we’re still far behind bigger companies in terms of project releases), and it looks like the lessons in this book will definitely help us improve our agility, one small step at a time.

Here are some of my favorite lessons:

  • Always ask whether a story advances the current business goal.
  • Impact maps can help organizations (and the teams involved in a project) understand why a story needs to be developed and released to production, or why it may only be a pet feature that doesn’t need to be considered right now.
  • Measuring team velocity and creating burn-down charts only tell us how well the members of a team are working with each other. Velocity is a negative metric, like blood pressure: great for surfacing problems, if those are what we’re looking for, but bad for estimating how far we are from achieving a goal, because the team can be moving at good speed while going in the wrong direction.
  • Instead of waiting months to finish a project (and accomplish a goal), why not find ways to ship to customers piece by piece in much less time and get feedback? There are many ways of splitting stories.
  • Stop using numeric story sizes for estimation. Use ‘too big’, ‘too small’, and ‘just right’ instead. This way, teams are forced to shape each story into something they can manage and then go ahead and implement it. Stories that are ‘too big’ often need more discussion and planning, which can start in later sprints. As for estimating how many stories can be squeezed into upcoming iterations, just use the running average of stories the team finished over the past few iterations.
  • To determine if a project is a successful experiment, test the stories with real end-users after release. See if the expected change in behavior (the goal of the story) occurs by measuring changes; a toy sketch of such a check follows this list. If the goal is reached, great. If not, revisit the assumptions that were set for the story. Successfully delivering a feature to users doesn’t mean that the business goal is achieved.
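Here’s a hedged sketch of what that kind of post-release check could look like; the metric, baseline, and target uplift are hypothetical numbers I’ve invented for the example, not anything from the book:

```python
# A toy sketch of checking a story's business goal after release.
# The metric, baseline, and target uplift are hypothetical values.

def goal_reached(baseline: float, observed: float, target_uplift: float) -> bool:
    """Return True if the observed metric improved by at least target_uplift."""
    return observed >= baseline * (1 + target_uplift)

# Suppose the story's goal was a 10% uplift in weekly sign-ups.
baseline_signups = 400.0   # weekly sign-ups before the release
observed_signups = 430.0   # weekly sign-ups after the release

if goal_reached(baseline_signups, observed_signups, target_uplift=0.10):
    print("Goal reached: the expected behavior change occurred.")
else:
    # The feature shipped successfully, yet the business goal wasn't met.
    print("Goal not reached; revisit the story's assumptions.")
```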

Gojko Adzic also writes about impact maps in more detail in his other book, Impact Mapping, if you’re interested.

Five People and Their Thoughts (Part I)

From time to time I stumble upon really interesting articles and videos about software testing and software development practices, carefully thought out and analyzed by great testers and developers. I’d like to share five of them with you today.

About Software Applications And Their Required Regression Testing Time After Each Update

About two years ago, when I was a new software tester at my current company, I started estimating the time I need to perform regression testing on each of the organization’s major products. I wanted to know how much time was enough for me to do my tests properly, without rushing and without dilly-dallying, because it helps me prioritize and pace tasks. If asked whether I can deliver results within a specified period, I would be able to answer yes or no immediately. And if I say I can’t deliver within the initially negotiated time, I would be able to tell my superiors how much more time I require to provide the information they want.

For one application, I found that I have to commit a day, provided the tests are focused, not rushed, and free of distraction from everyone else. That estimate became the basis for all my decisions and expectations. For that product, one day provided enough testing time that every subsequent project release went well. There were no troubles with the required testing time. I had no worries about pacing myself.

That is, until recently. There was a scheduled major release and I told my boss I could finish testing before the deadline, but I wasn’t able to, even after working overtime. Imagine my surprise: that day, the estimated testing time I was so sure of turned out not to be enough.

Several days later, I realized two obvious things that I had never thought could lead to a disaster in decision-making:

  1. software applications grow in complexity as they continue to be updated, and, because of that,
  2. the regression testing time required for growing applications also needs to increase by a properly factored amount.
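To make that second point concrete, here’s a rough sketch of how regression testing time might be factored by application growth. The baseline figures and the linear-growth assumption are mine, for illustration only, not a measured model:

```python
# A rough, assumed model: scale the regression testing estimate
# with how much the application has grown since the baseline was set.
# All figures here are hypothetical.

baseline_hours = 8        # one focused day, as originally estimated
baseline_features = 50    # rough feature count when that estimate was made
current_features = 80     # rough feature count today

# Assume testing effort grows roughly linearly with feature count.
growth_factor = current_features / baseline_features
adjusted_hours = baseline_hours * growth_factor

print(f"Growth factor: {growth_factor:.2f}x")
print(f"Adjusted regression estimate: ~{adjusted_hours:.1f} hours")

# In practice, interactions between features can push the required time
# beyond linear growth, so treat this factor as a floor, not a ceiling.
```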