Lessons from Gojko Adzic’s “Specification By Example”

Automated checking is not a new concept. Gojko Adzic, however, shows us how to integrate it better into our software development processes. In his book “Specification by Example”, he talks about executable specifications that double as living documentation: examples that continuously exercise business rules, help teams collaborate, and, along with the software code itself, serve as the source of truth for understanding how our applications work. He builds a strong case for writing specifications by example by presenting case studies and testimonials from teams who have actually used it in their projects, and I think it is a great way of moving forward, of baking quality in.
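
To make the idea concrete, here is a minimal sketch of what an executable specification could look like when expressed as a parameterized pytest check. This is my own illustration rather than an example from the book, and the discount rule and the calculate_discount function are hypothetical:

    import pytest

    def calculate_discount(order_total: float) -> float:
        # Hypothetical business rule: orders of 100 or more get a 10% discount.
        return order_total * 0.10 if order_total >= 100 else 0.0

    # Each row is a concrete example that documents and continuously exercises
    # the business rule, so the table doubles as living documentation.
    @pytest.mark.parametrize(
        "order_total, expected_discount",
        [
            (99.99, 0.00),    # just below the threshold: no discount
            (100.00, 10.00),  # at the threshold: 10% of the total
            (250.00, 25.00),  # well above the threshold: still 10%
        ],
    )
    def test_discount_examples(order_total, expected_discount):
        assert calculate_discount(order_total) == pytest.approx(expected_discount)

The examples read like a specification, and running them on every build keeps the documentation honest.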

Some favorite takeaways from the book:

  • Tests are specifications; specifications are tests.
  • “If I cannot have the documentation in an automated fashion, I don’t trust it. It’s not exercised.” -Tim Andersen
  • Beginners think that there is no documentation in agile, which is not true. It’s about choosing the types of documentation that are useful. There is still documentation in an agile process, and it’s not a two-foot-high pile of paper but something lighter, bound to the real code. When you ask, “does your system have this feature?” you don’t have a Word document that claims something is done; you have something executable that proves that the system really does what you want. That’s real documentation.
  • Fred Brooks quote: in The Mythical Man-Month he wrote, “The hardest single part of building a software system is deciding precisely what to build.” Albert Einstein himself said that “the formulation of a problem is often more essential than its solution.”
  • We don’t really want to bother with estimating stories. If you start estimating stories with Fibonacci numbers, for example, you soon realize that anything eight or higher is too big to deliver in an iteration, so everything becomes a one, two, three, or five. Then you go to the next level and say five is really big too. Once everything is a one, two, or three, they’re really all the same size, so we can just break work down into stories of that size, forget about that part of estimating, and instead measure the cycle time until a story is actually delivered.
  • Sometimes people still struggle to explain what the value of a given feature would be, even when asked for an example. As a further step, I ask them to give an example and say what they would need to do differently (work around) if the system did not provide this feature. Usually this then helps them express the value of the feature.
  • QA doesn’t write [acceptance] tests for developers; they work together. The QA person owns the specification, which is expressed through the test plan, and continues to own that until we ship the feature. Developers write the feature files [specifications] with the QA involved to advise what should be covered. QA finds the holes in the feature files, points out things that are not covered, and also produces test scripts for manual testing.
  • If we don’t have enough information to design good test cases, we definitely don’t have enough information to build the system.
  • Postponing automation is just a local optimization. You might get through the stories quicker from the initial development perspective, but they’ll come back for fixing down the road. David Evans often illustrates this with an analogy of a city bus: A bus can go a lot faster if it doesn’t have to stop to pick up passengers, but it isn’t really doing its job then.
  • Workflow and session rules can often be checked only against the user interface layer. But that doesn’t mean that the only option to automate those checks is to launch a browser. Instead of automating the specifications through a browser, several teams developing web applications saved a lot of time and effort by going right below the skin of the application, at the HTTP layer (see the sketch after this list).
  • Automating executable specifications forces developers to experience what it’s like to use their own system, because they have to use the interfaces designed for clients. If executable specifications are hard to automate, this means that the client APIs aren’t easy to use, which means it’s time to start simplifying the APIs.
  • Automation itself isn’t a goal. It’s a tool to exercise the business processes.
  • Effective delivery with short iterations or in constant flow requires removing as many expected obstacles as possible so that unexpected issues can be addressed. Adam Geras puts this more eloquently: “Quality is about being prepared for the usual so you have time to tackle the unusual.” Living documentation simply makes common problems go away.
  • Find the most annoying thing and fix it, then something else will pop up, and after that something else will pop up. Eventually, if you keep doing this, you will create a stable system that will be really useful.
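
As a rough sketch of what going below the skin of the application can look like (referenced in the takeaway about workflow and session rules above), here is a check that exercises a workflow rule at the HTTP layer with the requests library instead of driving a browser. The endpoint, payload, and redirect behaviour are hypothetical, not taken from the book:

    import requests

    BASE_URL = "http://localhost:8000"  # hypothetical local instance of the app under test

    def test_checkout_requires_login():
        # An anonymous session posting to a protected endpoint should be redirected
        # to the login page instead of completing the checkout workflow.
        session = requests.Session()
        response = session.post(
            f"{BASE_URL}/checkout",
            data={"cart_id": "abc123"},
            allow_redirects=False,
        )
        assert response.status_code in (302, 303)
        assert "/login" in response.headers.get("Location", "")

Because the check talks HTTP directly, it runs much faster and is typically less brittle than a browser-driven version of the same rule.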

A Mismatch Between Expectations and Practices

We want performant, scalable, and quality software. We wish to build and test applications that our customers profess their love for and share with their friends.

And yet:

  • We have little to no unit, performance, API, or integration test coverage
  • The organization does not closely monitor feature usage statistics
  • Some of us do not exactly feel the pains our customers face
  • We don’t have notifications for outdated dependencies, messy migration scripts, and other such issues
  • Some are not curious about understanding how the apps they test and own actually work
  • We have not implemented continuous build tools
  • It is a pain to set up local versions of our applications, even for our own programmers
  • We do not write checks alongside development, and we lack executable specifications
  • Some still think that testing and development happen in silos
  • It is difficult to get support for useful infrastructure, as well as recognition for good work
  • Many are comfortable with the status quo

It seems that our expectations are usually mismatched with our practices. We’re frequently eager to show off our projects but are often less diligent about baking quality in, and so we fail more often than not. What we need are short feedback loops, continuous monitoring, and improved developer productivity, ownership, and happiness. The difficult thing is, it all starts with better communication and culture.

Notes from Alister Scott’s “Pride and Paradev: A Collection of Agile Software Testing Contradictions”

I stumbled upon Alister Scott’s WatirMelon blog some years back, probably looking for an answer to a particular question about automation, and found it to be a site worth following. There he talks about flaky tests, raising bugs you don’t know how to reproduce, junior QA professional development, the craziest bug he’s ever seen, writing code, and the classic minesweeper game. He was part of the Watir project in the past, but is now an excellence wrangler over at Automattic (the company behind WordPress.com). He has also written an intriguing book, titled “Pride and Paradev”, which explores several of the contradictions we have in the field of software testing. In a nutshell, it explains why there are no best practices, only practices that work well in a certain context.

Here are a number of takeaways from the book:

  • A paradev is anyone on a software team that doesn’t just do programming.
  • Agile software development is all about delivering business value sooner. That’s why we work in short iterations, seek regular business feedback, are accountable for our work and change course before it’s too hard.
  • Agile software development is all about breaking things down.
  • Agile software development is all about communication and flexibility. You must be extremely flexible to work well on an agile team. You can’t be hung up about your role’s title. Constantly delivering business value means doing what is needed, and a team of people with diverse skills thrives as they constantly adapt to get things done. Most importantly, flexibility means variety, which is fun!
  • Delivering software everyday is easy. Delivering working software everyday is hard. The only way an agile team can deliver working software daily is to have a solid suite of automated tests that tells us it’s still working. The only way to have reliable, up-to-date automated tests is to develop them alongside your software application and run them against every build.
  • You’re testing software day in and day out, so it makes sense to have an idea about the internals of how that software works. That requires a deep technical understanding of the application. The better your understanding of the application is, the better the bugs you raise will be.
  • Hiring testers with technical skills over having a testing mindset is a common mistake. A tester who primarily spends his/her time writing automated tests will spend more time getting his/her own code working instead of testing the functionality that your customers will use.
  • What technical skills a tester lacks can be made up for with intelligence and curiosity. Even if a tester has no deep underlying knowledge of a system, they can still be very effective at finding bugs through skilled exploratory and story testing. Often non-technical testers have better shoshin, a lack of preconceptions, when testing a system. A technical tester may take technical limitations into consideration, but a non-technical tester can be better at questioning why things are the way they are and rejecting technical complacency. Often non-technical testers will have a better understanding of the subject matter and be able to communicate with business representatives more effectively about issues.
  • You can be very effective as a non-technical tester, but it’s harder work and you’ll need to develop strong collaboration skills with the development team to provide support and guidance for more technical tasks such as automated testing and test data discovery or creation.
  • Whilst you think you may determine the quality of the system, it’s actually the development team as a whole that does that. Programmers are the ones who write the good or poor quality code. Whilst you can provide information and suggestions about problems, the business can and should overrule you; it’s their product for their business that you’re building, and you can’t always get what you consider to be important, as business decisions often trump technical ones.
  • A tester should never be measured on how many bugs they have raised. Doing so encourages testers to game the system by raising insignificant bugs and splitting bugs, which is a waste of everyone’s time, and it further widens the tester vs. programmer divide. Once a tester realizes their job isn’t to record bugs but instead to deliver bug-free stories, they will be a lot more comfortable not raising and tracking bugs. The only true measurement of the quality of testing performed is bugs missed, which aren’t recorded anyway.
  • Everything in life is contextual. What is okay in one context, makes no sense in another. I can swear to my mates, but never my Mum. Realizing the value of context will get you a long way.
  • Probably the best thing I have ever learned in life is that no matter what life throws at you, no matter what people do to you or how they treat you, the only thing you can truly control is your response.

More Lessons from James Bach and Michael Bolton’s “Rapid Software Testing”

I’ve mentioned James Bach and Michael Bolton’s ‘Rapid Software Testing’ ebook before. I am citing it once more today because it is worth revisiting again and again by software testers. Many of the things I’ve come to believe about software testing are here, explained well in interesting lists, charts, and descriptions. These are the things that junior software testers need to fully understand, and the stuff that seniors should thoroughly teach.

Quoting some lines from the slides:

  • All good testers think for themselves. We look at things differently than everyone else so that we can find problems no one else will find.
  • In testing, a lot of the methodology or how-tos are tacit (unspoken). To learn to test you must experience testing and struggle to solve testing problems.
  • Asking questions is a fundamental testing skill.
  • Any “expert” can be wrong, and “expert advice” is always wrong in some context.
  • The first question about quality is always “whose opinions matter?” If someone who doesn’t matter thinks your product is terrible, you don’t necessarily change anything. So, an important bug is something about a product that really bugs someone important. One implication of this is that, if we see something that we think is a bug and we’re overruled, we don’t matter.
  • It’s not your job to decide if something is a bug. You do need to form a justified belief that it might be a threat to product value in the opinion of someone who matters, and you must be able to say why you think so.
  • In exploratory testing, the first oracle to trigger is usually a personal feeling. Then you move from that to something more explicit and defensible, because testing is social. We can’t just go with our gut and leave it at that, because we need to convince other people that it’s really a bug.
  • Find problems that matter, rather than merely checking whether the product satisfies all relevant standards. Instead of pass vs. fail, think problem vs. no problem.
  • How do you think like a tester? Think like a scientist, think like a detective. Learn to love solving mysteries.

Lessons from Bernadette Jiwa’s ‘Meaningful: The Story of Ideas That Fly’

Though not specifically a book about scrum or agile software development, Bernadette Jiwa’s ‘Meaningful’ describes well the central idea behind user stories and how powerful they can be when used properly.

Some favorite lines from the book:

  • Making things is an art. Making things meaningful is an art and a science. When we understand what doesn’t work, we can fix it. When we know what people want, we can give it to them. When we realise what people care about, we can create more meaningful experiences. When we make things people love, we don’t have to make people love our things. When our values align with the worldviews of our customers, we succeed. When business exists to create meaning, not just money, we all win.
  • Early on in the process, we are so focused on ideation and creation that we forget to think about the story we will ask the customer to believe when the product launches, and so we miss an opportunity to make the product or service better.
  • Better is not defined by you; it’s defined by your customers. And just because they saw your Facebook ad, sponsored update or promoted tweet doesn’t mean they cared about it. Just because you reached them doesn’t mean you have affected them. Just because they heard you doesn’t mean they’re listening.
  • ‘Love’ is not a word we are comfortable with using in business circles. Business by definition is transactional, not emotional. But this is the one thing we need to hear (and live) most in business and not just in life. When you genuinely care about and empathise with the people you make things for, those things can’t help but become meaningful. It turns out that the best way to create a solution is to name someone’s problem or aspiration. Meaningful solutions are those that are created for actual people with problems, limitations, frustrations, wants, needs, hopes, dreams and desires that we then have a chance of fulfilling. These solutions are born from investing time in hearing what people say, watching what they do (or don’t do, but want to) and caring about them enough to want to solve that problem or create that solution that takes them to where they want to go.
  • We have less chance of engaging with our audience if we don’t fully understand the context in which they will use our product, no matter how good that product is.
  • Innovation is a by-product of empathy. Winning ideas are a by-product of taking risks. Excellence is the by-product of empowered cultures. Profits are the by-product of happy customers. Success is a by-product of mattering.
  • It’s seeing the invisible problem, not just the obvious problem, that’s important, not just for product design, but for everything we do. You see, there are invisible problems all around us, ones we can solve. But first we need to see them, to feel them.
  • Every one of us, from a software designer to a cab driver, is in the meaning business. Without meaning, products and services are just commodities.

Takeaways from Janet Gregory and Lisa Crispin’s “More Agile Testing”

If I were asked to name female software testers I look up to, I’d probably say I can only name two so far: Lisa Crispin and Janet Gregory. Why them? Because of the many software testing lessons they’ve shared with the testing community through the books they’ve written, especially ‘More Agile Testing’, which is a treasure trove. Some favorite excerpts:

About software testing:

  • Testing expertise is a lot more valuable at the beginning of any project.
  • Testers need T-shaped skills. To be effective on any given team, we need both deep and broad skill sets. Deep knowledge and extensive practice in a single field make sure that we bring something essential to the team. On the other hand, broad knowledge in areas other than our own specialty helps us collaborate with experts in other roles.
  • Abilities such as communication, collaboration, facilitation, problem solving, and prioritization can be the most difficult to master, yet they are the most crucial for success in agile testing.
  • The most useful thinking skill is to know how to help your team address its problems, rather than going in and fixing the symptoms. And you don’t have to be in a management position to provide leadership for your team and help them improve their problem-solving effectiveness.
  • It’s possible to automate lots of tests and still fail to deliver what the customer really wants.
  • Testing does not produce quality; it produces information about quality. At a high level, the outcome we want from testing is confidence in our decisions to release software changes. Better testing produces greater confidence in those decisions. Therefore, the valuable product of effective testing is confidence.
  • Testing is more than just testing software. It is about testing ideas, helping business experts identify the most valuable features to develop next, finding a common understanding for each feature, preventing defects, and testing for business value.

About being a test manager:

  • The role of a test manager extends beyond the scope of the day-to-day work of the agile team. Much of his work involves looking outside of the iteration activities and more at the cultural and strategic needs of the testing operation. A test manager can also be a type of coach for the communities of practice within an organization. The test manager is not there to prescribe but can facilitate learning sessions where interested parties can discuss new ideas. Management and Leadership is the ability to hire and inspire great people, to remove or overcome organizational constraints and impediments that hinder them from doing their best work, to see and exploit the opportunities for people to excel, and to motivate individuals to achieve what is best for the team.

About learning, and agile:

  • Our real goal isn’t implementing agile; it’s delivering products our customers want. And when we start asking why the customers want the feature (the problem they are trying to solve), we’re more likely to build the right thing.
  • In the software profession (as in many others), people seem to forget that it takes time to learn and practice a new skill.
  • Learning is itself an agile process. Learning is not a big bang process: you know nothing, read a book, and two days later you’re an expert. Learning, like features, is something you do in iterations, adding a bit more knowledge at a time, and then building on it the next iteration. It’s not a race to the finish, and there is always something more to learn.
  • Empathy is essential for providing good feedback.
  • Changing an established culture is difficult.

When Is Software Stable?

Is it when all reported bugs have been dealt with and fixed? Or is it when all test cases written based on explicit software requirements are passing? Is it when our app users do not find anything wrong with the app, or is it when some features in the program still need work but our customers can’t seem to stop talking to their tribe about how much they love what we’ve done? Is the system stable when it has many followers and likes but does not generate revenue for us, or is it unstable when some clients are delighted but others are not? Is the software doing well when there’s ROI in exchange for disregarding some of our users’ rights? Is it stable because the tester or the programmer or the product owner or the CEO said so? Is it stable when business goals are met, or is it when product reviews are top-notch and customer feedback is off the charts?