Lessons from Gojko Adzic’s “Specification By Example”

Automated checking is not a new concept. Gojko Adzic, however, shows us how to integrate it better into our software development processes. In his book “Specification by Example”, he describes executable specifications that double as living documentation: examples that continuously exercise business rules, help teams collaborate, and, alongside the software code itself, serve as the source of truth for understanding how our applications work. He builds a strong case for writing specifications by example by presenting case studies and testimonials from teams who have actually used the approach in their projects, and I think it is a great way of moving forward, of baking quality in.
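To make the idea concrete, here is a minimal sketch, not taken from the book, of what an executable specification can look like. The free-shipping rule and its 50.00 threshold are hypothetical, invented only for illustration, and the book’s case studies use dedicated tools such as FitNesse, Cucumber, or Concordion rather than bare test functions, but the shape is the same: a concrete example of a business rule that a machine can re-run at any time.

    # A minimal sketch of a specification expressed as an executable example.
    # The free-shipping rule and its 50.00 threshold are hypothetical, invented
    # purely to illustrate the shape of such a check (run with pytest).

    def shipping_fee(order_total: float) -> float:
        """Business rule under specification: orders of 50.00 or more ship free."""
        return 0.00 if order_total >= 50.00 else 4.99

    def test_order_at_the_threshold_ships_free():
        # Given an order totalling exactly 50.00
        # When the shipping fee is calculated
        # Then the customer pays nothing for shipping
        assert shipping_fee(50.00) == 0.00

    def test_order_just_below_the_threshold_pays_the_standard_fee():
        # Given an order totalling 49.99
        # When the shipping fee is calculated
        # Then the customer pays the standard 4.99 fee
        assert shipping_fee(49.99) == 4.99

When an example like this fails, it tells us either that the system has regressed or that the documented rule is out of date; either way the documentation stays honest because it is exercised.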

Some favorite takeaways from the book:

  • Tests are specifications; specifications are tests.
  • “If I cannot have the documentation in an automated fashion, I don’t trust it. It’s not exercised.” -Tim Andersen
  • Beginners think that there is no documentation in agile, which is not true. It’s about choosing the types of documentation that are useful. There is still documentation in an agile process, but it’s not a two-foot-high pile of paper; it’s something lighter, bound to the real code. When you ask, “Does your system have this feature?” you don’t have a Word document that claims something is done; you have something executable that proves the system really does what you want. That’s real documentation.
  • Fred Brooks quote: In The Mythical Man-Month he wrote, “The hardest single part of building a software system is deciding precisely what to build.” Albert Einstein himself said that “the formulation of a problem is often more essential than its solution.”
  • We don’t really want to bother with estimating stories. If you start estimating with Fibonacci numbers, for example, you soon realize that anything eight or higher is too big to deliver in an iteration, so you limit yourself to one, two, three, and five. Then you go to the next level and decide that five is really big too. Once everything is a one, two, or three, the stories are practically the same size, so we can just break the work down into stories of that size, forget about that part of estimating, and simply measure the cycle time until the work is actually delivered.
  • Sometimes people still struggle to explain what the value of a given feature would be (even when asked for an example). As a further step, I ask them to give an example and say what they would need to do differently (work around) if the system did not provide the feature. Usually this helps them express the value of the feature.
  • QA doesn’t write [acceptance] tests for developers; they work together. The QA person owns the specification, which is expressed through the test plan, and continues to own that until we ship the feature. Developers write the feature files [specifications] with the QA involved to advise what should be covered. QA finds the holes in the feature files, points out things that are not covered, and also produces test scripts for manual testing.
  • If we don’t have enough information to design good test cases, we definitely don’t have enough information to build the system.
  • Postponing automation is just a local optimization. You might get through the stories quicker from the initial development perspective, but they’ll come back for fixing down the road. David Evans often illustrates this with an analogy of a city bus: A bus can go a lot faster if it doesn’t have to stop to pick up passengers, but it isn’t really doing its job then.
  • Workflow and session rules can often be checked only against the user interface layer. But that doesn’t mean the only option for automating those checks is to launch a browser. Instead of automating the specifications through a browser, several teams developing web applications saved a lot of time and effort by going right below the skin of the application, to the HTTP layer (see the sketch after this list).
  • Automating executable specifications forces developers to experience what it’s like to use their own system, because they have to use the interfaces designed for clients. If executable specifications are hard to automate, this means that the client APIs aren’t easy to use, which means it’s time to start simplifying the APIs.
  • Automation itself isn’t a goal. It’s a tool to exercise the business processes.
  • Effective delivery with short iterations or in constant flow requires removing as many expected obstacles as possible so that unexpected issues can be addressed. Adam Geras puts this more eloquently: “Quality is about being prepared for the usual so you have time to tackle the unusual.” Living documentation simply makes common problems go away.
  • Find the most annoying thing and fix it, then something else will pop up, and after that something else will pop up. Eventually, if you keep doing this, you will create a stable system that will be really useful.
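Returning to the point about checking workflow and session rules below the user interface: here is a minimal sketch, not from the book, of exercising a login-required rule at the HTTP layer with no browser involved. The tiny Flask application and its /login and /dashboard routes are hypothetical, invented only for illustration; any web framework with a test client would do the same job.

    # A minimal sketch of checking a workflow rule at the HTTP layer instead of
    # driving a browser. The Flask app and its routes are hypothetical.
    from flask import Flask, redirect, request, session

    app = Flask(__name__)
    app.secret_key = "test-only"  # needed for session cookies in this sketch

    @app.post("/login")
    def login():
        session["user"] = request.form["user"]
        return redirect("/dashboard")

    @app.get("/dashboard")
    def dashboard():
        if "user" not in session:
            return redirect("/login")  # workflow rule: login is required
        return f"Welcome, {session['user']}"

    def test_dashboard_requires_login():
        client = app.test_client()  # speaks HTTP to the app, no browser launched
        # Not logged in: the workflow rule should redirect us to the login page.
        assert client.get("/dashboard").headers["Location"].endswith("/login")
        # After logging in, the same request goes through.
        client.post("/login", data={"user": "alice"})
        assert client.get("/dashboard").status_code == 200

Checks like this keep the business rule exercised on every build while avoiding the slowest and most brittle part of the stack, the browser.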

History

I was never fond of history subjects back in formal school. I don’t know exactly why, but something about history felt uninteresting to me. Maybe I’m just not keen on memorizing facts and figures, maybe it’s because the things being discussed happened decades in the past and are often terribly difficult to relate to what’s happening now, or maybe I just can’t imagine history questions as puzzles the way I can with math, science, English, and the other subjects. Of course I studied for the quizzes and the exams, and I read the books as much as I could, but I don’t remember enjoying the process of learning history in school. I wonder if reading those books again today would change anything.

In contrast, I like to stay updated on the history of the applications that I test. The product history is akin to a colleague who helps me find out what important changes happened at a particular point in an application’s active life. He takes note of how systems evolved from one day to the next and tells me why some applications are behaving differently from what’s expected. He shows me how software can get sick some of the time, points me to the root causes of problems, and aids programmers in building fixes. He shares stories of the brilliant creative work of people who care, which often goes uncelebrated, as well as stories of the mistakes and challenges those same people have faced. Most of all, he shows the truth about learning, teamwork, growth, and what the road looks like from getting started to continuously delivering.

About Sprint Data Gathering

Looking at a sprint document I designed a few years back (since updated by my scrum master colleagues), I see all sorts of data: planned total story points, actual finished story points, planned total task hours, actual finished task hours (total, per developer, team per day), burndown chart, attendance records, retrospective notes, ticket notes, returned tasks, unplanned tasks, underestimated tasks. That’s a lot of information gathered and compiled every sprint iteration; it’s not really difficult to document, but it does take away a good chunk of time to manage. Back when we were new to scrum, I thought that crunching the numbers would provide insights into how we could improve estimates and productivity, that maybe we’d become faster and better at what we do after performing some mathematics.

It turns out that it doesn’t work that way. It turns out that estimates and productive behavior are a lot more complicated than they seem. It turns out that personalities, preferences, work environment, team dynamics, and other small things we don’t pay much attention to play big roles in being good at software development as a team. Data about these things are not shown in the sprint document, and I don’t have the slightest idea how to measure them. So, looking back at all this material we’re trying to analyze, I’m haunted by questions: Which data are important enough to keep taking notes of, sprint after sprint, and to keep reviewing? Why are they important? How do these data help me define how my team improves in succeeding iterations? Are they just pointing out problems with no solutions in sight? And if I stopped documenting these data, would that help me focus my energy on other, possibly more important, to-dos?