Some engaging articles I’d like to share today:
- The Failures of ‘Intro to TDD’ (by Justin Searls, about classic test-driven development, code katas, what TDD’s primary benefit actually is, mocks, and discovery testing)
- Branching is Easy. So? Git-flow Is Not Agile (by Camille Fournier, on Git, tooling, teams, developing software with agility, how easy it is to create feature branches with Git, and some reasons why you don’t need git-flow to do so)
- What’s Wrong With Estimates? (by Yorgos Saslis, about estimates, why we are being asked to provide estimates, and the pitfalls of the ways we are using them)
- A New Social Contract for Open Source (by Eran Hammer, on open-source, free code but paid time, sponsorship, making clear rules and specifying benefits, and sustainability)
- Introducing Example Mapping (by Matt Wynne, about conversations to clarify and confirm acceptance criteria, feedback, colored index cards, and the benefits of writing examples when exploring a problem domain)
Automated checking is not a new concept. Gojko Adzic, however, shows us how to integrate it more effectively into our software development processes. In his book “Specification by Example”, he talks about executable specifications that double as living documentation. These are examples that continuously exercise business rules; they help teams collaborate, and, along with the software code, they serve as the source of truth for understanding how our applications work. He builds a strong case for the benefits of writing specifications by example by presenting case studies and testimonials from teams who have actually used it in their projects, and I think that it is a great way of moving forward, of baking quality in.
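To make this concrete, here’s a minimal sketch of what an executable specification might look like. The free-shipping rule, the threshold, and the function names below are made up for illustration, and I’m using a plain pytest example table rather than one of the dedicated BDD tools the book discusses:

```python
# A hypothetical business rule, specified by example: orders of $100
# or more ship for free. The example rows below ARE the specification;
# running them in every build keeps the documentation alive.
import pytest


def shipping_fee(order_total: float) -> float:
    """Toy implementation of the rule under specification (hypothetical)."""
    return 0.0 if order_total >= 100.00 else 9.95


@pytest.mark.parametrize(
    "order_total, expected_fee",
    [
        (99.99, 9.95),   # just below the threshold: customer pays shipping
        (100.00, 0.00),  # exactly at the threshold: shipping is free
        (250.00, 0.00),  # comfortably above: still free
    ],
)
def test_free_shipping_at_100_and_above(order_total, expected_fee):
    assert shipping_fee(order_total) == expected_fee
```

Each row is a concrete example the whole team can argue about in a workshop, and once it’s in the build it becomes documentation that can’t silently go stale.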
Some favorite takeaways from the book:
- Tests are specifications; specifications are tests.
- “If I cannot have the documentation in an automated fashion, I don’t trust it. It’s not exercised.” -Tim Andersen
- Beginners think that there is no documentation in agile, which is not true. It’s about choosing the types of documentation that are useful. There is still documentation in an agile process, and that’s not a two-foot-high pile of paper, but something lighter, bound to the real code. When you ask, “does your system have this feature?” you don’t have a Word document that claims that something is done; you have something executable that proves that the system really does what you want. That’s real documentation.
- A Fred Brooks quote: in The Mythical Man-Month he wrote, “The hardest single part of building a software system is deciding precisely what to build.” Albert Einstein himself said that “the formulation of a problem is often more essential than its solution.”
- We don’t really want to bother with estimating stories. If you start estimating stories with Fibonacci numbers, for example, you soon realize that anything eight or higher is too big to deliver in an iteration, so everything becomes a one, two, three, or five. Then you go to the next level and say five is really big. Once everything is a one, two, or three, they’re effectively the same size. We can just break work down into stories of that size, forget about that part of estimating, and instead measure the cycle time to when it is actually delivered.
- Sometimes people still struggle to explain what the value of a given feature would be, even when asked for an example. As a further step, I ask them to give an example and say what they would have to do differently (work around) if the system did not provide this feature. Usually this helps them express the value of the feature.
- QA doesn’t write [acceptance] tests for developers; they work together. The QA person owns the specification, which is expressed through the test plan, and continues to own that until we ship the feature. Developers write the feature files [specifications] with the QA involved to advise what should be covered. QA finds the holes in the feature files, points out things that are not covered, and also produces test scripts for manual testing.
- If we don’t have enough information to design good test cases, we definitely don’t have enough information to build the system.
- Postponing automation is just a local optimization. You might get through the stories quicker from the initial development perspective, but they’ll come back for fixing down the road. David Evans often illustrates this with an analogy of a city bus: A bus can go a lot faster if it doesn’t have to stop to pick up passengers, but it isn’t really doing its job then.
- Workflow and session rules can often be checked only against the user interface layer. But that doesn’t mean that the only option to automate those checks is to launch a browser. Instead of automating the specifications through a browser, several teams developing web applications saved a lot of time and effort by going right below the skin of the application, to the HTTP layer (see the sketch after this list).
- Automating executable specifications forces developers to experience what it’s like to use their own system, because they have to use the interfaces designed for clients. If executable specifications are hard to automate, this means that the client APIs aren’t easy to use, which means it’s time to start simplifying the APIs.
- Automation itself isn’t a goal. It’s a tool to exercise the business processes.
- Effective delivery with short iterations or in constant flow requires removing as many expected obstacles as possible so that unexpected issues can be addressed. Adam Geras puts this more eloquently: “Quality is about being prepared for the usual so you have time to tackle the unusual.” Living documentation simply makes common problems go away.
- Find the most annoying thing and fix it, then something else will pop up, and after that something else will pop up. Eventually, if you keep doing this, you will create a stable system that will be really useful.
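Picking up the takeaway above about checking workflow and session rules below the skin of the application, here’s a minimal sketch of how that might look with Python’s requests library. The application, URLs, and form fields are hypothetical, just to show the shape of an HTTP-layer check:

```python
# A sketch of checking a workflow rule at the HTTP layer instead of
# driving a browser. The app, URLs, and form fields are hypothetical;
# requests.Session keeps cookies, so session rules still apply.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment


def test_checkout_requires_login():
    session = requests.Session()

    # An anonymous user who tries to check out should be redirected
    # to the login page rather than reaching the checkout workflow.
    response = session.get(f"{BASE_URL}/checkout", allow_redirects=False)
    assert response.status_code == 302
    assert "/login" in response.headers["Location"]

    # After logging in, the same session should reach checkout directly.
    session.post(f"{BASE_URL}/login",
                 data={"username": "test-user", "password": "secret"})
    response = session.get(f"{BASE_URL}/checkout")
    assert response.status_code == 200
```

No browser is launched, yet the check still exercises the real routing and session rules; only rules about the markup or rendering itself would need the UI layer.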
I recently finished reading Lisa Crispin and Janet Gregory’s ‘More Agile Testing’ book, and it made me think about problems we need to take care of at work, some of which are:
- Not enough specification by example before proceeding to the coding phase. Although we do have planning stages and product owners who provide us with feature requirements written as statements, I feel that we often still don’t cover all the necessary examples required by our stories.
- Zero automated unit and integration tests on complex legacy systems. This really hurts, especially on short testing iterations, because automated end-to-end regression tests can only do so much and manual acceptance tests are just not enough to provide a high level of confidence about system quality.
- Technical debt. Rushed feature releases have left us very little time for refactoring sections of code that have always been a pain to develop and fix. Code reviews are usually non-existent too.
- Minimal testing visibility outside the sprint team. The programmers I work with know how I do testing, my capacity, and the quality of the information I provide. Other members of the team do not see as much, which delays feedback and brings in a somewhat different set of problems: only the tester knows how much testing coverage has been reached, teammates do not know how to help, and so on.