The experience of writing and running automated checks, as well as building some personal apps that run on the terminal, has given me a keen appreciation in recent years of how effective command-line applications can be as tools. I’ve grown fond of quick programming experiments (scraping data, playing with Excel files, among others), which are relatively easy to write, powerful, dependable, and maintainable. Tons of libraries help these tools interface well with the myriad of programs on our desktops and out on the web.
Choosing to read “Build Awesome Command-Line Applications in Ruby 2” is choosing to go on an adventure in writing better CLI apps: finding out how options are designed and built, understanding how apps are configured and distributed, and learning how to actually test them.
Some notes from the book:
- Graphical user interfaces (GUIs) are great for a lot of things; they are typically much kinder to newcomers than the stark glow of a cold, blinking cursor. This comes at a price: you can get only so proficient at a GUI before you have to learn its esoteric keyboard shortcuts. Even then, you will hit the limits of productivity and efficiency. GUIs are notoriously hard to script and automate, and when you can, your script tends not to be very portable.
- An awesome command-line app has the following characteristics:
  - Easy to use. The command line can be an unforgiving place, so the easier an app is to use, the better.
  - Helpful. Being easy to use isn’t enough; the user needs clear direction on how to use an app and how to fix things they might’ve done wrong.
  - Plays well with others. The more an app can interoperate with other apps and systems, the more useful it will be, and the fewer special customizations it will need.
  - Has sensible defaults but is configurable. Users appreciate apps that have a clear goal and opinion on how to do something. Apps that try to be all things to all people are confusing and difficult to master. Awesome apps, however, allow advanced users to tinker under the hood and use the app in ways not imagined by the author. Striking this balance is important.
  - Installs painlessly. Apps that can be installed with one command, on any environment, are more likely to be used.
  - Fails gracefully. Users will misuse apps, trying to make them do things they weren’t designed to do, in environments where they were never designed to run. Awesome apps take this in stride and give useful error messages without being destructive, because they’re developed with a comprehensive test suite.
  - Gets new features and bug fixes easily. Awesome command-line apps aren’t awesome just to use; they are awesome to hack on. An awesome app’s internal structure is geared around quickly fixing bugs and easily adding new features.
  - Delights users. Not all command-line apps have to output monochrome text. Color, formatting, and interactive input all have their place and can greatly contribute to the user experience of an awesome command-line app.
- Three guiding principles for designing command-line applications:
  - Make common tasks easy to accomplish
  - Make uncommon tasks possible (but not easy)
  - Make default behavior nondestructive
- Options come in two forms: short and long. Short-form options let frequent users quickly specify things on the command line without a lot of typing. Long-form options let maintainers of systems that use our app easily understand what the options do without having to go to the documentation. The existence of a short-form option signals to the user that the option is common and encouraged. The absence of a short-form option signals the opposite: that using it is unusual and possibly dangerous. You might think that unusual or dangerous options should simply be omitted, but we want our application to be as flexible as is reasonable. We want to guide our users to do things safely and correctly, but we also want to respect that they know what they’re doing if they want to do something unusual or dangerous.
- Thinking about which behavior of an app is destructive is a great way to differentiate the common things from the uncommon things and thus drive some of your design decisions. Any feature that does something destructive shouldn’t be a feature we make easy to use, but we should make it possible.
- The future of development won’t just be manipulating buttons and toolbars and dragging and dropping icons to create code; the efficiency and productivity inherent to a command-line interface will always have a place in a good developer’s tool chest.
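As a sketch of how these signals might look in practice (the backup app and its flags here are a hypothetical example of mine, not from the book), Ruby’s standard `OptionParser` makes it easy to give a common, safe option both forms while leaving a destructive one long-form only:

```ruby
require 'optparse'

options = { dry_run: false, force: false }

parser = OptionParser.new do |opts|
  opts.banner = 'Usage: backup [options] DIR'

  # Common, encouraged option: gets both a short and a long form
  opts.on('-n', '--dry-run', 'Show what would be backed up without doing it') do
    options[:dry_run] = true
  end

  # Destructive option: long form only, so it stays possible but not easy
  opts.on('--force', 'Overwrite an existing backup archive') do
    options[:force] = true
  end
end

# The long forms read as self-documentation in scripts and cron jobs:
parser.parse!(['--dry-run', '--force'])
puts options.inspect
```

Anyone skimming a script that calls `backup --force` can guess what it does; anyone typing `-n` day to day saves keystrokes on the safe path.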
Here’s a screenshot of something we’ve currently been working on:
It’s not available publicly yet, because it’s still in the early stages of development. The app, hosted locally at http://aspen.reservations.com/#/stay-dates, will redirect to a 404 page unless you have it installed on your machine. As a programmer, the changes I’m making to the app are accessible only on my local computer. Other programmers will only be able to view the changes I’ve made if I push them and they pull those updates to their machines.
Similarly, I’ll be redirected to an error page if I view the app on my phone:
What if a tester or a product owner requests an impromptu demo of the app? What if they want to test it as is? Early constructive feedback is always nice. We can set up the app on their machine, but what if they’re working remotely? I’m pretty sure it would be a pain for them to install the development environment on their own. And what if they want to test it on their mobile phone? Having the environment installed on their computer doesn’t mean they can test the app on an actual phone.
Apparently there’s Ngrok. It’s a quick solution for publicly sharing local apps over the internet.
Here’s a screenshot of me trying out the free version:
And now I can test the app on my phone!
No installations needed on the tester / product owner / client side, just a single bash command from the programmer instead. So cool!
Until recently, what I mostly knew about test-driven development was the concept of red-green-refactor: unit testing application code while building it from the bottom up. Likewise, I thought that mocking was all about mocks, fakes, stubs, and spies, tools that help us fake integrations so we can focus on what we really need to test. Apparently there is so much more to TDD and mocks than many programmers put into use.
Justin Searls calls it discovery testing. It’s a practice that attempts to help us build carefully designed features with focus, confidence, and mindfulness, in a top-down approach. It breaks feature stories down into smaller problems right from the very beginning and forces programmers to slow down a little, enough to be more thoughtful about the high-level design. It uses test doubles to guide the design, shaping a source tree of collaborators and logical leaf nodes as development goes. It favors rewrites over refactors, and helps us become aware of nasty redundant coverage in our tests.
And it categorizes TDD as a soft skill, rather than a technical skill.
He talks about it in more depth in a series of videos, building Conway’s Game of Life (in Java) as a coding exercise. He’s also talked about how we are often using mocks the wrong way, and shared insight on how we can write programs better.
Discovery testing is fascinating, and has a lot of great points going for it. I feel that it takes the concept of red-green-refactor further and drives long-term maintainable software right from the start. I want to be good at it eventually, and I know I’d initially have to practice it long enough to see how much actual impact it has.
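To get a rough flavour of the idea in plain Ruby (the feature and all the names below are my own toy example, not Searls’): working top-down, we imagine the collaborators a subject will need, stand them in as hand-rolled test doubles, and let that pressure shape the design before the real objects exist:

```ruby
# The subject under design delegates to two imagined collaborators:
# one fetches an order's line items, the other sums their amounts.
# Neither collaborator has a real implementation yet.
class ReportsTotal
  def initialize(fetches_items, sums_amounts)
    @fetches_items = fetches_items
    @sums_amounts = sums_amounts
  end

  def report(order_id)
    items = @fetches_items.call(order_id)
    @sums_amounts.call(items)
  end
end

# Hand-rolled doubles stand in for the collaborators, so the test
# only verifies that the subject wires them together correctly.
fake_fetch = ->(order_id) { order_id == 42 ? [3, 4] : [] }
fake_sum   = ->(items) { items.sum }

subject = ReportsTotal.new(fake_fetch, fake_sum)
puts subject.report(42)  # prints 7
```

The point isn’t the arithmetic; it’s that the doubles force us to decide, up front, which responsibilities belong to which collaborator, and the real `fetches_items` and `sums_amounts` objects get written afterwards, each as its own small problem.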
Two things were wonderful about last year’s Agile Testing Days conference talks: content focusing on other valuable things for testers and teams (not just automation), and having as many women speakers as men. I hope they continue that trend.
Here’s a list of my favourite talks from said conference (enjoy!):
- How To Tell People They Failed and Make Them Feel Great (by Liz Keogh, about Cynefin, our innate dislike of uncertainty and love of making things predictable, putting safety nets and allowing for failure, learning reviews, letting people change themselves, building robust probes, and making it a habit to come from a place of care)
- Pivotal Moments (by Janet Gregory, on living on a dairy farm, volunteering, traveling, Toastmasters, Lisa Crispin, Mary Poppendieck and going on adventures, sharing failures and accepting help, and reflecting on pivotal moments)
- Owning Our Narrative (by Angie Jones, on the history of the music industry so far, the changes in environment, tools, and business models musicians have had to go through to survive, and embracing changes and finding ways to fulfil our roles as software testers)
- Learning Through Osmosis (by Maaret Pyhäjärvi, on mob programming and osmosis, creating safe spaces to facilitate learning, and the power of changing some of our beliefs and behaviour)
- There and Back Again: A Hobbit’s/Developer’s/Tester’s Journey (by Pete Walen, on how software was built in the old days, how testing and programming broke up into silos, and a challenge for both parties to go back to excelling at each other’s skills and teaming up)
- 10 Behaviours of Effective Agile Teams (by Rob Lambert, about shipping software and customer service, becoming a more effective employee, behaviours, and communicating well)
Yes, we need to write tests because we think they will help us in the long term, even though they may be more work in the short run. If written with care and with the end in mind, tests serve as living documentation: living because they change as much as the application code changes, and they help us refer back to what a feature does and doesn’t do, in as much detail as we want. Tests let us know which areas of the application matter to us, and every time they run they remind us of where our bearings currently are.
Tests may be user journeys in the user interface, simulations of requests and responses through the app’s API, or small tests within the application’s discrete units; most likely a combination of all these types, perhaps more. What matters is that we find value in whatever test we write, and that this value merits the cost of writing and maintaining it. What’s important is asking ourselves whether a test is actually significant enough to add to the suite.
It is valuable to build a good enough suite of tests. It makes sense to add more tests as we find more key scenarios to exercise, and to remove tests that were necessary in the past but aren’t anymore. However, I don’t think it is particularly helpful to advocate for 100% test coverage, because that shifts the focus to a numbers game, similar to how measuring likes or stars misses the point. I believe it is better to discuss among ourselves, in the context we’re in, which tests are relevant and which are just diving into minutiae. If our test suite helps us deploy our apps with confidence, if our tests allow us to be effective in our testing, if we are continuously able to serve our customers as best we can, then the numbers really don’t amount to much.
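As a tiny illustration of tests as living documentation (the discount rule here is entirely made up), even the test names can record what a feature does and doesn’t do, in Ruby with the bundled Minitest:

```ruby
require 'minitest/autorun'

# A hypothetical discount rule for illustration purposes only.
class Discount
  def self.for(order_total)
    order_total >= 100 ? order_total * 0.10 : 0
  end
end

# The test reads like a specification: future readers can see both
# the behaviour and its boundary without digging through the code.
class DiscountTest < Minitest::Test
  def test_orders_of_100_or_more_get_ten_percent_off
    assert_equal 10.0, Discount.for(100)
  end

  def test_smaller_orders_get_no_discount
    assert_equal 0, Discount.for(99)
  end
end
```

When the rule changes, the test has to change with it, which is exactly what keeps this documentation alive rather than stale.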
For the past few weeks a number of programmers and I have been tasked with building an initial prototype for a system rewrite project, handed to us by management. The merit of such a project is a matter of discussion for another day; for now it is enough to say that the team has been given a difficult challenge, but is at the same time excited about the lessons we know we will gain from such an adventure.
There have been several takeaways already in terms of technology know-how – dockerized applications, front-end development with Vue, repositories as application vendor dependencies, microservices – just some of the things we’ve never done before.
But the greatest takeaway so far is the joy of literally working together, inside a room away from distractions, the team working on one task at a time, focused, taking turns writing application or test code on a single machine, continuously discussing options and experimenting until a problem is solved or until it is time to take a break. We instantly become aligned on what we want to achieve, we immediately help teammates move forward, we learn from each other’s skills and mistakes, we have fun. It’s a wonder we’ve never done much of this before. Perhaps it’s because of working in cubicles. Perhaps it’s because there are hardly enough rooms available for such a software development practice. Perhaps it’s because we’d never heard about mob programming until recently.
I’m sure it won’t be every day, since we have remote work schedules, but I imagine the team spending more days working together like this from here on.
A software development team is composed of people: a combination of programmers, testers, designers, and product owners. One team is visibly different from another because different people work in them, even when both teams belong to the same organisation and share the same mission. One group may be dealing with passionate but inexperienced new hires, another may be led by an introverted and soft-spoken senior, another may be a lot more experienced with automating things, and one team is probably better at communication than others. Each team has a life of its own, always growing, always trying to find out what specific problems it has and what value it can contribute to the organisation, evolving day after day.
We understand that everyone is similar to yet different from everyone else. Why is it, then, that we often insist on managing teams with a one-size-fits-all process? Or am I missing something? I understand that we want teams to get better at what they do, we want them to release software faster and with better quality, but do we really think that one particular process works for everyone? Shouldn’t we instead immerse ourselves in a team, one after another, observe, ask where they are and where they want to go, talk to them, share experiences, help them solve problems, care for them, let them grow into a family with us? Every time I get to join a team, I think about the people in it first, with no particular fixed process in mind, and we find out what works for us as we go, together.