“I’ll wait for the code to get pushed to the Staging server before I continue testing because the data there is a lot more stable.”
“It’d be a waste of time trying to teach people how to program when it feels like they don’t have the motivation for it.”
“There’s a high chance things will break after a code merge with the production branch. It’s been that way ever since. We’ll just have to fix what we can before the release.”
“I wish there was an easy way to spin up a version of our apps in a local environment. That should help us test things faster.”
Some problems become the status quo. It’s worth revisiting them, asking ourselves whether there’s really nothing we can do about them or whether we’ve just become comfortable with complaining.
It’s easy to get caught up in writing a lot of automation, adding more process steps, and implementing stricter policies, believing we need them because we think they’ll keep us safe from risk. Perhaps we’ve been carelessly bitten in the past.
So it is good that we’ve chosen to do something. But we have to be aware of the potential to get lost in the details.
Why did we have to do this work in the first place?
We don’t just want to add one more test to the suite. We don’t just want another meeting. We don’t just want customers to pay a fee when they request something for the nth time. What we want is to remember the big picture; what we want is to focus on helping people.
Knowing how to automate things and building scheduled tests to monitor known application behavior does not guarantee bug-free apps.
They do help boost confidence in what we’ve built, and that makes us more comfortable releasing changes into the wild, because it means we’ve explored a fair amount of feature breadth. It’s not the only strategy I’ll use when given some software to test, but it’s something I’ll practice after every round of thoughtful exploration.
But, even with that tool to augment my testing, I have to keep my guard up. Every time, I should keep asking myself about the more valuable matter of the unknowns. What sort of things have I not considered yet? What scenarios might I still be missing? Have I really tested enough or am I merely being overconfident, just because I can automate?
Rob Lambert released a new book on testing early this year, titled “How To Thrive as a Web Tester”. Like “Remaining Relevant and Employable in a Changing World” and “The Problems with Software Testing”, the new book takes only a few hours to read through and is focused, this time offering ideas that challenge testers to think more about how they perform their day-to-day work and about their skill set. Rob wants us to flourish as testers, and the book provides us with tidbits of actionable items.
Here are a few gems from the book:
- Understand who your customer is.
- Your job is to keep up to date with what’s happening, ask how it might affect your job or how you approach testing. Your job is to get involved with new ideas, movements and embrace the technology that floats your boat.
- Pick something and learn it. It will likely be out-of-date soon, or surpassed by something else. Don’t let that stop you though – just pick something and learn it. What never goes out-of-date is a Tester’s ability to find problems, ask tough questions and work well with others. Focus on developing those skills and you’ll make a great Tester whatever the tech stack you’re working on.
- The tech moves on, the way we deliver it has moved on, the pace of delivery may change but the same problems exist; how to get people to work well together, how to understand customer’s needs, how to scale, how to make profit, how to dominate a market, how to learn, how to build effective and efficient businesses, how to stop bugs affecting customers. Same problems, different tech.
- Your job is to ship software. Sure, if there are problems that will affect the customer or company, you’ll stop the release, but you’ll also work tirelessly to see how you can improve to allow safer releases, better monitoring, better rollbacks, better roll forwards and a more seamless release mechanism.
- You need to remain relevant, employable and effective to a wider business community. You cannot rely on your employers for your career development and security – that is your own responsibility.
- Let go of your job role, or what your job description says, and take ownership of the actual work and any problems that surround it. Take on problems others aren’t owning. Or offer to help those struggling with fixing tricky problems at work. There are always problems, and most problems exist between job roles.
- Testing is the art of asking questions and seeking answers. Never stop asking questions. Study how to ask good questions. Study how to listen for answers. They can come from anywhere, at any time. Good Testers know when to ask questions. They build the muscle memory needed to match patterns, to know when and where to ask, to know what to ask, to know who to ask, and to follow instincts.
- Good Testers solve problems. Even problems that are outside of the responsibility of Testers.
- If you aren’t as good as you can be, it’s because you haven’t yet decided to be the best version of yourself. It’s as simple as that.
- We are all, thankfully, different. There is no single model we should all fit. The world would be a pretty dull place if that were the case.
Elisabeth Hendrickson’s book “There’s Always A Duck” has been around for a number of years, but I only got to read it recently. Now I know what she meant about ducks. The stories are literally about ducks, but they’re about people, too. People are different, and yet we share similarities. We experience things, we communicate with each other, and we learn and get better because of those experiences. Her book tells stories of her adventures and the lessons she’s discovered along the way, and it was nice to get a glimpse of what she saw and felt in her encounters with software project teams and everyone involved.
- The vast majority of programmers I have met are diligent, capable folk. They truly care about the quality of their work and want the software they produce to be useful. They work hard to make sure they are implementing the right features and writing solid code.
- The next time you’re tempted to think of your programmers as idiots, incompetents, or quality hostile, remember that no matter what else they may be, they’re people first. Even if it seems like they’re hostile or incapable, it is far more likely that they are having a very human reaction to a particularly bad situation.
- And before you blame someone else for a mistake, remember the last time you made one. I’ve made some real whopper mistakes in my time. We all have, whether or not we choose to admit them or even remember them. It may be that some programmers don’t care about users, but it’s more likely that bugs are honest mistakes made under difficult circumstances.
- Even when we are speaking the same language and about the same thing, it’s hard enough to communicate.
- The point wasn’t to catch every possible error. What seems to go wrong most often? What errors are difficult to see at first glance, and thus require concentration to prevent? What causes the most damage when it happens?
- Janet doesn’t know anything about the ins and outs of creating software. She probably doesn’t want to know. She just wants to serve her customers well. And this software is not helping. Back at corporate, the Steering Committee, Requirements Analysts, Designers, Programmers and Testers are congratulating themselves on a solid release. What they don’t see is Janet’s pain. The feedback loop is broken. The team back at corporate has no mechanism to find out whether the software is any good. Oh, sure, they’ll detect catastrophic problems that cause servers to go down. But they won’t see the little things that cause long queues at the front desk of the hotel.
- Testers naturally notice details. Not only do we notice, but we think about what we noticed, we attempt to interpret our observations to explain why things might be that way, we ask others if they noticed, we question our assumptions, and we poke and prod at things to see if our understanding is correct. We use our observations to inform us, and in doing so discover underlying causes and effects we otherwise might miss.
- I sometimes fall into the trap of thinking that the first problem I see must be THE problem that needs to be solved. Perhaps the problem I spotted is indeed worth correcting, but I almost never manage to spot the true critical issue at first glance.
- Both fear and excitement stem not from observable reality but rather from speculation. We are speculating that the bugs that we know about and have chosen not to fix are actually as unimportant to our users as they are to us. We are speculating that the fact we have not found any serious defects is because they don’t exist and not because we simply stopped looking. We are speculating that we knew what the users actually wanted in the first place. We are speculating that the tests we decided not to run wouldn’t have found anything interesting. We are speculating that the tests we did run told us something useful. None of it is real until it is in the hands of actual users. The experience those users report is reality. Everything else is speculation.
- It’s not because Agile is about going faster. It’s because structuring our work so that we can ship a smaller set of capabilities sooner means that we can collapse that probability wave more often. We can avoid living in the land of speculation, fooling ourselves into thinking that the release is alive (or dead) based on belief rather than fact. In short, frequent delivery means we live in reality, not probability.
- Hire the right people. If that means keeping a critical position on the team open longer than anticipated, so be it. It’s better to have an understaffed team of highly motivated, talented, skilled people than a fully staffed but ineffective team. Remember that hiring mistakes often take only a few minutes to make, and months of wasted time to undo.
- Listen. There are always signs when a project is in trouble: missed milestones, recurrent attitude problems, general confusion about the project. Sometimes these signs indicate a dysfunctional team, sometimes they’re just normal bumps along the road, and sometimes they are early warning signs of major problems. The only way to tell the difference is to listen carefully to what the team members have to say.
- The best way to get people to accept change is to make it more fun, and more rewarding, to do things the new way.
- Choose a path that takes you in the direction you want to go. Don’t choose a path simply because it takes you away from the swamp you want to avoid.
Rob Lambert is convinced that the software testing community has been in a state of turmoil in recent years, and he’s turned his ideas about some of these problems into a free e-book, writing and talking about them so that those of us who test software for a living can better understand the reality of the industry and what we can do about it.
Some notes from the book:
- Automating software testing is no different from automating tasks outside of work. We are ultimately trying to save labor time, do tasks we would not normally be able to do and automate those tasks that are tedious. And on paper that sounds awesome. Problem is, in reality it’s not quite as sweet as that.
- Best practices are moments in time when something went well, for someone, on some project and in some context. There’s no guarantee that it will work again in the future as your context is forever changing.
- Certifications are certainly a way to make money. Which is fine, it’s got a good business model. No one can begrudge people for making money. But is the certification helping the testing community or is it ruining it? Is it making testers “better” at what they do or simply giving them some generic terminology and a piece of paper? Is that a bad thing? Is it spoiling testers with false education?
A year and a half ago, I knew nothing about software test automation. All I had was a problem: I was getting increasingly flustered over checking all the necessary test scenarios for project releases. Our software systems were getting bigger. Each project iteration added new regression tests, some more complicated than others, while testing time remained short and finite, and I did not have the means to get through everything before deadlines. It was starting to get chaotic because I had no way of looking over each test to make sure things were working fine, and that was bad. Then I came across Selenium, thought it might solve my problem, and started using it (with failures and successes), and the rest is history. I got the solution I wanted after a lot of tinkering, and I am glad I pushed myself to learn.
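In that spirit, here is a minimal sketch of the table-driven kind of regression loop that a tool like Selenium makes practical: name each scenario once, then let the machine run them all and report what broke. The scenario names and checks below are invented stand-ins, not from any real project; an actual suite would drive a browser (for example, via Selenium WebDriver) inside each check function.

```python
# Hypothetical table-driven regression runner. Each scenario is a named
# check that raises AssertionError on failure; a real suite would perform
# browser interactions inside these functions instead of toy assertions.

def check_login_page():
    # Stand-in for a real check, e.g. loading a page and asserting its title.
    assert 2 + 2 == 4

def check_search_results():
    # Stand-in for a real check, e.g. submitting a query and counting results.
    assert "abc".upper() == "ABC"

SCENARIOS = {
    "login page loads": check_login_page,
    "search returns results": check_search_results,
}

def run_regression(scenarios):
    """Run every scenario, collecting failures instead of stopping early."""
    failures = {}
    for name, check in scenarios.items():
        try:
            check()
        except AssertionError as exc:
            failures[name] = str(exc) or "assertion failed"
    return failures

if __name__ == "__main__":
    failed = run_regression(SCENARIOS)
    if failed:
        for name, reason in failed.items():
            print(f"FAIL: {name}: {reason}")
    else:
        print(f"All {len(SCENARIOS)} scenarios passed")
```

The point of the table is the same one the paragraph above makes: once scenarios live in one place, adding the nth regression check is one more entry, not one more afternoon of manual clicking.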