Extending The Avenues Of Performing Testing

Last Wednesday afternoon I anxiously asked my boss for permission to make changes to our application code repository. I said I wanted to try fixing some of the reported bugs listed in our tracking system whenever there was no one else available to pass them to. I made the case that I wouldn’t pose any risk because of the code review process built into our repository management tool: there was no way for me to merge any changes without first getting feedback from a senior developer.

He smiled at me and gleefully said “Go ahead. I’m not going to stop you,” to which I beamed and heartily replied “Thanks, boss!”

This is a turning point in my software testing career: being able to work on the application code directly as needed. It addresses one of my biggest frustrations, not being able to find out for myself where a bug lives in the code and fix it if necessary. It’s always a pain to be able to do nothing but wait for a fix, and for that fix to depend on whoever happens to be available. In my head I think that I’m available and maybe I can do something, but without access to the application and the code that runs it, I can’t do anything until I’m given the rights to do so. That’s how it has always been. Software testers are often not expected to fiddle with code, at least in my experience, especially in the past when automation was not yet widely recognized as useful for testing. Now that I have the skills and the permission to work on the application repository, I feel that my reach for making an impact on application quality has expanded remarkably.

Now, bug-fixing is not software testing work in the traditional sense. But I figured there’s no harm in trying to fix bugs and learning the nitty-gritty details of how our legacy applications actually run deep in the code. I believe that learning the technical stuff helps me communicate better with programmers, and it helps me test applications more efficiently too. Of course I have to consistently remind myself that I am a software-tester-first, programmer-second guy and be careful not to fill my days playing with code while forgetting to explore our applications themselves. That said, there are ideas I really want to experiment with in our software development process, towards the goal of improving code quality and feedback, and I can only tinker with those ideas inside the application repository itself. Dockerized testing environments, code linting, and unit tests are three things I want to start building for our team, ideas I consider very helpful for writing better code but which have not been given enough priority through the years.
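To make those ideas a little more concrete, here’s a minimal sketch of the kind of unit test I’d like to start with, written in Python purely for illustration. The `calculate_discount` function, its thresholds, and the pytest/flake8 tooling are assumptions of mine for the sake of the example, not details of our actual applications.

```python
# test_discounts.py -- a first unit test for a hypothetical pricing helper.
# Run with:   python -m pytest test_discounts.py
# Lint with:  flake8 test_discounts.py   (once linting is wired into the repo)

import pytest


def calculate_discount(order_total):
    """Hypothetical legacy rule: 10% off orders of 1000 or more,
    5% off orders of 500 or more, otherwise no discount.
    In a real repository this would be imported from the application
    code instead of being defined here."""
    if order_total >= 1000:
        return order_total * 0.10
    if order_total >= 500:
        return order_total * 0.05
    return 0.0


@pytest.mark.parametrize(
    "order_total, expected",
    [
        (499.99, 0.0),                      # just below the first threshold
        (500.00, 25.0),                     # boundary: 5% kicks in
        (999.99, pytest.approx(49.9995)),   # just below the second threshold
        (1000.00, 100.0),                   # boundary: 10% kicks in
    ],
)
def test_discount_boundaries(order_total, expected):
    # Boundary values are where reported bugs tend to cluster, so they are
    # the first cases worth pinning down in an automated check.
    assert calculate_discount(order_total) == expected
```

The same file could then run inside a Dockerized test container so everyone works against an identical environment; I kept the function inline here only so the sketch runs on its own.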

I think I’m still testing software, just extending the knowledge and practice of the various ways I perform testing.

Takeaways from Margaret Heffernan’s “Willful Blindness”

To answer a question about exploratory testing, Alister Scott recommends that testers read Margaret Heffernan’s book “Willful Blindness”. He tells us that we have to be less blind when we’re exploring in order to find bugs in systems under test. We have to keep on looking, we have to continuously question things, we have to choose to know and understand how the system works. Reading Margaret’s book has helped me realize what being willfully blind means and how we become blind without noticing. It has made me more aware of the different ways I can misjudge things, and that awareness helps me get better. Cognitive limits, biases, division of labor, money, hierarchy, relationships, feelings of belonging or ostracism: all these and more play a part in how we behave in various situations. They affect how we perform our software testing too.

Some takeaways:

  • We can’t notice and know everything: the cognitive limits of our brain simply won’t let us. That means we have to filter or edit what we take in. So what we choose to let through and to leave out is crucial. We mostly admit the information that makes us feel great about ourselves, while conveniently filtering whatever unsettles our fragile egos and most vital beliefs.
  • Most people marry other people very like themselves: similar height, weight, age, background, IQ, nationality, ethnicity. We may think that opposites attract, but they don’t get married. Sociologists and psychologists, who have studied this phenomenon for decades, call it “positive assortative mating” – which really just means that we marry people like ourselves. When it comes to love, we don’t scan a very broad horizon. People may have an interest in people who are different from themselves but they don’t marry them. They’re looking for confirmation, for comfort.
  • All personalization software does the same thing: make our lives easier by reducing overwhelming choice. And software is doing it the same way that our brain does, by searching for matches. This is immensely efficient: It means that the brain can take shortcuts because it is working with what it already knows, not having to start from scratch. When we find what we like, part of our pleasure is the joy of recognition. But the flip side of that satisfaction is that we are rejecting a lot along the way.
  • We like ourselves, not least because we are known and familiar to ourselves. So we like people similar to us – or that we just imagine might have some attributes in common with us. They feel familiar too, and safe. And those feelings of familiarity and security make us like ourselves more because we aren’t anxious. We belong. Our self-esteem rises. We feel happy. Human beings want to feel good about themselves and to feel safe, and being surrounded by familiarity and similarity satisfies those needs very efficiently. The problem with this is that everything outside that warm, safe circle is our blind spot.
  • Bias is pervasive among all of us, whether we think we’re biased or not.
  • The argument for diversity is that if you bring together lots of different kinds of people, with a wide range of education and experience, they can identify more solutions, see more alternatives to problems, than any single person or homogenous group ever could. Groups have the potential, in other words, to be smarter than individuals; that’s the case put forward so compellingly by James Surowiecki in his book, The Wisdom of Crowds. But the problem is that, as our biases keep informing whom we hire and promote, we weed out that diversity and are left with skyscrapers full of people pretty much the same.
  • But while it’s true that all of us now have access to more information than ever before in history, for the most part we don’t use it. Just like newspapers, we read the blogs that we agree with – but there we encounter a virtually infinite echo chamber, as 85 percent of blogs link to other blogs with the same political inclination.
  • Our blindness grows out of the small, daily decisions that we make, which embed us more snugly inside our affirming thoughts and values. And what’s most frightening about this process is that as we see less and less, we feel more comfort and greater certainty. We think we see more – even as the landscape shrinks.
  • Indeed, there seems to be some evidence not only that all love is based on illusion—but that love positively requires illusion in order to endure. When you love someone, he or she may even start to adapt to your illusion of him or her. So there is a kind of virtuous circle: you think better of your beloved who starts to live up to your illusions and so you love him or her more. It sounds a little like a fairy tale, but kissing frogs may make them act like princes or princesses. It is indeed a kind of magic, illusions transforming reality. We don’t have to love people for who they are but for who we think they are, or need them to be. This is something everyone does: overlook the flaws, discount the disappointments, focus on what works. Our love for each other allows us, even compels us, to see the best in each other.
  • One of the many downsides of living in communities in which we are always surrounded by people like ourselves is that we experience very little conflict. That means we don’t develop the tools we need to manage conflict and we lack confidence in our ability to do so. We persuade ourselves that the absence of conflict is the same as happiness, but that trade-off leaves us strangely powerless.
  • Because it takes less brain power to believe than to doubt, we are, when tired or distracted, gullible. Because we are all biased, and biases are quick and effortless, exhaustion makes us favor the information we know and are comfortable with. We’re too tired to do the heavier lifting of examining new or contradictory information, so we fall back on our biases, the opinions and the people we already trust.
  • People stay silent at work—bury their heads in the sand—because they don’t want to provoke conflict by being, or being labeled, troublemakers. They may not like the status quo but, in their silence, they maintain it, believing (but also ensuring) the status quo can’t be shifted.
  • Hierarchies, and the system of behaviors that they require, proliferate in nature and in man-made organizations. For humans, there is a clear evolutionary advantage in hierarchies: a disciplined group can achieve far more than a tumultuous and chaotic crowd. Within the group, acceptance of the differing roles and status of each member ensures internal harmony, while disobedience engenders conflict and friction. The disciplined, peaceful organization is better able to defend itself and advance its interests than is a confused, contentious group that agrees on nothing. The traditional argument in favor of hierarchies and obedience has been that of the social contract: It is worth sacrificing some degree of individuality in order to ensure the safety and privileges achieved only by a group. When the individual is working alone, conscience is brought into play. But when working within a hierarchy, authority replaces individual conscience. This is inevitable, because otherwise the hierarchy just doesn’t work: too many consciences and the advantage of being in a group disappears. Conscience, it seems, doesn’t scale.
  • Human beings hate being left out. We conform because to do so seems to give our life meaning. This is so fundamental a part of our evolutionary makeup that it is strong enough to make us give the wrong answers to questions, as in Asch’s line experiments, and strong enough to make us disregard the moral lessons we’ve absorbed since childhood. The carrot of belonging and the stick of exclusion are powerful enough to blind us to the consequences of our actions.
  • Independence, it seems, comes at a high cost.
  • The larger the number of people who witness an emergency, the fewer who will intervene. The bystander effect demonstrates the tremendous tension between our social selves and our individual selves. Left on our own, we mostly do the right thing. But in a group, our moral selves and our social selves come into conflict, which is painful. Our fear of embarrassment is the tip of the iceberg that is the ancient fear of exclusion, and it turns out to be astonishingly potent. We are more likely to intervene when we are the sole witness; once there are other witnesses, we become anxious about doing the right thing (whatever that is), about being seen and being judged by the group.
  • It is so human and so common for innovation to fail not through lack of ideas but through lack of courage. Business leaders always claim that innovation is what they want but they’re often paralyzed into inaction by hoping and assuming that someone else, somewhere, will take the risk.
  • The greatest evil always requires large numbers of participants who contribute by their failure to intervene.
  • Technology can maintain relationships but it won’t build them. Conference calls, with teams of executives huddled around speakerphones, fail to convey personality, mood, and nuance. You may start to develop rapport with the person who speaks most—or take an instant dislike to him or her. But you’ll never know why. Nor will you perceive the silent critic scowling a thousand miles away. Videoconferencing distracts all its participants who spend too much time worrying about their hair and whether they’re looking fat, uncomfortable at seeing themselves on screen. The nervous small talk about weather—it’s snowing there? It’s hot and sunny here—betrays anxiety about the vast differences that the technology attempts to mask. We delude ourselves that because so many words are exchanged—e-mail, notes, and reports—somehow a great deal of communication must have taken place. But that requires, in the first instance, that the words be read, that they be understood, and that the recipient know enough to read with discernment and empathy. Relationships—real, face-to-face relationships—change our behavior.
  • The division of labor isn’t designed to keep corporations blind but that is often its effect. The people who manufacture cars aren’t the people who repair them or service them. That means they don’t see the problems inherent in their design unless a special effort is made to show it to them. Software engineers who write code aren’t the same as the ones who fix bugs, who also aren’t the customer-service representatives you call when the program crashes your machine. Companies are now organized—often for good reasons—in ways that can facilitate departments becoming structurally blind to one another.
  • We want money for a very good reason: it makes us feel better. Money does motivate us and it does make us feel better. That’s why companies pay overtime and bonuses. It may not, in and of itself, make us absolutely happy—but, just like cigarettes and chocolate, our wants are not confined to what’s good for us. The pleasure of money is often short-lived, of course. Because there are always newer, bigger, flashier, sweeter products to consume, the things we buy with money never satisfy as fully as they promise. Psychologists call this the hedonic treadmill: the more we consume, the more we want. But we stay on the treadmill, hooked on the pleasures that, at least initially, make us feel so good.
  • Motivation may work in ways similar to cognitive load. Just as there is a hard limit to how much we can focus on at one moment, perhaps we can be motivated by only one perspective at a time. When we care about people, we care less about money, and when we care about money, we care less about people. Our moral capacity may be limited in just the same way that our cognitive capacity is.
  • Money exacerbates and often rewards all the other drivers of willful blindness: our preference for the familiar, our love for individuals and for big ideas, a love of busyness and our dislike of conflict and change, the human instinct to obey and conform, and our skill at displacing and diffusing responsibility. All these operate and collaborate with varying intensities at different moments in our life. The common denominator is that they all make us protect our sense of self-worth, reducing dissonance and conferring a sense of security, however illusory. In some ways, they all act like money: making us feel good at first, with consequences we don’t see. We wouldn’t be so blind if our blindness didn’t deliver the benefit of comfort and ease.
  • Once you are in a leadership position, no one will ever give you the inner circle you need. You have to go out and find it.
  • We make ourselves powerless when we choose not to know. But we give ourselves hope when we insist on looking. The very fact that willful blindness is willed, that it is a product of a rich mix of experience, knowledge, thinking, neurons, and neuroses, is what gives us the capacity to change it. We can learn to see better, not just because our brain changes but because we do. As all wisdom does, seeing starts with simple questions: What could I know, should I know, that I don’t know? Just what am I missing here?

Asking More Questions

What problems do we face every day that nobody seems to be trying to solve?

What’s something I believe to be true that a lot of people do not agree with? Why do I believe the opposite of what they do?

Where do we want to explore next?

What do we stand for? What values do we deem irrevocable?

Which habits do we need to exercise? Which ones should we stop?

Are we doing something for fun? How did we end up doing this particular thing we’re into now?

What questions do we need to ask someone?

Who do we need to thank?

Which stories about ourselves should we rethink?

Notes from Creating Great Teams (How Self-Selection Lets People Excel) by Sandy Mamoli and David Mole

Here’s an always challenging question: how are great software development teams formed? Managers, scrum masters, all of us struggle to create continuous progress within our groups. And we know there are lots of factors behind that: communication, skills, individual quirks. Sandy Mamoli and David Mole tell us that self-selection is the answer, and their book, Creating Great Teams (How Self-Selection Lets People Excel), provides the details.

Here are some notes from the book:

  • Fundamentally, two factors determine whether a group will forge itself into a team: 1) Do these people want to work on this problem? 2) Do these people want to work with each other? Neither a computer program nor a manager can answer these questions. Only the employees who will do the work can.
  • Self-selection is a facilitated process of letting people self-organize into small, cross-functional teams. Based on the belief that people are at their happiest and most productive if they can choose what they work on and who they work with, we think it’s the fastest and most efficient way to form stable teams.
  • The best motivators are autonomy, mastery, and purpose. Autonomy provides employees with freedom over some or all of the four main aspects of work: when they do it, how they do it, who they do it with, and what they do. Mastery encourages employees to become better at a subject or task that matters to them and allows for continuous learning. Purpose gives people an opportunity to fulfill their natural desire to contribute to a cause greater and more enduring than themselves.
  • No one chooses to work on more than one team or project. Time and again organizations fall into the trap of optimizing resources rather than focusing on outcomes. People often believe that multitasking, having people work across several projects, and focusing on resource utilization are the keys to success, when in reality they’re not.
  • People communicate face to face. There are barely any discussions about process or how to communicate. Team members just talk and coordinate and collaborate as needed. Things are much faster that way.
  • In the spirit of letting people control their way of working, we never mandate whether a squad should run scrum, kanban, their own special creation, or a traditional way of working. Following Daniel Pink’s principles of motivation, one of the key forms of autonomy is being in control of your processes. Giving people autonomy over who they work with should be extended by letting them choose how they work together.
  • There are two agile practices we believe should remain mandatory: retrospectives and physical story walls (if you are co-located).
  • It’s fair to say that sometimes employees don’t want to work with each other. And that’s okay. People know whether they’re going to gel in a squad with a particular person, and if not, it makes sense they would choose not to work with him or her. Self-selection, unlike management selection, allows them to make that choice.

Be Brave To Ask People To Join You On A Quest

I have always thought of myself as a good listener. I believe that’s a big part of why I have thrived working with both programmers and product owners, a software tester in the midst of all sorts of people. It’s a fundamental skill I have learned to be proficient in.

What’s not my strong suit, though, is asking people for what I want or need. Of course asking for little things or asking probing or clarifying questions isn’t that difficult; what’s tough is inviting people to join you on a quest, asking friends to do something interesting together, or asking a fascinating lady out to lunch. It’s a fear that doesn’t get any easier even when I know exactly what the problem is. And the solution is the same as with all skills: practice. It has been and always will be a struggle, so I need to continuously remind myself to be brave.

Takeaways from Elisabeth Hendrickson’s “There’s Always A Duck”

Elisabeth Hendrickson’s book “There’s Always A Duck” has been around for a number of years, but I have only recently been able to read it. Now I know what she meant about ducks. The stories are literally about ducks, but they’re also about people. People are different, and yet we share similarities. We experience things, we communicate with each other, and we learn and get better because of those experiences. Her book tells stories of her adventures and the lessons she’s discovered along the way, and it was nice to get a glimpse of what she saw and felt in her encounters with software project teams and everyone involved.

Some takeaways:

  • The vast majority of programmers I have met are diligent, capable folk. They truly care about the quality of their work and want the software they produce to be useful. They work hard to make sure they are implementing the right features and writing solid code.
  • The next time you’re tempted to think of your programmers as idiots, incompetents, or quality hostile, remember that no matter what else they may be, they’re people first. Even if it seems like they’re hostile or incapable, it is far more likely that they are having a very human reaction to a particularly bad situation.
  • And before you blame someone else for a mistake, remember the last time you made one. I’ve made some real whopper mistakes in my time. We all have, whether or not we choose to admit them or even remember them. It may be that some programmers don’t care about users, but it’s more likely that bugs are honest mistakes made under difficult circumstances.
  • Even when we are speaking the same language and about the same thing, it’s hard enough to communicate.
  • The point wasn’t to catch every possible error. What seems to go wrong most often? What errors are difficult to see at first glance, and thus require concentration to prevent? What causes the most damage when it happens?
  • Janet doesn’t know anything about the ins and outs of creating software. She probably doesn’t want to know. She just wants to serve her customers well. And this software is not helping. Back at corporate, the Steering Committee, Requirements Analysts, Designers, Programmers and Testers are congratulating themselves on a solid release. What they don’t see is Janet’s pain. The feedback loop is broken. The team back at corporate has no mechanism to find out whether the software is any good. Oh, sure, they’ll detect catastrophic problems that cause servers to go down. But they won’t see the little things that cause long queues at the front desk of the hotel.
  • Testers naturally notice details. Not only do we notice, but we think about what we noticed, we attempt to interpret our observations to explain why things might be that way, we ask others if they noticed, we question our assumptions, and we poke and prod at things to see if our understanding is correct. We use our observations to inform us, and in doing so discover underlying causes and effects we otherwise might miss.
  • I sometimes fall into the trap of thinking that the first problem I see must be THE problem that needs to be solved. Perhaps the problem I spotted is indeed worth correcting, but I almost never manage to spot the true critical issue at first glance.
  • Both fear and excitement stem not from observable reality but rather from speculation. We are speculating that the bugs that we know about and have chosen not to fix are actually as unimportant to our users as they are to us. We are speculating that the fact we have not found any serious defects is because they don’t exist and not because we simply stopped looking. We are speculating that we knew what the users actually wanted in the first place. We are speculating that the tests we decided not to run wouldn’t have found anything interesting. We are speculating that the tests we did run told us something useful. None of it is real until it is in the hands of actual users. The experience those users report is reality. Everything else is speculation.
  • It’s not because Agile is about going faster. It’s because structuring our work so that we can ship a smaller set of capabilities sooner means that we can collapse that probability wave more often. We can avoid living in the land of speculation, fooling ourselves into thinking that the release is alive (or dead) based on belief rather than fact. In short, frequent delivery means we live in reality, not probability.
  • Hire the right people. If that means keeping a critical position on the team open longer than anticipated, so be it. It’s better to have an understaffed team of highly motivated, talented, skilled people than a fully staffed but ineffective team. Remember that hiring mistakes often take only a few minutes to make, and months of wasted time to undo.
  • Listen. There are always signs when a project is in trouble: missed milestones, recurrent attitude problems, general confusion about the project. Sometimes these signs indicate a dysfunctional team, sometimes they’re just normal bumps along the road, and sometimes they are early warning signs of major problems. The only way to tell the difference is to listen carefully to what the team members have to say.
  • The best way to get people to accept change is to make it more fun, and more rewarding, to do things the new way.
  • Choose a path that takes you in the direction you want to go. Don’t choose a path simply because it takes you away from the swamp you want to avoid.

Where the Fun and Challenge Is

Ultimately, what makes software testing fun and challenging is that it requires you to interact with people. You have to talk to both the software development team and the application users. You have to understand what they care about and why. You have to ask questions, and the answers to those questions guide what you do when you test. You can test in a silo, but that doesn’t help you learn anything deeper; you can’t thrive if you stick to the things you already know. You grow because you take in all these perspectives and beliefs that other people have and test them against your own, with the goal of making better judgments about how to test the thing you’re testing and why you’re testing it in the first place. It’s not easy, because you have to care. The challenge is that you have to learn a great deal about many things, you have to regularly survey the landscape of where you are, and you often need to reflect on what you’ve done and what you want to do next. And the journey of overcoming the challenges you set for yourself is where the fun is.