Reflections on my testing manifesto

Earlier this month I published my Manifesto for software testing. This manifesto is my attempt to bring together what I have learned about testing from the context-driven, agile and DevOps communities. Below you can find the manifesto with my reflections on it.

1. Testing is investigating in order to evaluate a product.

This definition is clearly influenced by James Bach’s “questioning a product in order to evaluate it”. I’m not sure at which point I started misremembering his definition as “investigating a product…”, but it works well with a change I did make intentionally: moving “a product” to the second part of the definition. As explained in 6., I believe that in order to evaluate the product, we need to investigate a number of different things, not just the product itself.

2. An evaluation is a judgement about quality – quality being value to persons who matter.

The definition of quality used here is Jerry Weinberg’s. I made one small change: “persons who matter” instead of “a person who matters”. I did this because it makes more sense in the context of the previous point. In 1. I say we want to evaluate the product, i.e. say something about its overall quality, which is made up of its value to the different people who matter. (I had trouble coming up with a product where there is only one person who matters.)
There is a risk in the way I am phrasing this: it might suggest that all persons who matter assign the same value to the same things. That is not the case, and Jerry Weinberg’s definition captures this better.

3. This makes testing a fundamentally human and contextual activity.

With people doing the investigating and the evaluating and the assigning value, there is no escaping humanity in testing. This means we need to account for the nature of human observation and cognition (both their strengths and their limits), for how we assign value and meaning, for the social aspects of software development and for its ethical aspects.

Since being human means being contextual – none of us are all-seeing and all-knowing – the same applies to testing. This is also why, in a draft version of this manifesto, 4. contained the context-driven statement “There are no best practices in testing, only good practices in context.”

4. As such, testing is an exploratory and open-ended activity, requiring continuous evaluation of and experimentation with our practices.

I packed a lot of different influences into this one sentence to expand on 3.
– Testing is a wicked problem.
– Testing deals with the known knowns (Are we sure?), unknown knowns (Can we make these explicit?), known unknowns (Can we find out?) and unknown unknowns (Can we find out what it is we need to find out?).
– Both people and innovation fall (mostly?) in the complex domain of Cynefin, so we need to probe – sense – respond.
– Also, systems thinking!
– Finally, serendipity is an amazing testing skill.
In addition to that, there are the ideas from the agile, modern agile, and lean communities that all reinforce the need for continuous evaluation and experimentation.
There is no fixed recipe; there is only a craft to practice.

Finally, I snuck in an “our” without explaining who this “we” refers to. As a matter of fact, I never clarify this in the manifesto. The reason for this is that the manifesto addresses everyone who performs activities that, according to this manifesto, qualify as testing.

5. As such, testing cannot be automated. We do use a wide variety of tools to support, extend and amplify our testing. We may also delegate some decisions to our tools. However, without a human context, these decisions are meaningless.

With testing being a fundamentally human activity requiring continuous evaluation and experimentation, automating testing is not going to happen. People will need to be involved in some way.

However, without tools, without automation, testing becomes nearly impossible. No notebooks, no mindmaps, no bug trackers, no IDEs, no browser developer tools, no way to interface with an API, no way to look at log files, no Excel, no nothing. Because of this, I think the concept of “extended cognition” is an excellent way to think of tool usage in testing. (One of my older blog posts, “The test case – an epistemological deconstruction”, also ventures into this territory.)

The words “support, extend and amplify” are deliberately chosen. Tools that support us make things easier than doing those things without them. Tools that extend our abilities increase our reach: they allow us to go deeper and further than we could without them. Tools that amplify increase the intensity of what we do: they allow us to do the same thing faster than without them. Often this means we can do a lot more of the same thing.
Finally, we can delegate decisions to our tools. I wrote that with continuous integration and deployment in mind.
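To make that delegation concrete, here is a minimal sketch – my own illustration, not part of the manifesto – of a pipeline step that decides whether to deploy. The pytest suite and the deploy.sh script are assumptions for the example; the rule itself (“only deploy when all tests pass”) remains a human decision, which the tool merely executes.

```python
# Minimal sketch of a delegated decision in a CI/CD pipeline.
# Assumptions: a pytest test suite and a hypothetical deploy.sh script.
import subprocess
import sys

def tests_pass() -> bool:
    """Run the test suite and report whether it succeeded."""
    result = subprocess.run(["pytest", "--quiet"])
    return result.returncode == 0

if __name__ == "__main__":
    if tests_pass():
        print("Delegated decision: all tests pass, deploying.")
        subprocess.run(["./deploy.sh"], check=True)  # hypothetical deploy step
    else:
        print("Delegated decision: tests fail, blocking the deploy.")
        sys.exit(1)
```

Without the humans who chose which tests to run and what a failure should block, the tool’s “decision” would indeed be meaningless.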

6. Anything that can be observed, can be investigated: the product, artifacts, interactions, and tools.

Investigating the product can be split into three different areas, each illustrated in the sketch after this list:
(1) static testing: any investigation that doesn’t require the product to be running;
(2) dynamic testing: any investigation that does require the product to be running;
(3) monitoring: any investigation while the product is in actual use.
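As an illustration – mine, with made-up examples – the three areas could look like this in code:

```python
# Hypothetical sketches of the three areas of investigating a product.
import pathlib

# (1) Static testing: investigating without running the product,
#     e.g. scanning the source code for TODO markers.
def find_todos(path: str) -> list[str]:
    source = pathlib.Path(path).read_text()
    return [line for line in source.splitlines() if "TODO" in line]

# (2) Dynamic testing: investigating by running the product,
#     e.g. executing the code and checking its behaviour.
def add(a: int, b: int) -> int:
    return a + b

assert add(2, 3) == 5  # this only tells us something because the code ran

# (3) Monitoring: investigating while the product is in actual use,
#     e.g. counting error entries in production logs.
def count_errors(log_lines: list[str]) -> int:
    return sum(1 for line in log_lines if "ERROR" in line)
```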

However, as mentioned in the reflection on 1., we should not only investigate the product. Anything related to the product can be a valuable source of information. We can review acceptance criteria and design documents. We can do an analysis of the commit history, e.g. the usage of curse words in commit messages (see the sketch below). We can count the number of questions team members ask each other in the daily scrum. Etc.
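That commit-history example could look something like this – a rough sketch, assuming a git repository and a placeholder word list:

```python
# Rough sketch: tallying curse words in commit messages.
# Run inside a git repository; the word list is a placeholder.
import subprocess
from collections import Counter

CURSE_WORDS = {"damn", "crap", "wtf"}  # placeholder list

def curse_word_counts() -> Counter:
    """Count curse words across all commit messages in the current repo."""
    log = subprocess.run(
        ["git", "log", "--format=%B"],  # %B = raw commit message body
        capture_output=True, text=True, check=True,
    ).stdout
    words = log.lower().split()
    return Counter(word for word in words if word in CURSE_WORDS)

if __name__ == "__main__":
    for word, count in curse_word_counts().most_common():
        print(f"{word}: {count}")
```

A spike in those counts doesn’t prove anything by itself, but it can point to a part of the history worth investigating further.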

Finally, we should not forget that a lot of testing is hidden in other activities. For instance, a developer looking at their screen as they are programming is testing. I could even argue that if they were to close their eyes, they would still be testing: their muscle memory will give them information on how likely it is they hit the intended key on the keyboard.

7. This means that testing is fundamentally interwoven with all activities within a product’s existence: conception, development, operation, and disposal.

As hinted at in the last paragraph of the reflection on 6., testing is happening all the time – in subtle and less subtle ways. Any feedback loop implies testing is happening.

The choice of the four phases conception, development, operation, and disposal is deliberate. With testing being a fundamental part of all phases, the question “When should testing start?” makes no sense. And testing only ends when the software is done, which means when the product no longer exists as a product.

8. And the core question during the product’s lifecycle is: how do we discover what we need to discover in the most effective way?

This cuts through all the questions about processes, procedures, roles, people, tools, artifacts, methodologies, etc. In the end the product of testing is information, so how are you going to get that done?

Manifesto for software testing

1. Testing is investigating in order to evaluate a product.

2. An evaluation is a judgement about quality – quality being value to persons who matter.

3. This makes testing a fundamentally human and contextual activity.

4. As such, testing is an exploratory and open-ended activity, requiring continuous evaluation of and experimentation with our practices.

5. As such, testing cannot be automated. We do use a wide variety of tools to support, extend and amplify our testing. We may also delegate some decisions to our tools. However, without a human context, these decisions are meaningless.

6. Anything that can be observed, can be investigated: the product, artifacts, interactions, and tools.

7. This means that testing is fundamentally interwoven with all activities within a product’s existence: conception, development, operation, and disposal.

8. And the core question during the product’s lifecycle is: how do we discover what we need to discover in the most effective way?

Many thanks to reviewers Ruud Cox, Elizabeth Zagroba, and Jean-Paul Varwijk.

For a version of this manifesto that includes my reflections, see this post.

Information debt

Last week the following happened on Twitter:

In case you don’t know what technical debt is, you might want to read this first: http://techblog.net-a-porter.com/2011/10/agile-tetris/ (It’s the oldest source I could find of the technical debt-tetris analogy, by the way. If you know of an older one, please leave a comment.)
Information debt is the same but different. It’s not about the code or the implementation, but about information, communication, models, documents and visualizations. Information debt is information being less available than you’d like it to be. Or, relating it to the VIPT model: information debt is your processes and/or tools making valuable information less available than you want it to be.

And it turns out information debt is everywhere:
– a huge document (e.g. test plan) that’s very hard to navigate
– test cases that don’t communicate why these specific test cases have been chosen
– an outdated document that’s not being updated because everyone knows what it should say
– only one person knows and he’s on vacation
– a quickly sent out mail just about that one item leading to confusion and discussion
– sloppy bug reports
– a meeting with a bunch of people, none of whom have enough of the relevant information
– not having the proper application to view a document (I’m looking at you, MS Project)
– the models or visualizations of the system exist in people’s heads only
– a daily stand-up about tasks and resources instead of sharing information
– the only reliable list of outgoing interfaces is in the scheduler of the application

And it’s caused by a number of things:
– lack of skill
– taking shortcuts because of time pressure
– whiteboards or similar not being available
– not giving it any thought
– the assumption that everyone knows already
– not being aware of the debt
– following processes and templates instead of being purpose-focused
– not taking the time to do it properly
– having to get up from your desk to talk to someone
– thinking of something as trivial or obvious
– not knowing any better

Now these are two very nice and very incomplete lists, but what’s really going on here?

First of all, good information management is hard, very hard. I recently participated in the BBST Bug Advocacy course, and even there – when you can take all the time you want to focus on adding the perfect comment to a bug report – you’ll get feedback containing plenty of things to improve. And that’s in a “you had one job” scenario. In a real working situation it will be so much more difficult: more pressure and more constraints.

Which brings me to my second point. There’s a reason the agile manifesto says: working software over comprehensive documentation. Our primary product as a software development team is that software. Good information management is very important to deliver that software, but it has a supportive role. It’s not on center stage. (Which, as with technical debt, makes it so easy to take some shortcuts now and worry about the consequences later.)
However, as a tester, information *is* your main product. You don’t design the software, you don’t build the software, you investigate the software(1) to evaluate it. And not only is information your main product, it’s also one of your main tools. You want to know what to test, how to test, what’s been tested, the results of those tests, how to recognize a problem, … This makes information debt a very big deal: it touches the core of what we do as testers. Although information debt has team impact, it hits testers the hardest.

So what are we to do? To start off I can think of three things:
– become more aware of information debt, recognize it, identify it, name it
– when there are trade-offs to be made, make the information debt explicit
– become better at making information available: improve your writing, reading, talking, drawing, …

Finally, I feel like I should say something about how all of this relates to tacit and explicit knowledge. I do think some interesting points can be made about information debt and tacit knowledge, but I need to give it some more thought. So I will leave that for a later blog post.

— — —

(1) Or rather: you investigate the *product* to evaluate it, but that sounded a bit confusing with a different referent of ‘product’ in the previous sentence.