VIPT – how to teach software testing

In this final post on VIPT (Value-Information-Processes-Tools) it’s time to take a look at teaching software testing. My previous posts on VIPT can be found here, here and here.

A typical traditional software testing course (at least in the way I have taught them) has three elements: theory, stories and exercises.
The first element is all about definitions (testing, test cases, defects, etc.), process descriptions and testing techniques (mostly test design). So basically the students get a brief introduction to testing in general, and then we move on to the main part: teaching a specific testing method.
The second element of the course is the stories. These are mostly stories about how testing in the real world does not work as described in the theory. At best they are stories containing all four elements of VIPT. Most of the time, however, they are just real-world examples of a certain definition or technique.
Finally, there are exercises. As with the techniques, these are mostly about test design. Unfortunately they are also very linear. There is only one correct answer and often only one correct way to get to that answer. So the main gist seems to be: “I taught you a trick, now show me you can perform the trick.” But shouldn’t learning about testing be more than learning to jump through a hoop on command?
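To make the "one trick, one correct answer" kind of exercise concrete, here is a minimal sketch in Python of a classic test-design drill: boundary value analysis. The `valid_age` spec and its boundaries are made up for illustration; a course would hand students a rule like "ages 18 through 65 are valid" and expect exactly these four test cases back.

```python
def valid_age(age: int) -> bool:
    """Hypothetical spec: an applicant's age is valid if 18 <= age <= 65."""
    return 18 <= age <= 65

def boundary_cases(low: int, high: int) -> list[int]:
    """Classic two-value boundary analysis: each boundary plus the
    value just outside it. This is the 'trick' the exercise drills."""
    return [low - 1, low, high, high + 1]

# The one correct answer the exercise expects: 17, 18, 65, 66.
cases = boundary_cases(18, 65)
results = {age: valid_age(age) for age in cases}
```

The recipe is mechanical by design, which is exactly the point of the criticism above: applying it proves you can perform the trick, not that you can test.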

How does this relate to the VIPT-model? Does this way of teaching cover all four elements of the model? Or does it focus on some elements at the cost of others?
I don’t think it’s hard to see that this way of teaching testing is heavily tool-focused. Its main purpose is to transfer knowledge about a specific testing method. In the VIPT-model this kind of knowledge is a tool. (During the course this knowledge is of course information, but in testing itself, knowledge about a testing methodology is a tool. No one really cares how much you know about testing; they do care about what you can tell them about the quality of the product.)
At least during the exercises the student also experiences a process, but as I said earlier: the value of these exercises lies in demonstrating you can apply a certain test design technique when asked to do so. That’s actually quite bizarre. It suggests that the hard part of testing is creating test scripts. Executing them correctly and making the problem-or-no-problem decision based on the result is apparently such a banal task that any idiot can do it, no training required.

To some degree this focus on tools makes sense: teaching a skill set has very much to do with giving people a good set of tools. However, that’s not all there is to it. If I give you a set of carpentry tools and teach you how to use them, you’ll still have a hard time building a decent cabinet, because you still have to figure out how to combine your new skills into a series of actions that will result in one. You don’t just need to know how to use your tools, you also need to know when to apply which one and with what purpose. You need to know in what circumstances a tool can create a certain value and in what circumstances it can’t.
In testing, an important part of this is, as Michael Bolton and James Bach call it, test framing: “the capacity to follow and express, at any time, a direct line of logic that connects the mission to the tests.” Or put differently, being able to answer the question of how the tool you are using produces what value.
Another aspect is being able to express what value testing in a general sense can provide – without simply reciting the definition of testing you memorized. And preferably this explanation touches on the typical problems testers face in their line of work. For instance, recently I was asked: “To you, how much quality is enough?” I looked puzzled by the question and got the following clarification: “Would you say that only 100% can be enough? Or is 80% ok too?” So I answered that it’s not up to me to decide how much quality is enough. That’s up to the stakeholders or their representative, the project manager. And if the decision is made that 60% quality is good enough(1), I might inform them there is some pretty important stuff in the missing 40%, but it’s still not my decision. The only decision I get to make is to quit the job or to stay, I guess. However, the thing that amazes me most about this question is that, most of the time, this person apparently does get an actual percentage as a reply.(2)
It’s just one of many examples showing that we are failing at teaching testers to think about testing. Plenty of testers seem quite good at applying the framework they were taught in a thoughtful and critical manner, but they never seem to go beyond that and apply their intelligence to questioning that framework itself. Or, more importantly, to wondering whether there is a different framework in which they could provide more value.

So how do we fix this?
For starters, as James Bach keeps saying, let’s have the students test an actual piece of software! Then coach them along the way: tell them what they did right and show them where they went wrong. At least then the students will have experienced how the process of applying a set of testing tools can result in information about the quality of a product. Let them move through VIPT in relation to software testing.
But why stop there? Don’t just tell students testing is a sampling problem, let them experience it. Have them make decisions about how to test and then let them run into an unknown unknown. Have their tools fail on them: hand them a set of tools that poorly fits the situation and see if they figure that out and create their own tools.
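The sampling point can also be driven home with a bit of arithmetic. A small sketch (the throughput figure is an illustrative assumption, not a benchmark): even a trivial function taking just two 32-bit integers has an input space no one can ever cover, so every test suite is a sample.

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_test_exhaustively(input_bits: int, tests_per_second: float) -> float:
    """Years needed to run every possible input exactly once."""
    return (2 ** input_bits) / tests_per_second / SECONDS_PER_YEAR

# Two 32-bit arguments = 64 bits of input space.
# Assume an (optimistic) one billion test executions per second.
years = years_to_test_exhaustively(64, 1e9)
print(f"{years:.0f} years")  # roughly 585 years
```

And that is only the input space of one tiny function, before considering state, timing or configuration, which is why choosing *which* samples to run is where the actual testing skill lies.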
So instead of giving new testers a bag full of tools and a vague notion of what to do with them, have them go through all four layers of VIPT.
The reason is very simple: even as beginning testers, they need to own their craft – if only at a very basic level.

— — —

(1) But how do you express quality in a single percentage anyhow? Perhaps ‘fairly good’ would make more sense, as it doesn’t imply a calculation of some sort behind the reply.
(2) On the other hand, I could argue that the correct answer is 80%. Anything lower and you are too easy-going, anything higher and you’re too much of a perfectionist. Of course, if that’s the intention of the question, one might as well ask: “What’s your stance on quality? Anything goes, good enough is good enough, or only perfection counts?”

p.s. One other thing I would like to see added to testing education is the history of software testing. People have little appreciation of how things were before they learned about them. For some testers, Agile has always existed, just as for some people there have always been mobile phones and the internet.
For an excellent start on the history of testing, see the work by Joris Meerts and Dorothy Graham.

The Seven Basic Principles of the Context-Driven School – part two

After the introductory post (to be found here) it’s time to take a closer look at each of the basic principles. In the past weeks I found out that it’s quite possible to take any one of these principles as a starting point for several different trains of thought. More importantly, I discovered a story(1) in the principles: the first five principles describe ways in which software testing is intellectually challenging, as stated by principle six. And principle seven then wraps it all up.
So below you can find some of the thoughts I had on the principles and the story I discovered.

1. The value of any practice depends on its context.
To get a better understanding of this principle I started thinking: what if the value of a practice did not depend on context? What else could it depend on?
One possibility is that practices have an intrinsic value. If that’s the case, how can we compare two practices to see which one has the most intrinsic value? Should we look at elegance, simplicity, completeness, … (2)? The problem is that those are not the usual criteria we judge a practice on. Since a practice is something that requires application, it does not stand on its own for us to enjoy its intrinsic value. So the value of a practice depending on intrinsic value makes little sense. Although I do think it’s something people take into account: a good method also needs a good story. It needs to be convincing.
Another possibility is that practices do not have any value: it doesn’t really matter what you do. Or that there is no way for us to discern the differences in value – even in hindsight. As a consequence it does not matter what you do, because you will never know if you could have done any better. If that were the case, software testing would be the easiest job in the world, and the most ridiculous one. Of course, it is often difficult to choose between different practices, but the choices you make do matter.
So these two alternatives are not complete nonsense, but in the end they do not (or at least should not) determine the value of a practice. The context does. But are all software testing contexts sufficiently alike that we only need one practice? To answer this we need to look at the second principle.

2. There are good practices in context, but there are no best practices.
If there are good practices in context, there are also bad practices. Secondly, there are no best practices – ‘best’ implying context-independence here. That leaves the question: are there better practices? Are some practices better than others? Outside of any context, there cannot be. Otherwise the one practice that is better than all the rest would be the best practice. In context, obviously, some practices are better than others. And to take the point even further: what is a good practice in one context may be a bad practice in a different one. Which is kind of a weird thought if you’re used to thinking in best practices and maturity models.

Another thing I wondered is if there are ways around this principle. I could think of two. Unfortunately neither works.
The first is ‘best’ practices, i.e. best practices applied with some adaptivity: you apply your best practices, and where they don’t work you patch them up by getting creative. The problem with this workaround is that you will always maintain the basic philosophy of your practice. So while in the details your approach may not be one-size-fits-all, the fundamental parts of it will be. You will approach every problem from the same perspective. This makes you context-conscious, but not context-driven.
The other workaround is creating a practice that is abstract enough that it’s context-independent. I don’t think that’s possible. Sure, it is possible to define a set of principles that is abstract enough. Just look at these seven principles. But I don’t see how one could create a practice that is abstract enough to be context-independent, yet concrete enough to have any practical value.

3. People, working together, are the most important part of any project’s context.
What does ‘working together’ mean? Are factory workers at a production line working together? Or does it depend on the degree of communication and interaction between them? On a shared goal, a shared understanding of what they are doing and of what they are trying to accomplish? I think it does. Working at the same production line or being assigned to the same project does not mean you are ‘working together’. So the most important part of a project’s context is the ways in which the people in the project interact with each other. And remember, you are one of those people.
Also, because the value of a practice is determined by context, these interactions are what the value of your testing practice depends upon. This also means that the following parts of a project’s context are less important to the value of your practice: deliverables, processes, the type of product, laws and regulations, the project management method, the planning, etc.

4. Projects unfold over time in ways that are often not predictable.
My first reaction to this one was “Well… duh!” My second reaction was “But what do we do with this knowledge?” Which got me to my third reaction “What does ‘not predictable’ mean?”
Well, we all know that unexpected stuff happens and that’s why we add an error margin to our budget and our planning. We may not be able to predict exactly which tasks are going to eat up our error margin, but we can make a fair guess at how much error margin we need. Often enough that guess is wrong, but luckily the project plan and the project itself are not two separate entities. Changing the plan changes the project and vice-versa. As a result, this kind of unpredictability is manageable to some degree.
Yet the above is not what we should think about to understand this fourth principle. What we should think about is that our model of the project might be wrong. Something that happens and does not fit into your model is truly ‘not predictable’. Which is different from ‘hard to estimate’. (See the idea of black swans and the known unknown vs the unknown unknown, or this set of blog posts by Michael Bolton.) Now why would our model be wrong? Assuming we’re not clinging to some best practice, in most cases I think it’s simply a case of having several models that fit the observations of the project so far. So there is just no way of deciding which of these models is the correct one. Hopefully, as the project progresses, we stay alert and will be able to discard the models that no longer fit our observations.

5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
The easiest interpretation here is saying this principle is about verification (Did we build the product right?) versus validation (Did we build the right product?). As a consequence, delivering a product that meets all documented requirements on time and on budget does not mean it’s a good product. One could argue that it was a well-managed project that produced this product, but does that really matter if the product is no good? Probably not.

A more difficult question is who gets to define the problem. One way to look at it is that everyone with access to information about the problem defines the problem by building a mental model of it. However, none of these models is the definitive model. So another way to look at it is that nobody gets to define the problem. And that is (at least to me) the fun part of testing: you are being challenged to build two sets of models, one about the problem and one about the solution; comparing those with each other is what we call ‘testing’.

6. Good software testing is a challenging intellectual process.
The main implication of this principle is that good software testing is not defined by following a method, going through the test process step by pre-defined step. It is defined by exploration, investigation and learning.
And as I said above, one way to look at the principles is to see the first five principles as ways in which software testing is intellectually challenging:
1. Understanding the context of a project.
2. Deciding which practice to use in a specific context.
3. Interacting successfully with people.
4. Building and evaluating models of the project.
5. Evaluating if the product solves the problem.
Those are the five skills you need to be a good software tester.

7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
This last principle wraps it all up, referring to the other principles: judgment and skill (principle 6), cooperatively (principle 3), right things (principles 1 and 2), right times (principle 4), to effectively test our products (principle 5). And last but not least, ‘throughout the entire project’ implies there is no testing phase.

That concludes part two. In part three I will use these principles to compare the Context-driven school with the other schools of testing.
— — —
(1) There’s a reason I say ‘story’. Stories are great ways to make sense of things, but they can also be deceptive. People like stories, but not everything can be ‘storified’ without losing important bits.
(2) I know, one can argue none of those are actual indicators of intrinsic value. Elegant/simple/complete to whom?