the testing curve

my learning curve in software testing

VIPT – bottom-up or top-down

In this second post on VIPT I want to talk about bottom-up vs. top-down. The original plan for this post was to talk about the distance between tools and value, but in the past few days I figured out that bottom-up vs. top-down is a better approach.
If you don’t know what VIPT is, please read this previous post. Don’t worry, I’ll wait.

For me as a context-driven tester the VIPT model is very much a top-down thing. You analyze the context, find out what value you should/can deliver and then you proceed to information, processes and tools. Of course, that’s easier said than done. Going through this for everything you do requires a lot of time and effort. So most of the time you do a quick analysis of the context, decide that it sufficiently resembles a context you encountered earlier, and use the same tools you used then. Most of the time that’s ok – as long as you stay alert to any signs that you misread the context. (1)

Now what if you have a tool and you have used it in several contexts with a sufficient amount of success? You might be tempted to conclude that the tool will work in all contexts. It has worked so far, hasn’t it? Congratulations, you just created a best practice! With regard to VIPT, this also means you have gone from a top-down approach to a bottom-up one. Instead of taking value as a starting point, you start with a tool, on the assumption that it will deliver the same value it did in previous cases. Chances are small that you will notice any indications that your tool does not fit the context until it’s too late.

At best, you will see signs of the gap between tool and context and attempt to fix it. However, because of the bottom-up approach, you will not be looking at how to optimize the value you generate, you will focus on optimizing your tool. And this doesn’t just apply to best practices. You run this risk anytime you approach VIPT bottom-up instead of top-down.
Let’s say you have a tool that yields a sufficient amount of success and you decide to improve it. Say hello to the problem of local optimization.
Imagine that we make a map of all testing tools and express the value each tool generates (in your context) by raising the surface of the map in proportion to that value. Basically, you get a mountainous region of testing tools with the highest mountain being the best tool. You are already using a tool, so you are on one of these mountains. If the only thing you do is climb that same mountain, i.e. optimize your tool, you will never be able to rise beyond the top of the mountain you’re currently on. And what if that mountain is not high enough?
The only solution to this is to also descend once in a while. Explore the region and get to know the different mountains. It will make it easier to recognize when your current tool just won’t be enough for the job at hand.
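To make this local optimization problem concrete, here is a minimal sketch in Python. The ‘value landscape’ and all its numbers are invented for illustration: a greedy hill climber that only ever moves uphill gets stuck on the small peak it happens to start on, while occasionally descending and starting somewhere else (random restarts) finds the higher one.

import random

# Toy value landscape: two peaks, a local one at x=2 (height 3)
# and the global one at x=8 (height 7). All numbers are made up.
def value(x):
    return 3 * max(0, 1 - abs(x - 2) / 2) + 7 * max(0, 1 - abs(x - 8) / 2)

def hill_climb(x, step=0.1, iters=1000):
    # Greedy local optimization: only ever move uphill.
    for _ in range(iters):
        best = max((x - step, x, x + step), key=value)
        if best == x:
            break  # on a peak: neither neighbour is higher
        x = best
    return x

# Only improving the current tool: start near the small peak, get stuck on it.
x = hill_climb(1.0)
print(f"local climb: value {value(x):.1f} at x={x:.1f}")  # ~3.0, the small mountain

# 'Descend once in a while': restart from random points, keep the best result.
random.seed(0)
best = max((hill_climb(random.uniform(0, 10)) for _ in range(20)), key=value)
print(f"with restarts: value {value(best):.1f} at x={best:.1f}")  # ~7.0, the high mountain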

Another problem with the bottom-up approach is that it makes it tempting to use a tool to force a certain process, assuming this will ensure you get the information and value you want.
A great example of this are defect management tools. More specifically, defect management tools that assign different roles with different privileges to different users. For instance, only developers can change the status of a defect from ‘in analysis’ to ‘fixed’ to ‘ready for test’. This means first of all that going straight from ‘in analysis’ to ‘ready for test’ is not possible: you have to click through ‘fixed’ first. Secondly, this cannot be done by a tester or a designer; it needs to be done by a developer. Luckily, most of the time these tools are accompanied by a process flow diagram. Otherwise you’d first have to map out this maze of privileges before you could actually use the tool.
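To make that maze a bit more tangible, here is a minimal sketch of how such a privilege-bound workflow could be encoded. The statuses, roles and transitions are invented for illustration, not taken from any particular tool.

# Hypothetical defect workflow: which roles may move a defect
# from one status to another. All names are made up.
ALLOWED = {
    ("in analysis", "fixed"): {"developer"},
    ("fixed", "ready for test"): {"developer"},
    ("ready for test", "closed"): {"tester"},
    ("ready for test", "reopened"): {"tester"},
}

def change_status(defect, to_status, role):
    transition = (defect["status"], to_status)
    if transition not in ALLOWED:
        # e.g. going straight from 'in analysis' to 'ready for test':
        # you have to click through 'fixed' first
        raise ValueError(f"no transition from {transition[0]!r} to {to_status!r}")
    if role not in ALLOWED[transition]:
        # e.g. a tester trying to mark a defect as 'fixed'
        raise PermissionError(f"a {role} may not make this transition")
    defect["status"] = to_status

defect = {"id": 42, "status": "in analysis"}
change_status(defect, "fixed", "developer")           # ok
change_status(defect, "ready for test", "developer")  # ok, via 'fixed'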
Now it’s not that hard to imagine why you would configure your tool like that. You want to keep proper track of defects, so you want to have correct information. This means that certain processes need to be followed, so you get a tool to facilitate this. And then you realize that people may make mistakes or take shortcuts to make life easier for themselves. So you configure the tool not only to facilitate the processes you want, but also to enforce them. This does not work. As with censorship on the internet, people see it as damage and route around it. Your plan backfires and instead of good information, you get awful information in your bug tracking tool.

I was supposed to say a few things about teaching testing in relation to VIPT, but I’m saving that for the next post.

— — —

(1) This is a really good moment to go and read Iain McCowatt’s latest blog post “Doctor, Doctor”, by the way. It’s about how testers make decisions.

VIPT Intermezzo – Models and the Unix philosophy

Thanks to Neil Thompson’s comments on my previous post, I started thinking about what I want to do with the VIPT model. Do I want to expand and refine it to a grand unified theory of testing? And if not, then what?

After some thinking I realized that with regard to models, I adhere to the Unix philosophy.
In particular, I am thinking of the following quote from Doug McIlroy:
“This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.”

As a result you can do this in Unix:
cat apple.txt | wc | mail -s "The count" nobody@december.com [example taken from here]
By piping the output of the first command to the second and then to the third, you get a small ‘program’ that counts the lines, words, and characters of the apple.txt file and mails the results to nobody@december.com with the subject line “The count”.

Another part of the Unix philosophy, according to Richard P. Gabriel, is “Worse is better.” (As can be read here.) Simplicity is more important than other attributes, including correctness, consistency, and completeness. Basically you want something that’s good enough for you to have a use for it and simple enough for you to understand it. Quite similar to the ultimate toolbox, actually: duct tape and WD-40. If it moves and it shouldn’t, you need duct tape. And if it doesn’t move and it should, you need WD-40. No need to worry about whether to use glue, a nail or a screw, and of what kind: just duct tape.

Translating this to models, we get the following three principles:
– Create models that do one thing and do it well.
– Create models that work together.
– Prefer simplicity in your models over correctness, consistency and completeness.

This certainly won’t get us to a grand unified theory of testing. It also won’t give us any definitive models. What it does give us is a big set of simple models. Sometimes we need only one model; sometimes we need to take a few and make them work together. Other times we may find none of our models are good enough, so we just build a new one and add it to our collection. And since models are always limited and thus fallible, let’s keep them simple, because what we will never achieve is correctness, consistency or completeness.
As such, a model is very much a tool in the VIPT sense. On its own it doesn’t do anything. When it is part of a process, it can provide valuable information – or not.

Yet Another Testing Model: Value – Information – Processes – Tools

During Let’s Test 2012 some ideas clicked in my mind, resulting in yet another testing model:
Value – Information – Processes – Tools.

For me this model really is a culmination of being part of the context-driven testing community. If you have been reading about context-driven testing, I’m sure you’ll be able to spot plenty of ideas I stole from others. ;-) So thank you so much to all of you!
Secondly, I have trouble believing I am the first one to come up with this simple model – although a Google search didn’t turn up anything. (1) So if any of you know of similar models, please leave a comment!

And now to the actual model. Since you can’t have a model without a drawing, here’s a not-so-spectacular one:

[drawing: Value - Information - Processes - Tools]

Value is worth, usefulness.
Information is thoughts, ideas, questions, etc.
Processes are everything that happens: thinking, reading, talking, testing, etc.
Tools are stuff. Pen and paper, requirements document, charters, heuristics cheat sheets, etc. And people are also tools (no pun intended) (2).

Ok, now on to a simple example. Let’s take a look at you reading this blog post.
This blog post is a tool, a communication tool. It’s a specific configuration of pixels on a screen that can be interpreted as a set of letters in the Latin alphabet which can be read as a text in the English language.
Which brings us to the process part: you reading and interpreting it. Without you doing this, this blog post is just a set of pixels. Without a process the tool doesn’t do anything.
This process results in you getting information: your interpretation of what I wrote and your ideas, thoughts and questions based on that interpretation.
And hopefully this information is of value to you in some way.
That’s all there is to it!

Let’s continue with an example that actually has something to do with testing: defect management.
The most typical tool of defect management is the bug tracker. It can be an expensive tool, an open-source tool, a set of sticky notes or a list in your head. The processes involved are quite obvious: creating, reading and updating entries. The information (remember that this information is not what’s in the tool; it’s an interpretation of what’s in the tool) consists of what the defect is about, its status, who is supposed to do something with it, etc. Finally, the value is keeping track of the defects: defects should be fixed and/or registered, not forgotten.
However, that’s not the whole story. Keeping track of defects is of value, but we can also see it as a tool. By keeping track of the defects, we can analyse their current status. This gives us information about how the testing is going, how many issues there are with the product, what kind of issues there are and are not, etc. This information is valuable because it allows us to change our test approach if needed.
So as this example illustrates, it is often possible to build one VIPT pyramid on top of another, with the value of the first becoming a tool of the second.

One of the most important things to note about this model is that only tools can be actual things, objects. Processes are events; they occur. Some processes are observable; some are not. Information exists only in your mind. And so does value. Of course, you can use the tool of speech in a process of communication to share information about why you find this VIPT model valuable. But all I would get from that is the value of my interpretation of what you said. I do not get your information or your value.
Or to put it differently. It’s possible to share a tool: I can give you the requirements document. It’s possible to partly share a process: we can review the requirements document together. It’s possible to share filtered information: you can interpret what I say. It’s possible to share translated value: I can tell you what I find valuable.
So basically, what we care about is value, but all we really ‘have’ are tools. And to get from tools to value, we need to go through processes that generate information. This distance between tools and value can make life as a tester a bit complicated sometimes. This problem is what part two of this post will be about.

Please post any questions or remarks in the comments! I am sure the model and its explanation can use some refining.

— — —

More on VIPT here:
VIPT Intermezzo – Models and the Unix philosophy

VIPT – bottom-up or top-down

VIPT – how to teach software testing

— — —

(1) This may have something to do with me heavily leaning on Kantian epistemology to simplify the model, though. (If you want to learn more about Kantian epistemology, reading Schopenhauer is a good start.)
(2) Of course one should remember Immanuel Kant’s admonition here: always treat people as ends in themselves, never as mere means.