Information debt

Last week the following happened on Twitter:

In case you don’t know what technical debt is, you might want to read this first: (It’s the oldest source I could find of the technical-debt-as-Tetris analogy, by the way. If you know of an older one, please leave a comment.)
Information debt is the same but different. It’s not about the code or the implementation; it’s about information, communication, models, documents and visualizations. Information debt is information being less available than you’d like it to be. Or, relating it to the VIPT model: information debt is your processes and/or tools making valuable information less available than you want it to be.

And it turns out information debt is everywhere:
– a huge document (e.g. test plan) that’s very hard to navigate
– test cases that don’t communicate why these specific test cases have been chosen
– an outdated document that’s not being updated because everyone knows what it should say
– only one person knows and he’s on vacation
– a quickly sent out mail just about that one item leading to confusion and discussion
– sloppy bug reports
– a meeting with a bunch of people, none of whom has enough of the relevant information
– not having the proper application to view a document (I’m looking at you MS Project)
– the models or visualizations of the system exist in people’s heads only
– a daily stand-up about tasks and resources instead of sharing information
– the only reliable list of outgoing interfaces is in the scheduler of the application

And it’s caused by a number of things:
– lack of skill
– taking shortcuts because of time pressure
– whiteboards or similar not being available
– not giving it any thought
– the assumption that everyone knows already
– not being aware of the debt
– following processes and templates instead of being purpose-focused
– not taking the time to do it properly
– having to get up from your desk to talk to someone
– thinking of something as trivial or obvious
– not knowing any better

Now these are two very nice and very incomplete lists, but what’s really going on here?

First of all, good information management is hard, very hard. I recently participated in the BBST Bug Advocacy course and even then, even when you can take all the time you want to focus on adding the perfect comment to a bug report, you’ll get feedback containing plenty of things to improve. And that’s in a “You had one job”-scenario. In a real working situation it will be so much more difficult – more pressure and more constraints.

Which brings me to my second point. There’s a reason the agile manifesto says: working software over comprehensive documentation. Our primary product as a software development team is that software. Good information management is very important to deliver that software, but it has a supportive role. It’s not on center stage. (Which, as with technical debt, makes it so easy to take some shortcuts now and worry about the consequences later.)
However, as a tester information *is* your main product. You don’t design the software, you don’t build the software, you investigate the software(1) to evaluate it. And not only is information your main product, it’s also one of your main tools. You want to know what to test, how to test, what’s been tested, the results of those tests, how to recognize a problem, … This makes information debt a very big deal: it touches the core of what we do as testers. Although information debt has team impact, it’s the testers that it hits the hardest.

So what are we to do? To start off I can think of three things:
– become more aware of information debt, recognize it, identify it, name it
– when there are trade-offs to be made, make the information debt explicit
– become better at making information available: improve your writing, reading, talking, drawing, …

Finally, I feel like I should say something about how all of this relates to tacit and explicit knowledge. I do think some interesting points can be made about information debt and tacit knowledge, but I need to give it some more thought. So I will leave that for a later blog post.

— — —

(1) Or rather: you investigate the *product* to evaluate it, but that sounded a bit confusing with a different referent of ‘product’ in the previous sentence.

VIPT – how to teach software testing

In this final post on VIPT (Value-Information-Processes-Tools) it’s time to take a look at teaching software testing. My previous posts on VIPT can be found here, here and here.

A typical traditional software testing course (at least in the way I have taught them) has three elements: theory, stories and exercises.
The first element is all about definitions (testing, test cases, defects, etc.), process descriptions and testing techniques (mostly test design). So basically what happens is that students get a brief introduction about testing in general and then we move on to the main part: teaching a specific testing method.
The second element of the course is the stories. These are mostly stories about how testing in the real world does not work as described in the theory. At best they are stories containing all four elements of VIPT. Most of the time, however, they are just real-world examples of a certain definition or technique.
Finally, there are exercises. As with the techniques, these are mostly about test design. Unfortunately they are also very linear. There is only one correct answer and often only one correct way to get to that answer. So the main gist seems to be: “I taught you a trick, now show me you can perform the trick.” But shouldn’t learning about testing be more than learning to jump through a hoop on command?

How does this relate to the VIPT-model? Does this way of teaching cover all four elements of the model? Or does it focus on some elements at the cost of others?
I don’t think it’s hard to see that this way of teaching testing is heavily tool-focused. Its main purpose is to transfer knowledge about a specific testing method. In the VIPT model this kind of knowledge is a tool. (During the course this knowledge is of course information, but in testing, knowledge about a testing methodology is a tool. No one really cares how much you know about testing; they do care about what you can tell them about the quality of the product.)
At least during the exercises the student also experiences a process, but as I said earlier: the value of these exercises lies in demonstrating you can apply a certain test design technique when asked to do so. That’s actually quite bizarre. It suggests that the hard part about testing is creating test scripts. Executing them correctly and making the problem-or-no-problem decision based on the result is apparently such a banal task that any idiot can do it, no training required.

To some degree this focus on tools makes sense: teaching a skill set has very much to do with giving people a good set of tools. However, that’s not all there is to it. If I give you a set of carpenter’s tools and teach you how to use them, you’ll still have a hard time building a decent cabinet, because you still have to figure out how to combine your new skills into a series of actions that will result in one. You don’t just need to know how to use your tools, you also need to know when to apply which one and with what purpose. You need to know in what circumstances a tool can create a certain value and in what circumstances it can’t.
In testing, an important part of this is what Michael Bolton and James Bach call test framing: “the capacity to follow and express, at any time, a direct line of logic that connects the mission to the tests.” Or put differently, being able to answer the question of how the tool you are using produces what value.
Another aspect is being able to express what value testing in a general sense can provide – without simply reciting the definition of testing you memorized. And preferably this explanation touches on the typical problems testers face in their line of work. For instance, recently I was asked: “To you, how much quality is enough?” I looked puzzled by the question and got the following clarification: “Would you say that only 100% can be enough? Or is 80% ok too?” So I answered that it’s not up to me to decide how much quality is enough. That’s up to the stakeholders or their representative, the project manager. And if the decision is made that 60% quality is good enough(1), I might inform them there is some pretty important stuff in the missing 40%, but still it’s not my decision. The only decision I get to make is to quit the job or to stay, I guess. However, the thing that amazes me the most about this question is that most of the time this person apparently does get an actual percentage as a reply.(2)
It’s just one of many examples that show we are failing at teaching testers to think about testing. Plenty of testers seem quite good at applying the framework they were taught in a thoughtful and critical manner, but they never seem to go beyond that and apply their intelligence to questioning that framework itself, and more importantly, to wondering whether there is a different framework in which they could provide more value.

So how do we fix this?
For starters, as James Bach keeps saying, let’s have the students test an actual piece of software! Then coach them along the way: tell them what they did right and show them where they went wrong. At least then the students will have experienced how the process of applying a set of testing tools can result in information about the quality of a product. Let them move through VIPT in relation to software testing.
But why stop there? Don’t just tell students testing is a sampling problem, let them experience it. Have them make decisions about how to test and then let them run into an unknown unknown. Have their tools fail on them: hand them a set of tools that poorly fits the situation and see if they figure that out and create their own tools.
So instead of giving new testers a bag full of tools and a vague notion of what to do with it, have them go through all four layers of VIPT.
The reason is very simple: even as beginning testers, they need to own their craft – if only at a very basic level.

— — —

(1) But how do you express quality in a single percentage anyhow? Perhaps ‘fairly good’ would make more sense, as it doesn’t imply a calculation of some sort to get to the reply.
(2) On the other hand, I could argue that the correct answer is 80%. Anything lower and you are too easy-going, anything higher and you’re too much of a perfectionist. Of course, if that’s the intention of the question, one might as well ask: “What’s your stance on quality? Anything goes, good enough is good enough, or only perfection counts?”

p.s. One other thing I would like to see added to testing education is the history of software testing. People have little appreciation of how things were before they learned about it. For some testers, Agile has always existed in the same way as for some people there always have been mobile phones and the internet.
For an excellent start on the history of testing, see the overview by Joris Meerts and Dorothy Graham.

VIPT – bottom-up or top-down

In this second post on VIPT I want to talk about bottom-up vs. top-down. The original plan for this post was to talk about the distance between tools and value, but in the past few days I figured out that bottom-up vs. top-down is a better approach.
If you don’t know what VIPT is, please read this previous post. Don’t worry, I’ll wait.

For me as a context-driven tester the VIPT model is very much a top-down thing. You analyze the context, find out what value you should/can deliver and then you proceed to information, processes and tools. Of course, that’s easier said than done. Going through this for everything you do requires a lot of time and effort. So most of the time you do a quick analysis of the context, decide that it sufficiently resembles a context you encountered earlier, and use the same tools you used then. Most of the time that’s ok – as long as you stay alert to any signs that you misread the context.(1)

Now what if you have a tool and you used it in several contexts with a sufficient amount of success? You might be tempted to conclude the tool will work in all contexts. It has worked so far, hasn’t it? Congratulations, you just created a best practice! This also means in regard to VIPT you have gone from a top-down approach to a bottom-up one. Instead of taking value as a starting point, you start with a tool, on the assumption it will deliver the same value as it did in previous cases. Chances are small you will notice any indications your tool does not fit the context until it’s too late.

At best, you will see signs of the gap between tool and context and attempt to fix it. However, because of the bottom-up approach, you will not be looking at how to optimize the value you generate, you will focus on optimizing your tool. And this doesn’t just apply to best practices. You run this risk anytime you approach VIPT bottom-up instead of top-down.
Let’s say you have a tool that results in a sufficient amount of success and you decide to improve the tool. Say hello to the problem of local optimization.
Imagine that we make a map of all testing tools and express the value each generates (in your context) by raising the surface of the map in proportion to that value. Basically, you get a mountainous region of testing tools, with the highest mountain being the best tool. You are already using a tool, so you are on one of these mountains. If the only thing you do is climb that same mountain, i.e. optimize your tool, you will never be able to rise beyond the top of the mountain you’re currently on. And what if that mountain is not high enough?
The only solution to this is to also descend once in a while. Explore the region and get to know the different mountains. It will make it easier to recognize when your current tool just won’t be enough for the job at hand.
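The mountain metaphor above is the classic local-optimization trap, and it can be sketched in a few lines of code. The “landscape” of tool values below is entirely made up for illustration: greedy improvement of your current tool stalls at a local peak, while surveying the whole region finds the better one.

```python
# Sketch of the local-optimization trap described above.
# The landscape of (hypothetical) tool values is invented for the example.

def value(x: int) -> int:
    """Value the tool at position x delivers in your context."""
    landscape = [1, 3, 5, 4, 2, 6, 9, 7]  # two peaks: 5 at x=2, 9 at x=6
    return landscape[x] if 0 <= x < len(landscape) else 0

def optimize_current_tool(x: int) -> int:
    """Greedy improvement: move to a better neighbour, never descend."""
    while True:
        best = max([x - 1, x + 1], key=value)
        if value(best) <= value(x):
            return x  # local optimum: no neighbour is better
        x = best

def explore_region(positions: range) -> int:
    """'Descend once in a while': survey all the mountains first."""
    return max(positions, key=value)

print(optimize_current_tool(1))  # stuck on the lower peak at x=2 (value 5)
print(explore_region(range(8)))  # exploring finds x=6 (value 9)
```

The point of the sketch is only the shape of the problem: pure tool optimization can never take you higher than the mountain you happen to be standing on.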

Another problem with the bottom-up approach is that it makes it tempting to use a tool to force a certain process, assuming this will ensure you get the information and value you want.
A great example of this are defect management tools. More specifically, defect management tools that assign different roles with different privileges to different users. For instance, only developers can change the status of a defect from ‘in analysis’ to ‘fixed’ to ‘ready for test’. This means first of all that going straight from ‘in analysis’ to ‘ready for test’ is not possible. You have to click through ‘fixed’ first. Secondly, this cannot be done by a tester or a designer; it needs to be done by a developer. Luckily, most of the time these tools are accompanied by a process flow diagram. Otherwise you’d first have to map out this maze of privileges before you could actually use the tool.
Now it’s not that hard to imagine why you would configure your tool like that. You want to keep proper track of defects. So you want to have correct information. This means that certain processes need to be followed. So you get a tool to facilitate this. And then you realize that people make mistakes or take shortcuts to make life easier for themselves. So you configure the tool not only to facilitate the processes you want, but also to enforce them. This does not work. Like the internet does with censorship, people see it as damage and route around it. Your plan backfires and instead of good, you get awful information in your bug tracking tool.
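The kind of role-gated workflow described here is, under the hood, just a transition table keyed by status and role. A minimal sketch, with invented statuses, roles and transitions (real tools differ, of course):

```python
# Sketch of a role-gated defect workflow like the one described above.
# Statuses, roles and transitions are invented for illustration.

ALLOWED = {
    # (from_status, to_status): the role that may perform the transition
    ("new", "in analysis"): "tester",
    ("in analysis", "fixed"): "developer",
    ("fixed", "ready for test"): "developer",
    ("ready for test", "closed"): "tester",
}

def move(status: str, target: str, role: str) -> str:
    """Attempt a status transition; the tool enforces both path and role."""
    allowed_role = ALLOWED.get((status, target))
    if allowed_role is None:
        # e.g. 'in analysis' -> 'ready for test' directly: not possible,
        # you have to click through 'fixed' first
        raise ValueError(f"no transition {status!r} -> {target!r}")
    if role != allowed_role:
        raise PermissionError(f"only a {allowed_role} may do this")
    return target

status = "in analysis"
status = move(status, "fixed", "developer")           # ok
status = move(status, "ready for test", "developer")  # ok
print(status)  # ready for test
```

Every extra key in that table is another rule people will route around the moment it gets in their way, which is exactly the backfiring the paragraph above describes.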

I was supposed to say a few things about teaching testing in relation to VIPT, but I’m saving that for the next post.

— — —

(1) This is a really good moment to go and read Iain McCowatt’s latest blog post “Doctor, Doctor”, by the way. It’s about how testers make decisions.