the testing curve

my learning curve in software testing


Why I dislike test management

As I am enjoying these short, not very nuanced, not extremely well-thought-out blog posts, here’s another one.

Some people seem to think that it makes sense to think of testing as a project within a project, so they apply project management tools and techniques to testing. This simply doesn’t work.
Because what tools and techniques do they use? A plan with milestones no one is ever going to make, as unexpected stuff tends to happen. A budget that is too tight, because it’s based on that same plan. Entry criteria that are not met, but never mind, we’re running out of time, so you need to start testing anyhow. And finally, exit criteria that we fail to meet as well, but hey, we’ll go live anyway, because the software really isn’t that bad (or so we hope).
So in the end, a lot of time and effort is spent on producing documents that are of little use in guiding the actual testing effort. The only thing they do is give some people a warm and fuzzy illusion of control.

But why doesn’t this test management thing work? In my opinion it’s quite simple: testing on its own doesn’t really do anything. There is no real product at the end of testing; we only produce information.
Of course, one could argue that the product of software testing is a test report, but that’s just weird. No one cares about your test report, they care about the software, about the product. Or rather (and more inspiring for us software testers): they don’t care about the documents you produce, they care about the service you provide. And that gets lost when you focus on the test project instead of on the software project.

p.s. Something is bugging me about this post, but I can’t put my finger on exactly what it is. Ideas anyone?

Test cases, can’t do ‘m no more

Continuing the style of my previous blog post…

Some days ago I found myself no longer able to think in test cases while testing. Of course, it’s not as if I was using test design techniques to generate test cases one day and woke up the next day to find myself unable to do it anymore. But still, about a week ago I figured I had explored enough to be able to write down the test cases I wanted to execute, and I found myself staring at a blank page (well, ok, an empty Excel sheet), feeling alienated from what I was planning to do.

So what do I mean when I say “thinking in test cases”? Simply put, you take a piece of functionality, let a test design technique loose on it and there you go: a set of test cases to execute. Combine test design techniques over the different pieces of functionality as required and you’re all covered test strategy-wise. Or that’s the idea.
The problem with this approach is that it is based on reductive and deductive reasoning. It believes that we can transform a description of some piece of software into a set of actions and checks – with nothing important getting lost in that transformation. How is that ever supposed to work? Systems thinking, anyone?
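
To make that concrete, here is a minimal sketch of what “thinking in test cases” produces. The age_discount function and its boundaries are invented for illustration; the point is only that a technique like boundary value analysis mechanically turns a description into a fixed list of actions and checks, and nothing outside that list gets looked at.

import pytest

def age_discount(age: int) -> int:
    """Hypothetical rule: 10% discount under 18, 20% from 65 up, else 0%."""
    if age < 18:
        return 10
    if age >= 65:
        return 20
    return 0

# Boundary value analysis applied to the two boundaries (18 and 65):
# each row is one test case -- an action (call with this age) and a check.
@pytest.mark.parametrize("age, expected", [
    (17, 10), (18, 0),   # around the lower boundary
    (64, 0), (65, 20),   # around the upper boundary
])
def test_age_discount_boundaries(age, expected):
    assert age_discount(age) == expected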

Yet if not test cases, then what? You model, you explore and you investigate. You don’t think in test cases; you generate test ideas and work with those. You approach the application as the complex system that it is, with all the different interactions that entails. And yes, during this process you will be using test design techniques. The difference is that they will not give you any guarantee of coverage except in a very trivial way, i.e. that you got the desired coverage for the very specific thing you were testing. That is all.
To answer the question of whether you tested all the important parts of the application, you do not need test design techniques, you need models. More than one; some may overlap and some may be somewhat contradictory. That’s ok. If testing weren’t such a messy business, it wouldn’t be that much fun.

Why your Product Risk Analysis isn’t

Ok, going to try to keep this short and ranty (rantish?).

Typical test advice is to do a Product Risk Analysis (PRA, mind the capitalisation!) and, based on that, you decide what to test and how thoroughly. The most common way to do a PRA is with a workshop. Put some people in a room with a lot of stickies, let them list all the risks they can think of and then have them score them. Et voilà, the Product Risk Analysis is done!

But that doesn’t really make sense, now does it? If someone were to give you an object and ask you “What could possibly go wrong with this?”, what would you do? Gather a bunch of people with some knowledge of the object, yet no actual experience with it, and do a workshop imagining things that could go wrong? That’s not an Analysis (capital A!), that’s a SWAG – a scientific wild-ass guess.

Yet that’s exactly what we do in software testing. After the workshop the PRA is done. It gets (at least!) version 1.0. But if the PRA is done, that is, if we have a good analysis of the product risks, why do we need to test? Well, of course, we didn’t do a proper analysis, we did a SWAG, and we want and need to do better.
It is through testing (and checking), through interacting and experimenting with the product, that we can enhance our PRA from “best we can think of” to “this is what we observed”. By testing we upgrade our risks from SWAG status to three pieces of information: what works, what doesn’t work and what hasn’t been tested yet. Now that information is a lot more useful than what we got from our so-called PRA: “We have a moderate amount of fear it will fail and that would be fairly bad.”
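
In case it helps to picture that upgrade, here is a minimal sketch of it as a data structure. The risks, scores and evidence below are made up; the point is just the shift from a guessed likelihood and impact to an observed status of works, doesn’t work or not tested yet.

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    WORKS = "works"            # observed working
    FAILS = "doesn't work"     # observed failing
    UNTESTED = "not tested"    # no observations yet

@dataclass
class Risk:
    description: str
    likelihood: int            # the workshop guess, 1 (low) to 5 (high)
    impact: int                # ditto
    status: Status = Status.UNTESTED
    evidence: list = field(default_factory=list)

risks = [
    Risk("Payments fail for foreign cards", likelihood=3, impact=5),
    Risk("Report totals wrong after import", likelihood=2, impact=4),
]

# After a test session we replace the guess with an observation.
risks[0].status = Status.FAILS
risks[0].evidence.append("card with non-ASCII holder name rejected")
risks[1].status = Status.WORKS
risks[1].evidence.append("totals correct for the three import files we tried")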

Now quite often the list of stuff that doesn’t work is a bit too long, so we have a fixing phase. And that’s cool. We just keep testing until the person who gets to make the call is happy with the three stories in our PRA (works, doesn’t work, don’t know). And that’s when our Product Risk Analysis really is done, ready and finished: with our final test report.