the testing curve

my learning curve in software testing

Why I dislike test management

As I am enjoying these short, not very nuanced, not extremely well-thought-out blog posts, here's another one.

Some people seem to think that it makes sense to think of testing as a project within a project, so they apply project management tools and techniques to testing. This simply doesn’t work.
Because what tools and techniques do they use? A plan with milestones no one is ever going to make, because unexpected stuff tends to happen. A budget that is too tight because it's based on that same plan. Entry criteria that are not met, but never mind, we're running out of time so you need to start testing anyhow. And finally exit criteria that we fail to meet as well, but hey, we'll go live anyway, because the software really isn't that bad (or so we hope).
So in the end, a lot of time and effort is spent on producing documents that are of little use in guiding the actual testing effort. The only thing they do is give some people a warm and fuzzy illusion of control.

But why doesn’t this test management thing work? In my opinion it’s quite simple: testing on its own doesn’t really do anything. There is no real product at the end of testing; we only produce information.
Of course, one could argue that the product of software testing is a test report, but that’s just weird. No one cares about your test report, they care about the software, about the product. Or rather (and more inspiring for us software testers): they don’t care about the documents you produce, they care about the service you provide. And that gets lost when you focus on the test project instead of on the software project.

P.S. Something is bugging me about this post, but I can't put my finger on exactly what it is. Ideas, anyone?

Test cases, can’t do ‘m no more

Continuing the style of my previous blog post…

A few days ago I found myself no longer able to think in test cases while testing. Of course, it's not as if I was using test design techniques to generate test cases one day and woke up the next day to find myself unable to do it anymore. But still, about a week ago I figured I had explored enough to write down the test cases I wanted to execute, and found myself staring at a blank page (well, ok, an empty Excel sheet), feeling alienated from what I was planning to do.

So what do I mean when I say "thinking in test cases"? Simply put, you take a piece of functionality, let a test design technique loose on it and there you go: a set of test cases to execute. Combine test design techniques across the different pieces of functionality as required and you're all covered, test strategy-wise. Or that's the idea.
The problem with this approach is that it is based on reductive and deductive reasoning. It assumes that we can transform a description of some piece of software into a set of actions and checks – with nothing important getting lost in that transformation. How is that ever supposed to work? Systems thinking, anyone?
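
To make the contrast concrete, below is a minimal sketch (in Python, purely as an illustration) of what that mindset produces: a hypothetical discount rule, a bit of boundary value analysis, and the resulting fixed table of inputs and expected outputs. The discount function, its boundary at 100 and the expected values are all made up for this example; they are not taken from any real system.

def discount(order_total: float) -> float:
    """Hypothetical rule: 10% discount on orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# The "test cases" that boundary value analysis produces: one row per
# boundary value, each an input paired with its expected output.
test_cases = [
    (99.99, 99.99),    # just below the boundary: no discount
    (100.00, 90.00),   # on the boundary: discount applies
    (100.01, 90.009),  # just above the boundary: discount applies
]

if __name__ == "__main__":
    for order_total, expected in test_cases:
        actual = discount(order_total)
        status = "PASS" if abs(actual - expected) < 1e-6 else "FAIL"
        print(f"{status}: discount({order_total}) -> {actual} (expected {expected})")

The table looks complete for the rule as written, which is exactly the trap: anything the description leaves out never makes it into the table.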

Yet if not test cases, then what? You model, you explore and you investigate. You don't think in test cases; you generate test ideas and work with those. You approach the application as the complex system that it is, with all the different interactions that entails. And yes, during this process you will be using test design techniques. The difference is that they will not give you any guarantee of coverage except in a very trivial way, i.e. that you got the desired coverage for the very specific thing you were testing. That is all.
To answer the question of whether you tested all the important parts of the application, you do not need test design techniques; you need models. More than one; some may overlap and some may be somewhat contradictory. That's ok. If testing weren't such a messy business, it wouldn't be that much fun.