Some thoughts after attending the ‘Getting a Grip on Exploratory Testing’ workshop

About two weeks ago I attended James Lyndsay's 'Getting a Grip on Exploratory Testing' workshop in Amsterdam. So it's about time to write something about it…

Now one of the things I dislike about workshop blog posts is that people will say "It was great! And person X is such a good trainer!" without saying much about the content of the workshop. However, I find myself now writing this post and thinking: I shouldn't post a full summary of the workshop. Not that it would spoil too much for any future attendee: most of the workshop consists of exercises and discussion. But posting a summary of a workshop that James has put a lot of effort into creating just doesn't feel right. So let me just say this: the workshop was great and James is such a good trainer! :-D

Now that’s out of the way, there are a few things from the workshop I’d like to share. Of course, the usual disclaimer applies: these are my thoughts on what was presented during the workshop. Any misrepresentations are my responsibility.

First of all, style. Different people have different styles of exploratory testing. Your style influences how you test, what you find, when you find it, etc. For example, some elements of my own style are:
– I tend to exceed the time box.
– I explore with about three variations before moving on to something else.
– I take a lot of notes.
Before the workshop I thought a style was like an approach: people prefer certain styles/approaches, but if their current one is failing them, they can switch to a different one. Turns out it's not that simple. Style is very much like your personality, and you don't switch those easily either.
That does not mean it isn't worth knowing what your style is like. Your style is an important part of how your mind works, and your mind is probably the most important testing tool you have. As any craftsman can tell you, good work requires an excellent understanding of your tools. So knowing your style is important. Not just for when you're testing by yourself, but also when you decide to team up with someone. Do you want to team up with someone with a similar style, or with someone whose style complements your own?

Secondly, one small but cool thing that came up was the 'weapon of choice' of a tester. (Which reminded me of a scene from a movie I can't remember the title of. Two gentlemen are about to fight a duel. One says: "Choose your weapon: sword or pistol." The other one: "I choose the sword." To which the first one replies: "Fine, then I will take the pistol.") Anyhow, if you were stranded on some desert island and had to do some software testing: which testing tool would you take with you? That tool is your 'weapon of choice'. For me, at the moment, it's Notepad++. I keep my notes and to-do list in there. And I spend a lot of time with XML files, which Notepad++ handles quite nicely with the proper plug-in.

Finally, perhaps the most important thing I learned is a simple model about exploratory vs. scripted testing:
– Scripted testing starts from what is expected. It is focused on discovering value.
– Exploratory testing starts from what is delivered. It is focused on discovering problems.
In some contexts this model may be too simple, but for me it really helps as a starting point to think about how one can fit scripted and exploratory testing into a test strategy.

This model has also helped me to explain exploratory testing to people, especially to people who approach testing in the traditional way, i.e. like TMap and ISTQB do. These people often think of exploratory testing as one of the many testing techniques at their disposal. So exploratory testing is not seen as having a fundamentally different approach from scripted testing. Rather, exploratory testing is something you can do besides e.g. decision table testing or use case testing.
The reason for this (in my opinion) is that these traditional methods reduce testing to showing that what has been delivered meets a certain set of expectations – be it specifications, use cases or requirements. This covers both the ‘verification’ part (Did we build the product right?) and the ‘validation’ part (Did we build the right product?) of testing. If that is all there is to testing, it’s hard to see what’s so great about exploratory testing. It’s what you do when you don’t have enough information to use any of the other (better) techniques.
So if you want to explain exploratory testing to someone with this traditional view on testing, you’ll first have to change their mindset. Show them there is more to testing. And please do that before you start talking about sessions and charters and debriefings and all that other stuff. First show them that there is another purpose to testing: investigating what has been delivered in search of problems (and/or surprises). Make them see the context in which you apply exploratory testing. With that done, it will be so much easier to explain what exploratory testing is.

The irony of scripted testing

A bit over a week ago, James Bach posted on Twitter:
"This video shows a nice simple contrast between heavy scripted testing and exploratory testing" []

So I watched the video, hoping to see something that would make me go ‘Cool!’, but instead I went ‘Hmmm.’

First let me say that this video does get a few things right:
– Exploratory testing can be structured by using charters.
– Exploratory testing allows you to easily change your test approach based on your test results.
– Exploratory testing is very adaptable when confronted with inaccurate or changing requirements.
Yet notice how the above only talks about exploratory testing. That's because for everything the video gets right about exploratory testing, it gets something wrong about scripted testing – or rather, about how scripted testing works in practice.

1. The exploratory test team starts testing earlier than the scripted test team.
At the start, the exploratory team makes a high-level plan and then starts testing. The scripted team takes more time to create a detailed plan, so starts testing later.
Of course, that's only what happens if the software is delivered on the first day the test team goes to work. I know this still does happen, but in my experience that's not how it usually goes. Most of the time, project managers realize that the test team needs time to prepare the tests and that the best time to do this is before the software is delivered. So in that respect this video is not an honest comparison between the two approaches.
What if I turn the dishonest comparison around and add some time to prepare testing? You would see the scripted test team preparing their test cases. And you would see the exploratory test team sitting idle, because there is no software to explore yet. (In a less dishonest comparison, I'm sure the exploratory team would find something useful to do, though.) Finally, when the software is delivered, both teams would start the actual testing at the same time.

2. When the requirements turn out to be incorrect, the scripted team needs to stop and update the plan.
In the video, the incorrect requirement is the bridge, which is not on the left side but on the right side. Unfortunately, before the red team can do anything about it, they all have to gather at the place the bridge was supposed to be.
First of all, if you are going to use a scripted approach, you’re not supposed to start with the script that does detailed testing of the first screen. You’re supposed to start with an intake test. The reason is simple: if there are any big problems, like a bridge that’s not where you expected, you want to find out as early as possible.
Now let's ignore that. Let's assume one of the testers figures out the bridge is not where he expected it to be. He'll tell the other testers and the captain. They discuss how to solve it, and while the captain updates the plan, the testers update the test scripts. (If you have good testers, their scripts will be easy to update.) There really is no need to do it like in the video, i.e. first all gather at the wrong place and only then change the plan.

3. A bug is missed, because it was not in the plan.
Near the end there’s the following dialogue in the scripted team. Captain: “Number two, look to your right! There’s a huge bug right there.” Number two: “No can do, Captain. That’s not in the plan. We must stay on the path.”
With that kind of attitude, I’m amazed Number two got far enough in his career to actually become Number two. I cannot think of any ‘Captain’ that would tolerate this response.
It doesn't matter whether your approach is scripted or exploratory – for all I care, you don't even have anything worthy of the term 'approach'. If someone points out a bug, you do something with it. Log it, fix it, whatever. But don't ignore it. If it does get ignored, fire the guy who hired your testers in the first place.

So there you go, three big misrepresentations of how to do scripted testing. Now I realize I'm being a bit harsh on this video, as it is intended to sell exploratory testing. And one of the tricks it uses to do this is to make the competitor, scripted testing, look bad. On the other hand, what if a potential customer watching this video has the same reaction I did: "That's not how you do scripted testing!" He probably won't be a potential customer for exploratory testing much longer.

To close off, let me present a different way to look at it, because there's something funny going on with scripted testing. I have never seen a scripted testing approach that does not have some exploratory elements in it. To give one example: with a scripted approach, a fair amount of my best testing ideas come to me during the actual testing. When that happens, I act on those ideas and afterwards add the relevant test cases to the scripts ('reverse scripting'?). So basically I deviate from the script to do exploratory testing.
And that is exactly the irony of scripted testing: the way to make a difference as a scripted tester is by doing stuff beyond the scripted testing … stuff like exploratory testing.