the testing curve

my learning curve in software testing


DEWT3 experience report

Last weekend the third Dutch Exploratory Workshop in Testing (DEWT3 for short) took place. The ingredients were: a very nice hotel in the woods, lots of talk about testing, beer, whiskey, a small to moderate amount of sleep, stickies and a group of fun and interesting people (You can see them here.)

On Saturday the talks (and thus discussions) were about systems thinking. A few years ago I read Jerry Weinberg’s “An Introduction to General Systems Thinking” and although it was a very interesting read, with respect to applying it to testing I never got further than: software is (part of) a system, so you can apply systems thinking to it. Of course, that’s very much true, but it’s also quite a vague piece of advice.
Enter James Bach, who kicked off DEWT3 with a primer on systems thinking. Systems thinking is just a way of thinking – just like logical thinking, analogical thinking, creative thinking, etc. – in which we approach a situation as being a system. So what’s a system? It’s a set of things in a meaningful interaction with each other.
This definition raises all sorts of questions relevant in systems thinking:
– What’s part of the set and what’s not?
– What do we consider a thing?
– How are we going to name the things?
– What makes an interaction meaningful?
– What kind of interactions are there? Cause and effect? Feedback loops?
– Do these interactions result in a stable system or not? (1)
– …
So James’s talk was great (at least to me) in demystifying the application of systems thinking. Just as with logical thinking, we all do it often enough, the real trick is being (or becoming) really good at it.

The remainder of the day was filled with discussion on the following four talks:
Rik Marselis talked about software development as a system. This brought us to the question: should we stop or should we start testing? The main point was that we do not want testing to happen in a ‘testing phase’ that begins when everyone else thinks they’re done. And such a phase should really be called the fixing phase, by the way.
Ruud Cox presented a stakeholder analysis he did for a project building an intelligent luminaire for parking garages. (A luminaire is like a lamp, only more so.) It became quite obvious that by applying systems thinking he managed to identify quite a few additional stakeholders, such as the people transporting the luminaire, the owner of the garage (different person than the one operating it), the people living across the street from the garage, etc.
Derk-Jan de Grood talked about mapping communication channels and fields of influence in an organisation. This led to a great discussion about the pros and cons of spreading a rumor in an organisation as a kind of test. Evidently, we were able to identify some potential problems, so don’t expect to see it on a list of best practices anytime soon.
James Bach talked about a model he and Anne-Marie Charrett are developing about coaching. Apart from being an interesting model as such, it was a bit shocking to see how a limited interaction (coaching) between just two people results in a system that’s already quite complex. James also showed us several versions of the model, illustrating how difficult it is to identify the ‘things’ in your system and how they interact.
Finally, Markus Gärtner did a short stickies tutorial. This was highly needed, as the intuitive way of taking a sticky off a stack is wrong. It creates a curve in the sticky, which means that when you stick a sticky to something, there’s a distance between the bottom of the sticky and what you stuck it on. Luckily there are not one, not two, but three ways to properly remove a sticky from the stack. And perhaps there are even more, so to encourage further contributions to this most exciting field of research, I will leave the specifics of these correct methods as an exercise to the reader.

In the evening Markus proposed a Lean Beer session. (It’s similar to Lean Coffee, only mental acuity goes down instead of up because of the consumption of beer instead of coffee.) So we had interesting discussions on a variety of topics, such as TMap, model-based testing, the OODA loop and which books to read. I’m still slightly disappointed we never made it to the topic ‘grizzly bears’, though.

On Sunday there were four more talks.
Michael Philips expressed his concern about a tendency in Agile to kill off all human testing and just do automated testing and continuous deployment. One of the reasons for this tendency is the idea that “testers can’t keep up in Agile projects”. Which is an excellent non-systems thinking way to approach it. As James Bach pointed out, creating code is creating risk. If “the testers are not keeping up”, the problem is that your developers are out of control. They are creating more risk than the project as a whole can manage.
Joris Meerts did a talk on how you know you’re a good tester. This led him to point our attention to a very important and often overlooked testing skill: reading. It rarely seems to pop up in lists of testing skills, yet when you are testing a document (i.e. reviewing) it seems to me to be kind of indispensable. Joris also pointed out that it’s hard to find good material on reviewing skills and heuristics. Most of it seems to talk about the format (such as peer review, walkthrough, inspection) and not about what it is that you actually do when you are reviewing a document. I hope to get back to this in a later blog post.
James Bach told the story of a project in which the three other testers were fired and he was not (or rather: he was rehired after being fired). The reason: he had built up credibility within the project. He did not do this by being agreeable, but by doing his job to the best of his abilities, even when this meant arguing with the project manager. Basically he said: you hired me for my expertise, so have faith in my expertise (and by extension your earlier hiring decision) instead of telling me what to do.
Huib Schoots closed the day with a talk on recruiting testers. He stressed that to him the most important thing is personality and enthusiasm. These two things you can’t teach; testing skills and (especially) domain knowledge you can.

And that was the end of the weekend. To close off I’d like to thank all the other attendees. Hope to see you again sometime soon!

— — —

(1) Or as I am reading Nassim Nicholas Taleb’s “Antifragile”: a fragile, robust or antifragile system?

DEWT2 – Becoming a context-driven tester

About a month ago (October 5th – 6th) I was in Driebergen to attend DEWT2, a peer workshop with as theme “Implementing Context-Driven Testing”. As it turns out, implementing context-driven testing is not easy to do. That should not come as a surprise: it requires people to change and that is difficult. Luckily, I’m not a manager wanting to implement context-driven testing, so I can dodge most of that problem.
However, I do like ‘spreading the word’ on context-driven testing, because I would like for there to be more context-driven testers in the Netherlands (and Europe and the world, of course). So to promote context-driven testing I think there are three things I can do: 1) set an example, 2) be available to other people, 3) leave bread crumbs.
Setting an example means testing in a context-driven way and (very important) labelling it as such. You want people to know there’s a reason you do the things you do and that reason has a name: context-driven testing.
Secondly, be available to people. When asked a question, don’t just answer the question. Take your time to engage in a conversation.
Finally, leave bread crumbs to expose other people to context-driven testing. Keep some books on your desk. Put the 37 test ideas from the Little Black Book on Test Design up on a wall. Or a mindmap with the test coverage of your current project. Perhaps no one will ask about it, but perhaps someone will.

Now, I am fully aware these three things are nothing special. They are not the result of a stroke of genius insight on my part, if only because they were very much present in the talks by the speakers (Henrik Ilari Aegerter, Markus Gärtner, Ray Oei, Ruud Cox and Huib Schoots) and in the discussions at DEWT2.
However, I am now wondering whether perhaps there isn’t much more we can do, because (as I said earlier) change is hard. As I have discovered in one of my other pursuits: some people will get it and some people won’t. If you’re, for instance, a factory-school tester, you may become intrigued by the context-driven school or you may not. You may come to prefer the context-driven school or not. You may be able to fully change to a context-driven way of thinking or not. So there are at least three points in time at which a person may ‘fail’ to become an actual context-driven tester.

And even if you are able to make the change, it will take quite some time. Perhaps I’m not at all representative, but to illustrate, here’s how I became a context-driven tester.
My career in testing started in May 2006 with a TMap course and I got the ‘Professional Advanced’ certificate in June 2007. Somewhere in 2008 I started teaching this TMap course to our new employees. So let’s say I had a fair grasp of TMap and thus (unknowingly) of the factory school of testing.
Then, somewhere in the beginning of 2010, I came across the blogs of James Bach and Michael Bolton, so I read those from the first post to the most recent. I also read the Rapid Software Testing course slides. I read about the different schools in software testing. I read “Lessons Learned in Software Testing” and read the site. Did this make me a context-driven tester? No, I was just a guy who had read a lot about it. I liked what I read, but I did not really, actually get it.
In November 2010 I attended the Rapid Software Testing class taught by Michael Bolton in London. This did give me a better grasp of what a context-driven testing approach looked like, but afterwards I still didn’t feel I was a context-driven tester. It still felt somewhat alien to me. So I continued to process everything. I noticed that quite often the first response to a situation that came into my mind was a factory-style one, but if I put a little more thought into it, I could also provide a ‘proper’ context-driven answer. In that respect this was a very strange phase: noticing factory-school thoughts popping up in my mind and then having a different part of my mind reasoning against those thoughts with context-driven arguments.
Fast forward to January 2012, when I began this blog. I even did a few posts on the seven principles of context-driven testing and still I didn’t feel like I was a context-driven tester. In May 2012 I was at the Let’s Test conference in Sweden; that helped. And then finally, in July 2012, I made my first blog post on the VIPT model and felt I had become a context-driven tester – probably because that model is my personal synthesis of context-driven testing. Only since that moment do I feel like context-driven testing is a fundamental part of how I think about testing.

So to summarize I went through these three stages:
1) Being interested and absorbing information;
2) Being able to reason to a context-driven response;
3) Being a context-driven tester.
And it took me about 2.5 years to go through these three stages. To be honest, I don’t think that’s slow. Changing the way you look at testing in a fundamental way takes time. And the longer ago you made that change, the harder it is to remember how you thought before the change and how you made it. Which is part of the reason I am writing this blog post: to document this while I still remember. Because in a few years’ time I will have as hard a time imagining what it’s like to think factory school as I had all those years ago imagining there was a different way to think about testing.