Test strategy primer

I originally wrote this primer to share with people at work. The first section is intended to be generally applicable; the second not necessarily so. The main influences on this primer are:
– Rapid Software Testing’s Heuristic Test Strategy Model
– James Lyndsay’s Why Exploration has a place in any Strategy
– Rikard Edgren’s Little Black Book on Test Design

This document contains two sections:

  • Why have a test strategy? – what is a test strategy and what’s its purpose
  • Test strategy checklist – checklist for describing your test strategy

Why have a test strategy?

The purpose of a test strategy is to answer the question: “How do we know that what we are building is good enough?” As such, every team has a test strategy, even if it’s only implicitly defined through the actions of the team members.

The value of documenting your test strategy is to create an explicit shared understanding of your strategy among your stakeholders (and that includes you, the team members). So your test strategy document should be a living document, a tool that helps you deliver software better.

A good test strategy is characterized by diverse half-measures: diversity is often more important than depth.

“How do we know that what we are building is good enough?”

“what we are building”
You can look at what we are building from different perspectives:
– a piece of code
– one or more components
– a feature
– a capability
– a product

“good”
A good way to approach this is to split up “good” into different quality attributes such as capability, security and maintainability. Two good checklists with quality attributes are:
– The Test Eye’s Software Quality Characteristics
– The Heuristic Test Strategy Model’s Quality Criteria Categories (page 5)

“good enough”
Something is good enough when it provides sufficient value at low enough risk. Exactly where to draw that line is a decision for the team’s stakeholders.

Evaluating value starts from our expectations, e.g. acceptance criteria. It’s about showing that something works, that it provides the value we want it to provide. Scripted approaches, i.e. tests that are largely or completely defined beforehand, work well in this area – especially when you can define them in code instead of as manual instructions.
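To make the scripted end concrete, here is a minimal sketch of such a check expressed in code (Python, pytest-style). The discount rule and the apply_discount function are made up for the example; the point is only that the expectation is fixed before the test runs and the test merely confirms it.

```python
# Hypothetical example: a scripted check for an acceptance criterion such as
# "orders of 100 EUR or more get a 10% discount". The expectation is fully
# defined beforehand; the test only verifies it.

def apply_discount(order_total: float) -> float:
    """Toy implementation of the discount rule under test."""
    if order_total >= 100:
        return round(order_total * 0.9, 2)
    return order_total


def test_discount_applied_at_threshold():
    assert apply_discount(100.0) == 90.0


def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99
```

Checks like these pay off most when they run automatically on every change (see the pipeline checklist below); exploratory testing complements them by hunting for the problems nobody wrote an expectation for.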

Evaluating risk starts from the thing we are building. It’s about looking for problems: checking that what we’re building is not doing things we don’t want it to do. Exploratory approaches work well in this area, because you don’t know beforehand what exactly you are looking for.

A few important things to note:
– The two approaches (scripted versus exploratory) exist on a continuum. When testing, you move within this continuum.
– Both approaches can be applied to any level or across levels of your stack.
– Both approaches benefit greatly from using different kinds of tools.

“how do we know”
What are the things we do?
– static testing (code does not need to run), examples: code review, code analysis.
– dynamic testing (code needs to run), examples: unit testing, system testing, performance testing.
– monitoring (product is used by customers), examples: log alerts, server monitoring, support tickets dashboard. A minimal log-alert sketch follows below.
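As an illustration of the monitoring bucket, here is a minimal log-alert sketch, again in Python. The log file name, the ERROR marker and the threshold are assumptions for the example; a real setup would typically use a logging or metrics platform, but the principle – the running product telling you how it behaves – is the same.

```python
# Hypothetical monitoring check: scan an application log for ERROR lines and
# flag when they exceed a threshold. Path, marker and threshold are made up.

from pathlib import Path

ERROR_THRESHOLD = 10          # alert when more errors than this appear
LOG_FILE = Path("app.log")    # assumed location of the application log


def count_errors(log_path: Path) -> int:
    """Count lines that look like error entries in the given log file."""
    if not log_path.exists():
        return 0
    with log_path.open(encoding="utf-8", errors="replace") as handle:
        return sum(1 for line in handle if "ERROR" in line)


if __name__ == "__main__":
    errors = count_errors(LOG_FILE)
    if errors > ERROR_THRESHOLD:
        print(f"ALERT: {errors} errors found in {LOG_FILE}")
    else:
        print(f"OK: {errors} errors found in {LOG_FILE}")
```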

Test strategy checklist

Products & components

  • What product(s) do we contribute to?
  • Why would our company want to have these products?
  • What components of those products are we responsible for?
  • Critical dependencies of our components:
    • What do our components depend on?
    • What has dependencies on our components?

Feature testing

  • What testing is done while developing a feature?
    • which quality characteristics do you focus on?
    • what levels of the stack and/or which components are part of the test?
    • what testing is done as part of a pipeline (see below)?
  • What testing have you decided not to do, i.e. what is out of scope?

Release testing

  • What testing is performed on the release candidate?
    • which quality characteristics do you focus on?
    • what levels of the stack and/or which components are part of the test?
    • what testing is done as part of a pipeline (see below)?
  • To what degree do you repeat your feature testing during release testing?
  • What testing have you decided not to do, i.e. what is out of scope?

CI/CD pipeline(s)

  • What is covered by the pipeline(s)?
    • code analysis
    • auto-tests
    • deployment
  • When/how are they triggered?
    • schedule
    • commits, merge requests, …
  • Is there a dashboard?
  • What happens if a pipeline goes red?

Testing done by others

  • What testing is done by others?
    • Examples: security testing by a 3rd party, beta-releases.

Monitoring

  • How do you monitor your production environment(s)?
    • Examples: server metrics, logs, user analytics, support tickets.

Impediments

  • What is slowing down your testing?
    • Examples: architecture, design, skills, tools, team capacity.

Test cases, can’t do ‘m no more

Continuing the style of my previous blog post…

Some days ago I found myself no longer able to think in test cases while testing. Of course, it’s not as if I was using test design techniques to generate test cases one day and woke up the next day to find myself unable to do it anymore. But still, about a week ago I figured I had explored enough to be able to write down the test cases I wanted to execute, and found myself staring at a blank page (well ok, an empty Excel sheet), feeling alienated from what I was planning to do.

So what do I mean when I say “thinking in test cases”? Simply put, you take a piece of functionality, let a test design technique loose on it and there you go: a set of test cases to execute. Combine test design techniques across the different pieces of functionality as required and you’re all covered test strategy-wise. Or that’s the idea.
The problem with this approach is that it is based on reductive and deductive reasoning. It assumes that we can transform a description of some piece of software into a set of actions and checks – with nothing important getting lost in that transformation. How is that ever supposed to work? Systems thinking, anyone?

Yet if not test cases, then what? You model, you explore and you investigate. You don’t think in test cases; you generate test ideas and work with those. You approach the application as the complex system that it is, with all the different interactions that entails. And yes, during this process you will be using test design techniques. The difference is that they will not give you any guarantee of coverage except in a very trivial way, i.e. that you got the desired coverage for the very specific thing you were testing. That is all.
To answer the question of whether you tested all the important parts of the application, you do not need test design techniques; you need models. More than one: some may overlap and some may be somewhat contradictory. That’s ok. If testing weren’t such a messy business, it wouldn’t be that much fun.

Some thoughts after attending the ‘Getting a Grip on Exploratory Testing’ workshop

About two weeks ago I attended James Lyndsay’s ‘Getting a Grip on Exploratory Testing’ workshop in Amsterdam. So it’s about time to write something about it…

Now one of the things I dislike about workshop blog posts is that people will say “It was great! And person X is such a good trainer!” without saying much about the content of the workshop. However, I find myself now writing this post and thinking: I shouldn’t post a full summary of the workshop. Not that it would spoil too much for any future attendee: most of the workshop consists of exercises and discussion. But posting a summary of the workshop that James has put a lot of effort into creating just doesn’t feel right. So let me just say this: the workshop was great and James is such a good trainer! :-D

Now that’s out of the way, there are a few things from the workshop I’d like to share. Of course, the usual disclaimer applies: these are my thoughts on what was presented during the workshop. Any misrepresentations are my responsibility.

First of all, style. Different people have different styles of exploratory testing. Your style influences how you test, what you find, when you find it, etc. For example, some elements of my own style are:
– a tendency to exceed the time box.
– exploring with about three variations before moving on to something else.
– taking a lot of notes.
Before the workshop I thought a style was like an approach. People prefer certain styles/approaches, but if their current one is failing them, they can switch to a different one. Turns out it’s not that simple. Style is very much like your personality, and you don’t switch those easily either.
That does not mean it’s not a good thing to know what your style is like. Your style is an important part of how your mind works. Your mind is probably the most important testing tool you have. And as any craftsman can tell you: a good craftsman needs an excellent understanding of his tools. So knowing your style is important. Not just for when you’re testing by yourself, but also when you decide to team up with someone. Do you want to team up with someone with a similar style or with someone whose style complements your own?

Secondly, one small but cool thing that came up was the ‘weapon of choice’ of a tester. (Which reminded me of a scene from a movie I can’t remember the title of. Two gentlemen are about to fight a duel. One says: “Choose your weapon: sword or pistol.” The other one: “I choose the sword.” To which the first one replies: “Fine, then I will take the pistol.”) Anyhow, if you were stranded on some desert island and had to do some software testing: which testing tool would you take with you? That tool is your ‘weapon of choice’. For me at the moment it’s Notepad++. I have my notes and to-do list in there. And I spend a lot of time with XML files, which Notepad++ handles quite nicely with the proper plug-in.

Finally, perhaps the most important thing I learned is a simple model about exploratory vs. scripted testing:
– Scripted testing starts from what is expected. It is focused on discovering value.
– Exploratory testing starts from what is delivered. It is focused on discovering problems.
In some contexts this model may be too simple, but for me it really helps as a starting point to think about how one can fit scripted and exploratory testing into a test strategy.

This model has also helped me to explain exploratory testing to people, especially to people who approach testing in the traditional way, i.e. like TMap and ISTQB do. These people often think of exploratory testing as one of the many testing techniques at their disposal. So exploratory testing is not seen as a fundamentally different approach from scripted testing. Rather, it is something you can do besides e.g. decision table testing or use case testing.
The reason for this (in my opinion) is that these traditional methods reduce testing to showing that what has been delivered meets a certain set of expectations – be it specifications, use cases or requirements. This covers both the ‘verification’ part (Did we build the product right?) and the ‘validation’ part (Did we build the right product?) of testing. If that is all there is to testing, it’s hard to see what’s so great about exploratory testing. It’s what you do when you don’t have enough information to use any of the other (better) techniques.
So if you want to explain exploratory testing to someone with this traditional view on testing, you’ll first have to change their mindset. Show them there is more to testing. And please do that before you start talking about sessions and charters and debriefings and all that other stuff. First show them that there is another purpose to testing: investigating what has been delivered in search of problems (and/or surprises). Make them see the context in which you apply exploratory testing. With that done, it will be so much easier to explain what exploratory testing is.