The Blasphemy of Recall Prevention & Paradigm Shifts
Recently I was explaining how we could help medical device companies prevent a recall, and my happy listener balked.
She couldn’t accept what she heard.
I told her that – essentially – by using our deductive reasoning process for targeted negative testing, we can help R&D/QA teams reveal blind spots in their IV&V process, expose the critical defects hiding there, and dramatically increase confidence in their product releases. The result is recall prevention.
“Recall prevention! You can’t say that – it’s impossible!”
So, it got me thinking…
When we get to the point where we refuse to acknowledge something – simply because it flies in the face of our beliefs – it’s usually because a paradigm shift is taking place…and we’re not ready to accept it.
I completely understood where she was coming from, because I went through the same process myself.
When I first experienced this process years ago (on a project led by KPMG), it totally blew me away…
The elegance of this approach lies in its logic.
Walking through an entire application, identifying all the high-risk areas, then determining which functionality required scenario-based (targeted) negative testing…it was like a blindfold had been removed.
They showed us two important benefits:
1. How to identify all the low-impact areas, so we don’t waste time on them.
2. Where to find the critical defects lurking in the code, which are almost impossible to find with typical requirements-based testing.
My previous experience had been this: you created your test plan (and everything that goes into it), covering all areas of the application/system for product, system and performance testing – based upon requirements, supported by specifications, with risk-management inputs folded in as well.
Everyone reviewed, updated and approved the test plan and strategy – and voila!
You’ve been very thorough and provided complete test coverage in your plan.
Or so I thought.
What I learned from KPMG was this:
When you rely upon requirements/specification/risk-management inputs alone – even with ad hoc negative testing included – you are mostly testing to prove that all functionality and features work.
Let me repeat that statement.
You mostly test to prove the application works…
Yes, you find defects during the process – and after the discovery rate drops to zero over time, you’re eventually finished with testing.
But you still haven’t identified all the high-risk/high-impact areas of the application from the perspective of what could fail.
Therefore you won’t find those lurking critical failures, regardless of how elaborate your test plan becomes.
The answer is quite simple: you find what you’re looking for.
When you test only toward requirements/specifications – which is essentially the ‘happy path’ – you are testing to prove it works.
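To make the contrast concrete, here is a minimal sketch in Python. The `infusion_rate` function and its safety limits are hypothetical, invented purely for illustration – the first assertion mirrors a requirements-based (happy-path) test, while the negative cases probe inputs a requirements-only plan rarely enumerates.

```python
def infusion_rate(dose_mg: float, weight_kg: float, minutes: int) -> float:
    """Compute an infusion rate in mg/kg/min, rejecting unsafe inputs.

    Hypothetical example function; the checks below are illustrative,
    not a real device's safety logic.
    """
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if dose_mg < 0:
        raise ValueError("dose cannot be negative")
    if minutes <= 0:
        raise ValueError("duration must be positive")
    return dose_mg / weight_kg / minutes


# Happy-path test: proves the feature works for a valid requirement.
assert abs(infusion_rate(70.0, 70.0, 10) - 0.1) < 1e-9

# Targeted negative tests: prove the code fails *safely* for inputs
# that "prove it works" testing tends to skip.
for bad_input in [
    (70.0, 0.0, 10),   # zero weight: division-by-zero risk
    (-5.0, 70.0, 10),  # negative dose
    (70.0, 70.0, 0),   # zero duration
]:
    try:
        infusion_rate(*bad_input)
        raise AssertionError("unsafe input was accepted")
    except ValueError:
        pass  # expected: the unsafe input is rejected
```

Both styles of test pass here, but only the second style would catch a build where one of those guards was missing.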
However, when your team works together to:
- Identify the application’s critical path (the workflows used most by users and connected systems)
- Identify the high-risk areas (those impacted by change)
- Apply scenario-based deductive reasoning to determine the potential issues (what could go wrong)
Now you are looking for “what could fail” versus “what works” – in essence…recall prevention.
After this phase is completed, you prioritize – determining which areas are low-impact and which are high-impact – and now you know where to look for the lurking critical defects.
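The prioritization step above can be sketched in a few lines. Everything here is an assumption for illustration – the area names, the 1–5 usage/impact scales, and the score thresholds are invented, not taken from any real project:

```python
# Hypothetical application areas scored by the team:
# (area name, usage on the critical path 1-5, impact of change 1-5)
areas = [
    ("user login",       5, 2),
    ("dose calculation", 5, 5),
    ("report export",    1, 1),
    ("device pairing",   4, 4),
]

# Simple risk score = usage x impact; highest scores get the
# targeted negative-testing effort first.
ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)

# Illustrative cutoffs: where the lurking defects are most likely,
# and where time is not worth spending.
high_impact = [name for name, usage, impact in ranked if usage * impact >= 12]
low_impact = [name for name, usage, impact in ranked if usage * impact < 4]

print("Test first:", high_impact)
print("Defer or skip:", low_impact)
```

The point is not the arithmetic – any scoring scheme works – but that the team ends up with an explicit, ranked list of where to hunt for failures instead of a flat requirements checklist.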
What you look for – you will find.
The side benefit: you aren’t wasting time on low-impact areas, so the project stays on schedule.
Blasphemy or Paradigm Shift?