Are you rewarding the right behavior…?

I’m often asked by Software QA Managers why their testing teams are unable to find all the critical defects in the software before it’s released to production.

“Why are high-severity bugs escaping to production…on a continual basis?”

My response: What behavior are you rewarding?

Here’s what I mean.

Years ago, when I was actively leading software test teams, I would ask the testers what their process had been before I arrived.  What was most important to the core project team?

Were they more interested in hitting deadlines or meeting quality requirements?

For example, I was working on a contract for a medical device manufacturer. The product line we were testing was for the European market, which included additional EU alarm requirements that threatened to expand the schedule due to the increased testing scope.

Additionally, due to the complexity of this newly integrated system, targeted negative testing was necessary.

While conducting test planning, I was able to observe how the other side of the business was operating – for US-based products.

They had expanded their test automation tool usage across all their projects, and they had become – over time – highly dependent upon it, to the point that upper management had decreed that all projects would utilize this time-saving tool.

I noticed at team meetings that, if you wanted to be perceived as a software test champ, your test case creation numbers were your calling cards.  I heard a lot of bragging about the high number of automated test cases they were building – and executing – on a daily and weekly basis.

Those numbers were highly touted in (widely viewed) weekly status reports as well.

In essence, their teams were being rewarded for creating large quantities of automated test cases – which were 99% focused on requirements coverage and allowed them to reach happy path testing mecca.

And boy did they love those numbers. The greater the number of generated & executed automated test cases, the more testing they were able to accomplish!

Which meant – by golly – they could use this tool to shrink time to market for All Product Lines!

You see, in an FDA-regulated environment, you need to achieve 100% requirements-based test coverage (with a test matrix included) in order to show that all documented features & functionality have been thoroughly tested on all devices and applications on the project.
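To make that concrete, here’s a minimal sketch of the bookkeeping a test matrix enables, assuming a simple requirement-to-test mapping. The requirement IDs, test names, and the coverage_gaps helper are all hypothetical, not from the actual project; the point is only that every documented requirement has to trace to at least one executed test before you can claim 100% coverage.

```python
# Hypothetical sketch: checking requirements-based coverage against a test matrix.
# Requirement IDs and test names are invented for illustration.

requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

# Traceability matrix: each test case lists the requirement(s) it verifies.
test_matrix = {
    "test_alarm_activates_above_threshold": {"REQ-001"},
    "test_alarm_tone_meets_eu_spec": {"REQ-002", "REQ-003"},
}

def coverage_gaps(reqs, matrix):
    """Return the requirements not traced to any test case."""
    covered = set().union(*matrix.values()) if matrix else set()
    return reqs - covered

print(coverage_gaps(requirements, test_matrix))  # -> {'REQ-004'}
```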

The thing is, if you’re being rewarded for creating vast numbers of automated test cases, then (quite naturally) that’s where you’re going to put all your time.

What the boss wants, the boss gets.

Meanwhile, those of us on the UK Team were adopting a slightly different approach.

We were focused on identifying all the high-risk areas in the products – from the medical devices to the integrated systems, including all the feature changes we were experiencing as the project continued.

We also identified the low-risk areas so we could set them aside and cut testing time.
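As a rough illustration of that triage (the feature areas and the 1–5 scores below are invented for the sketch, not the project’s real risk data), the idea is simply to rank each area by likelihood of failure times impact of an escape, then concentrate test effort above a chosen threshold:

```python
# Hypothetical sketch of risk-based test prioritization.
# Feature areas and 1-5 likelihood/impact scores are invented for illustration.

areas = {
    "alarm handling": (4, 5),            # (likelihood of defect, impact if it escapes)
    "device/system integration": (4, 4),
    "report formatting": (2, 1),
}

RISK_THRESHOLD = 6  # below this, the area gets minimal coverage

ranked = sorted(areas.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)

for name, (likelihood, impact) in ranked:
    score = likelihood * impact
    plan = "deep coverage, incl. targeted negative tests" if score >= RISK_THRESHOLD else "light touch"
    print(f"{name:28} risk={score:2}  {plan}")
```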

Most importantly, we utilized scenario generation sessions in order to create targeted negative test cases…which allowed us to find all the hidden critical defects located throughout the system.
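For a flavor of what a targeted negative test looks like next to its happy path counterpart, here’s a sketch using pytest. The AlarmController class, its threshold, and the invalid readings are hypothetical stand-ins, not the actual device code; the technique is to attack the inputs and failure conditions the requirement never spells out.

```python
import math
import pytest

# Hypothetical alarm controller, written only to illustrate the test style.
class AlarmController:
    ALARM_THRESHOLD_MMHG = 180

    def trigger(self, pressure_mmhg):
        # Reject readings the sensor should never produce.
        if pressure_mmhg is None or math.isnan(pressure_mmhg) or pressure_mmhg < 0:
            raise ValueError("invalid sensor reading")
        return pressure_mmhg > self.ALARM_THRESHOLD_MMHG

# Happy path test: verifies the documented requirement.
def test_alarm_sounds_above_threshold():
    assert AlarmController().trigger(200) is True

# Targeted negative tests: attack the inputs the requirement never mentions.
@pytest.mark.parametrize("bad_reading", [None, -5, float("nan")])
def test_alarm_rejects_invalid_sensor_reading(bad_reading):
    with pytest.raises(ValueError):
        AlarmController().trigger(bad_reading)
```

Tests like the negative ones are what the scenario sessions produced: someone asks “what if the sensor feed drops or goes negative mid-procedure?” and the question becomes a test case.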

I made sure to highlight the team members who created the most TNTs (targeted negative tests) and to applaud those who discovered the most critical defects throughout the project – because that was the desired result.

We applied this approach to product, system and performance testing throughout the entire project.

The end result?

After release – our product line was successfully installed and implemented at multiple sites throughout Europe with zero issues reported.

Let’s compare the UK Product Team’s results with the US Product Team’s.

Over two-plus years on that contract, I observed how the US Product Team was being rewarded for creating thousands of automated test cases in order to provide 100% requirements coverage – quickly.

I also observed how that approach didn’t ensure 100% product quality and reliability.

Here’s what happened.

They performed heroically to meet most of their deadlines, but they experienced stop-ships, recalls, and endless audits because critical defects were still escaping to production – and they couldn’t figure out why.

Here’s what I learned.

When you depend heavily on an automated test tool (because of its ability to consume requirements and generate high numbers of test cases) but almost nothing else, you are rewarding the wrong behavior.

When you insist on meeting deadlines, no matter what – that’s what you get.

The key difference.

Automated test cases are almost always used for Happy Path Testing – which means you’ve verified all the product/system requirements in the application, and that’s great…but it’s incomplete.

Targeted Negative Testing will find more critical defects than any happy path approach ever will – and that’s by design.

But if your team is rewarded for creating vast amounts of automated test cases, they’re not going to spend their time creating targeted negative test cases, and your projects will suffer as a result.

However, happy path testing and targeted negative testing – when performed together – will absolutely help you prove the system works and find all those scary critical defects that can ruin customer relationships, especially on mission-critical projects such as medical devices and hospital systems.

Here’s the biggest benefit.

When both processes are used in tandem, you’ll actually reduce your time to market and increase product reliability – because all the risks are identified and mitigated early…and you’ll avoid the costly rework that can delay projects indefinitely.

When you reward the right behavior – you get the desired results.
