
QA Coverage Metrics That Actually Matter (and the Ones That Don't)

April 22, 2026 · 7 min read · Nuria Carrasco · Co-founder, SmartRuns

You had 87% test coverage last sprint. How many bugs made it to production?

If you can't answer the second question, the first number is theater.

Most QA teams track coverage percentage because it's easy to produce. The problem is that it doesn't predict anything. Teams with 95% coverage ship production bugs. Teams with 60% coverage have almost none. The number itself is not the point.

The metric everyone tracks and why it lies

Test coverage percentage measures the ratio of test cases to something — usually features, user stories, or lines of code. It doesn't measure whether those tests are any good. It doesn't measure whether they ran. It doesn't measure whether the things they cover are the things that matter.

You can game it trivially. Write 50 shallow tests for your most-documented features and your coverage percentage jumps. Quality unchanged. Risk unchanged.

What it misses:

  • Critical path coverage. 87% coverage means nothing if the checkout flow, the authentication layer, and the payment processing are in the 13%. Those are the flows that end customer relationships.
  • Flaky test rate. Tests that fail intermittently are counted toward your coverage percentage even when nobody trusts them. A suite with 30 flaky tests out of 200 is not a reliable suite — it's one your team has learned to ignore.
  • Test maintenance burden. Old tests that haven't run in 90 days are counted as coverage. They might be for a feature you deprecated. They're not coverage. They're debt.

The metrics that actually matter

Five numbers. Each one tells you something the coverage percentage can't.

1. Critical path coverage

What percentage of your highest-risk user flows have at least one test case? Define your top 20 user flows. Count how many have coverage. That ratio is your real answer — not the overall percentage.
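
Critical path coverage = Flows with at least one test case ÷ Top 20 flows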

2. Execution rate

What percentage of your test cases actually ran this sprint? A suite with 100% coverage but 40% execution rate is not a 100% coverage suite. Execution rate is what separates a test suite from a test library.
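
Execution rate = Test cases executed this sprint ÷ Total test cases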

3. Defect escape rate

Bugs found in production divided by total bugs found in QA and production combined.

Defect escape rate = Production bugs ÷ (QA bugs + Production bugs)

  • Healthy: below 10%. QA catches the overwhelming majority before they reach users.
  • Systemic gap: above 20%. One in five bugs reaches users before QA finds it. That's structural.
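
As a worked example, here's the formula as a small Python helper. The counts are invented; pull your own from your tracker.

```python
def defect_escape_rate(production_bugs: int, qa_bugs: int) -> float:
    """Share of all bugs that reached production before anyone caught them."""
    total = production_bugs + qa_bugs
    return production_bugs / total if total else 0.0

# Example: 6 bugs escaped to production, 42 were caught in QA
rate = defect_escape_rate(production_bugs=6, qa_bugs=42)
print(f"Escape rate: {rate:.1%}")  # 12.5%, between healthy (<10%) and systemic (>20%)
```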

4. Flaky test rate

Tests that fail intermittently without a deterministic cause. Keep this below 5% of your suite. Above that, your suite is unreliable: engineers start ignoring test failures, and a suite nobody trusts doesn't protect anything.
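
One hedged way to make "fails intermittently" measurable: flag any test whose recent run history contains both passes and failures. A minimal sketch, assuming you can export per-test run history (the data shape below is illustrative):

```python
# Illustrative run history: test name -> outcomes of the last five runs
history = {
    "test_checkout_total": ["pass", "pass", "fail", "pass", "fail"],
    "test_login_redirect": ["pass", "pass", "pass", "pass", "pass"],
    "test_refund_webhook": ["fail", "fail", "fail", "fail", "fail"],  # broken, not flaky
}

# Flaky here = mixed outcomes across runs (assumes no code change in between)
flaky = [name for name, runs in history.items() if len(set(runs)) > 1]
flaky_rate = len(flaky) / len(history)

print(f"Flaky tests: {flaky}")
print(f"Flaky rate: {flaky_rate:.0%}")  # above 5% means the suite is unreliable
```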

5. Test age

What percentage of your test cases haven't run in more than 90 days? A suite where 30% of cases are over 90 days old is a suite where 30% of your “coverage” is imaginary.
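
A minimal sketch of the staleness check, assuming your tool can export each case's last-run date (the IDs and dates here are made up):

```python
from datetime import datetime, timedelta

# Illustrative export: test case ID -> last execution date (None = never ran)
last_run = {
    "TC-101": datetime(2026, 4, 1),
    "TC-102": datetime(2025, 12, 10),
    "TC-103": None,
}

cutoff = datetime.now() - timedelta(days=90)
stale = [tc for tc, ran in last_run.items() if ran is None or ran < cutoff]

share = len(stale) / len(last_run)
print(f"Stale cases: {len(stale)}/{len(last_run)} ({share:.0%} of 'coverage' is imaginary)")
```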

How to get these numbers without a custom dashboard

Critical path coverage

Tag your top 20 user flows in your test management tool. Count how many have at least one linked test case. This takes 30 minutes the first time, then it's a filter.
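
If your tool can also export cases to CSV, the count itself is a few lines of Python. The file and column names below are assumptions about your export, not any specific tool's format:

```python
import csv

# Your top 20 flow tags; three shown for brevity
TOP_FLOWS = {"checkout", "authentication", "payment-processing"}

covered = set()
with open("test_cases.csv", newline="") as f:   # assumed export file
    for row in csv.DictReader(f):               # assumed column: flow_tag
        if row["flow_tag"] in TOP_FLOWS:
            covered.add(row["flow_tag"])

print(f"Critical path coverage: {len(covered)}/{len(TOP_FLOWS)} "
      f"({len(covered) / len(TOP_FLOWS):.0%})")
```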

Execution rate

Your test management tool tracks this automatically if you run tests through it — not through a spreadsheet. The spreadsheet records what you did. A proper tool records what you didn't do, which is the number you actually need.

Defect escape rate

Pull from Jira. Filter bugs by reporter type: customer-reported versus QA-reported. This is a 15-minute report once you know the filter. Start tagging bug sources consistently — it takes one sprint to build the habit and the data compounds from there.
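
If you'd rather script the report than click through filters, the Python jira client can pull both counts. The project key and label names below are assumptions about how your team tags bugs:

```python
from jira import JIRA  # pip install jira

client = JIRA(server="https://yourcompany.atlassian.net",
              basic_auth=("you@company.com", "API_TOKEN"))

def count(jql: str) -> int:
    return len(client.search_issues(jql, maxResults=False))

# Assumes bugs get a source label when they're filed
prod = count('project = QA AND issuetype = Bug AND labels = "customer-reported"')
qa = count('project = QA AND issuetype = Bug AND labels = "qa-reported"')

print(f"Defect escape rate: {prod / (prod + qa):.1%}")
```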

The honest conversation to have with your manager

Don't lead with coverage percentage. Lead with this instead:

  • Critical path status. "Here are our 3 highest-risk user flows and their current coverage status." This is the conversation that changes release decisions.
  • Defect escape trend. "Last quarter we had X bugs reach production. This quarter we're at Y. Here's what changed." This is what gets QA taken seriously.
  • Execution reality. "We have 400 test cases. Last sprint we ran 280. Here's why the gap exists and what it would take to close it." This is what gets QA resourced properly.

The question isn't “what's our coverage?” It's “what would break if we shipped today?” One of those questions has an answer. The other just has a number.

A practical starting point: the 20-minute weekly metric review

  1. Critical path execution. Did all P1 test cases run this sprint? If not, what's the gap and is it acceptable to ship with it?
  2. Defect escape rate this sprint. How many bugs did QA catch versus how many did users report? Any increase warrants a look at what changed.
  3. Execution rate. What percentage of the planned test suite actually ran? If it's below 80%, understand why before signing off.
  4. Flaky test count. Any new flaky failures this sprint? Flag them for engineering. Don't let them become background noise.
  5. Test age check. Any cases that haven't run in over 90 days still showing as "covered"? Archive or update them before the next sprint.

The teams that do this consistently stop being surprised by production bugs. The teams that don't keep optimizing their coverage percentage while users find the edge cases first.
