What Code Coverage Really Tells You
Your test suite passes. Your coverage report shows 85%. The dashboard is green. But does that number actually mean your code works correctly?
Code coverage is one of the most visible metrics in frontend testing, yet it’s frequently misunderstood. This article explains what coverage metrics actually measure, why high percentages can mask serious gaps, and how to interpret these numbers in real JavaScript projects.
Key Takeaways
- Code coverage measures which lines executed during tests, not whether the code behaves correctly
- Branch coverage reveals gaps that line coverage misses, especially in conditional logic
- High coverage percentages can create false confidence when tests lack meaningful assertions
- Different coverage providers (Istanbul vs V8) can report different numbers for identical code
- Use coverage as a diagnostic tool to find blind spots, not as a quality score
What Code Coverage Actually Measures
Code coverage tells you one thing: which parts of your source code executed during your test run. That’s it.
It does not tell you whether the code behaved correctly. It doesn’t confirm that your assertions caught bugs. It doesn’t verify that edge cases were handled. Coverage tools simply instrument your code and track what ran.
When you run jest --coverage or enable coverage in Vitest, the tool monitors execution and reports back percentages. Those percentages represent execution, not correctness.
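To make that concrete, here is a rough sketch of what Istanbul-style instrumentation does. The rewritten function below is a simplified illustration, not Istanbul's actual output:

```javascript
// Original function
function greet(name) {
  return 'Hello, ' + name
}

// Roughly what instrumentation adds (simplified): counters that record
// that a statement ran. Nothing here checks whether the result is correct.
const hits = { greetCalled: 0, returnRan: 0 }

function greetInstrumented(name) {
  hits.greetCalled++ // function coverage: greet was invoked
  hits.returnRan++ // statement coverage: the return statement executed
  return 'Hello, ' + name
}
```

The coverage report is built from counters like these, which is why it can only ever describe execution.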
Understanding Test Coverage Metrics
Most JavaScript test coverage tools report several distinct metrics. Each measures something different.
Line coverage tracks whether each line of source code executed. Statement coverage counts executed statements—which differs from lines when multiple statements appear on one line. Function coverage reports whether each function was called at least once.
Branch coverage goes deeper. It tracks whether each path through conditional logic executed. An if/else block has two branches. Branch coverage requires both paths to run.
Here’s where the distinction matters:
```javascript
function getDiscount(user) {
  if (user.isPremium || user.hasPromo) {
    return 0.2
  }
  return 0
}
```
Two tests, one with isPremium set to true and one with both flags false, achieve 100% line coverage: every line executes. But branch coverage reveals the gap. You never tested the case where isPremium is false and hasPromo is true, so the right-hand operand of the || never decided the outcome.
This is why the distinction between branch coverage and line coverage matters. Line coverage can hit 100% while leaving logical paths completely untested.
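Closing that gap takes one test per meaningful path. Here's a sketch in Jest/Vitest style, where the import path and user shapes are assumptions for illustration:

```javascript
import { getDiscount } from './discount' // hypothetical module path

test('premium users get the discount', () => {
  expect(getDiscount({ isPremium: true, hasPromo: false })).toBe(0.2)
})

test('promo users who are not premium get the discount', () => {
  // exercises the right-hand side of the || condition
  expect(getDiscount({ isPremium: false, hasPromo: true })).toBe(0.2)
})

test('everyone else gets no discount', () => {
  expect(getDiscount({ isPremium: false, hasPromo: false })).toBe(0)
})
```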
Why High Coverage Can Be Misleading
A test that executes code without meaningful assertions inflates coverage without catching bugs. Consider:
```javascript
test('processes order', () => {
  processOrder(mockOrder)
  // No assertions
})
```
This test might execute dozens of lines, boosting your coverage percentage. But it verifies nothing. The code could return wrong values, throw errors that get silently swallowed, or corrupt state, and this test would still pass.
Coverage tools can’t distinguish between a test that thoroughly validates behavior and one that simply runs code. Both count the same toward your percentage.
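Compare that with a version that pins down observable behavior. The exact fields on the result are hypothetical, but the structure is what matters:

```javascript
test('processes order', () => {
  const result = processOrder(mockOrder)

  // Hypothetical assertions: the test now fails if processOrder
  // returns the wrong status or an invalid total.
  expect(result.status).toBe('confirmed')
  expect(result.total).toBeGreaterThan(0)
})
```

Both versions produce identical coverage numbers. Only the second one can fail when behavior regresses.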
Coverage Providers: Why Numbers Shift
Modern JavaScript test coverage relies on different instrumentation approaches. Jest defaults to Istanbul-style instrumentation, which transforms your code before execution. Vitest supports both Istanbul and V8-based native coverage.
These providers can report different numbers for identical code and tests. V8 coverage operates at the engine level and sometimes counts coverage differently than Istanbul’s source transformation approach.
Switching providers, upgrading tools, or changing configuration can shift your reported coverage without any code changes. This doesn’t mean one approach is wrong—they measure slightly different things at different levels.
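In Vitest, for example, the provider is a one-line configuration choice. This sketch uses Vitest's documented coverage options:

```javascript
// vitest.config.js
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8', // or 'istanbul'; the two can report different numbers
      reporter: ['text', 'html'],
    },
  },
})
```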
Treat coverage numbers as directional signals, not precise measurements.
Where Coverage Helps and Where It Misleads
Coverage is useful for:
- Identifying completely untested files or functions
- Spotting dead code that never executes
- Finding branches you forgot to test
- Tracking trends over time (coverage dropping on new code)
Coverage misleads when:
- Teams chase percentages instead of testing behavior
- Tests execute code without asserting outcomes
- High numbers create false confidence in test quality
- Thresholds pressure developers into writing shallow tests
Interpreting Coverage in Real Projects
Use coverage reports as a diagnostic tool, not a quality score. When you see uncovered lines, ask: does this code path matter? If it handles errors, edge cases, or critical logic, write tests for it. If it’s genuinely unreachable or trivial, consider whether the code should exist at all.
Review coverage alongside your tests, not instead of them. A file with 60% coverage and strong assertions often provides more confidence than one with 95% coverage and weak tests.
In code reviews, coverage diffs help identify whether new code includes tests. But the review itself should evaluate whether those tests verify meaningful behavior.
Making Coverage Work for Your Team
Set coverage thresholds thoughtfully. A floor of 70-80% prevents obvious gaps without pushing teams toward coverage theater. More importantly, focus on branch coverage for logic-heavy code—it catches gaps that line coverage misses.
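In Jest, for instance, thresholds are declared in the config. The numbers below are illustrative, not a recommendation baked into the tool:

```javascript
// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 75, // prioritize branch coverage for logic-heavy code
      lines: 80,
    },
  },
}
```

Jest fails the run when coverage falls below these values, so keep them at a floor the team can actually sustain.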
Run coverage in CI to track trends, but don’t fail builds on small dips. Refactoring often temporarily reduces coverage as tested code gets removed or restructured.
Conclusion
The goal isn’t a number. It’s confidence that your tests catch real bugs before users do. Coverage helps you find blind spots. Good test design, meaningful assertions, and thoughtful reviews determine whether you actually fill them.
FAQs
What's a good coverage percentage to aim for?
Most teams target 70-80% as a reasonable floor. However, the percentage matters less than what you're testing. Focus on covering critical paths, error handling, and business logic. A lower percentage with strong assertions often beats high coverage with shallow tests that lack meaningful verification.
Should builds fail when coverage drops?
Avoid failing builds on small coverage dips. Refactoring often temporarily reduces coverage as tested code gets removed or restructured. Instead, use coverage trends to identify patterns and review coverage diffs in pull requests to ensure new code includes appropriate tests.
Why do coverage numbers change when I switch tools?
Different coverage providers use different instrumentation approaches. Istanbul transforms code before execution while V8 operates at the engine level. These approaches measure slightly different things, so switching providers or upgrading tools can shift reported numbers without any actual code changes.
How can I tell whether high coverage reflects real test quality?
Review your assertions, not just your coverage numbers. Tests without assertions or with only trivial checks inflate coverage without catching bugs. Mutation testing tools can help by introducing small code changes to verify your tests actually fail when behavior changes unexpectedly.