Top Software Test Metrics and How They Can Help Your Organization


Software test metrics are quantitative measurements used to estimate the progression of software tests, the productivity of the testing team, and the quality of the underlying software your organization is developing.

You can use software test metrics to:

  • Evaluate testing efforts. Several metrics show how thorough your tests have been, which in turn signals high-quality or low-quality software.
  • Reduce costs. You can track relevant defect metrics and save money over time by using the information gleaned to prevent costly future defects.
  • Make decisions. Test metrics can help you make more informed decisions about processes or technologies that need modifying. For example, you might scale back automation if you find that automating your tests takes more time than performing manual tests.

Conducting software testing without metrics gives you little insight: you know that you tested the software, but you get no feedback from the testing process that you can use to drive future improvements.

Not every metric is equally useful, though. Bad metrics don’t serve a clear beneficial purpose. For example, a poor metric such as counting the number of bugs raised per tester encourages excess competition among team members, whereas a collaborative effort is what actually delivers high-quality software.

The following post gives you an overview of some of the top software test metrics you can use within your organization. These metrics provide actionable information that helps your organization improve its software development.

1. Automation Progress

Automation progress measures the percentage of test cases that have been automated out of the total possible automatable cases.

This metric is important because it helps you evaluate your organization’s automation efforts. An organization should set an automation goal for test cases, for example, 70% automation, and try to reach it. You can measure automation progress in short durations such as per sprint, or over the course of an entire project to see whether your testing teams are meeting the defined automation goals.

You can calculate automation progress as a simple percentage: divide the number of automated test cases by the total number of automatable test cases and multiply by 100.
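As a sketch, the calculation above can be expressed as a small helper function (the function name and example counts are illustrative, not from the source):

```python
def automation_progress(automated: int, automatable: int) -> float:
    """Percentage of automatable test cases that have been automated."""
    if automatable <= 0:
        raise ValueError("automatable must be positive")
    return automated / automatable * 100

# Example: 140 of 200 automatable test cases are automated -> 70.0%
print(automation_progress(140, 200))
```

With a 70% automation goal, a team tracking this per sprint can see immediately whether they have reached the target.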

2. Defect Removal Efficiency

Defect removal efficiency (DRE) measures the effectiveness of a team’s testing efforts. To calculate this metric, you need to track the number of bugs found in development, the number of bugs found in production, and associate each bug with a specific release or time period (see this post for more details).

An important caveat is that this metric doesn’t tell you about the severity of bugs found. But for a quick overview of whether your development teams are running a tight enough testing ship, DRE is a good metric.

For example, a DRE of 80% tells you that 20% of defects escaped the testing team’s efforts. You can therefore use this metric as the basis for investigation into what went wrong with the testing efforts and thus improve future test efficacy by taking corrective measures.
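The DRE calculation described above can be sketched as follows (function name and counts are illustrative assumptions):

```python
def defect_removal_efficiency(found_in_dev: int, found_in_prod: int) -> float:
    """DRE: share of all known defects caught before release, as a percentage."""
    total = found_in_dev + found_in_prod
    if total == 0:
        raise ValueError("no defects recorded for this release")
    return found_in_dev / total * 100

# Example: 80 defects caught in development, 20 escaped to production -> 80.0%
print(defect_removal_efficiency(80, 20))
```

A DRE of 80% here matches the example in the text: one in five defects escaped to production, which is the starting point for investigating the testing process.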

3. Defects Per Requirement

Measuring the defects found per requirement is a good way to identify poorly defined requirements or those requirements that have more risk than might be worth implementing.

To get the defects per requirement metric, simply list the requirement names and the number of defects found while testing the implementation of those requirements. If you use an Agile framework, you can replace requirements with user stories to identify poorly written or overly complex stories.
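A minimal sketch of this tally, assuming a hypothetical defect log where each defect is tagged with the requirement it affects (the requirement IDs are made up for illustration):

```python
from collections import Counter

# Hypothetical defect log: each entry is the requirement a defect was traced to.
defects = ["REQ-1", "REQ-3", "REQ-3", "REQ-3", "REQ-2", "REQ-3", "REQ-1"]

# Count defects per requirement, most defect-prone first.
defects_per_requirement = Counter(defects)
for requirement, count in defects_per_requirement.most_common():
    print(f"{requirement}: {count} defect(s)")
```

A requirement that dominates this list (here, REQ-3) is a candidate for closer review: it may be poorly defined or riskier than it is worth.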

By identifying risky requirements that tend to produce many defects, you can make decisions about whether to avoid similar requirements for future projects.

4. Test Progress

Test progress tracks the number of test cases executed per unit of time. You can use this metric to compare the test progress against an overall project plan.

Therefore, test progress provides actionable information that tells you whether your development teams are meeting the goals you set.
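A simple way to compare executed test cases against a plan, per sprint, might look like this (the sprint names and counts are hypothetical):

```python
# Hypothetical plan: test cases scheduled vs. actually executed per sprint.
planned = {"sprint-1": 50, "sprint-2": 60, "sprint-3": 55}
executed = {"sprint-1": 48, "sprint-2": 41, "sprint-3": 55}

for sprint, target in planned.items():
    done = executed[sprint]
    status = "on track" if done >= target else "behind"
    print(f"{sprint}: {done}/{target} test cases executed ({status})")
```

A sprint that falls well short of its target (sprint-2 above) is an early warning that the project plan and the testing effort are drifting apart.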

5. Test Cases Per Requirement

The main aim of any software testing effort is to test as much of the software code as possible.

Measuring test cases by requirement gives you a clear picture of the features that are being tested and whether each test is aligned with a specific requirement or user story.

You can also track the pass and fail rates for tests to identify problematic requirements that are proving difficult to implement in your software.

This metric is straightforward—simply list your requirements, map a test case to each requirement, and mark the result of each test.
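The mapping described above can be sketched as a small dictionary of requirements to test results (the requirement IDs, test names, and results are hypothetical):

```python
# Hypothetical mapping of requirements to test cases and their latest results.
test_results = {
    "REQ-1": {"test_login_ok": "pass", "test_login_lockout": "pass"},
    "REQ-2": {"test_checkout": "fail", "test_checkout_empty_cart": "pass"},
    "REQ-3": {},  # no test case mapped yet -- a coverage gap
}

for requirement, tests in test_results.items():
    if not tests:
        print(f"{requirement}: NOT COVERED")
        continue
    failed = [name for name, result in tests.items() if result == "fail"]
    print(f"{requirement}: {len(tests)} test(s), {len(failed)} failing")
```

This surfaces both problems the section mentions: requirements with failing tests (REQ-2) and requirements with no test coverage at all (REQ-3).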

Closing Thoughts

  • Software test metrics are useful tools for tracking software testing efforts, software quality, and the entire software development process.
  • While metrics can provide valuable information for improving software development, no metric is perfect. Be careful with the conclusions you draw from different metrics and analyze all data carefully.
  • Organizations use many different strategies and techniques to test both functional and non-functional requirements, which makes it challenging to get a holistic picture of the entire testing process and the quality of the software they produce.
  • An example of a holistic solution is the SeaLights platform, which provides a unified dashboard across all types of tests. In addition, SeaLights lets development teams adopt a shift-left approach, moving testing as early in the development pipeline as possible.
  • By investing in a platform that measures test coverage across many types of tests, your organization gets a centralized perspective on software quality, catering for more informed decision making for future development projects.