DORA’s software delivery performance metrics
Discover the essential measurements that can inform your ongoing journey of continuous improvement.
by Nathen Harvey

Technology-driven teams need ways to measure performance so that they can assess how they’re doing today, prioritize improvements, and validate their progress. DORA has identified five software delivery performance metrics that provide an effective way of measuring the outcomes of the software delivery process. DORA’s research shows that these performance metrics predict better organizational performance and well-being for team members.
These software delivery performance metrics can be viewed as both leading and lagging indicators. Leading indicators typically signal potential future changes in a system, while lagging indicators reflect past performance and outcomes.
These metrics function as:
- Leading indicators for organizational performance and employee well-being
- Lagging indicators for software development and delivery practices
Throughput and instability
DORA’s software delivery performance metrics focus on a team’s ability to deliver software safely, quickly, and efficiently. They can be divided into metrics that show the throughput of software changes, and metrics that show instability of software changes.
Throughput
Throughput is a measure of how many changes can move through the system over a period of time. Higher throughput means that the system can move more changes through to the production environment. DORA uses three factors to measure software delivery throughput:
- Change lead time: The amount of time it takes for a change to go from committed to version control to deployed in production.
- Deployment frequency: The number of deployments over a given period or the time between deployments.
- Failed deployment recovery time: The time it takes to recover from a deployment that fails and requires immediate intervention.
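To make these definitions concrete, here is a minimal sketch of how a team might compute the three throughput metrics from its own deployment records. The record structure and field names are hypothetical, not a DORA-prescribed schema; in practice, this data typically comes from your version control and deployment tooling.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records for one application: when each change was committed,
# when it was deployed to production, and (for failed deployments) when service was restored.
deployments = [
    {"committed": datetime(2024, 6, 3, 9, 15), "deployed": datetime(2024, 6, 3, 14, 40),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 6, 4, 11, 5), "deployed": datetime(2024, 6, 5, 10, 20),
     "failed": True, "restored": datetime(2024, 6, 5, 11, 0)},
    {"committed": datetime(2024, 6, 6, 8, 30), "deployed": datetime(2024, 6, 6, 16, 10),
     "failed": False, "restored": None},
]

# Change lead time: median time from commit to running in production.
lead_times = sorted(d["deployed"] - d["committed"] for d in deployments)
median_lead_time = lead_times[len(lead_times) // 2]

# Deployment frequency: deployments per day over the observed window.
window_days = max(
    (max(d["deployed"] for d in deployments) - min(d["deployed"] for d in deployments)).days,
    1,
)
deployment_frequency = len(deployments) / window_days

# Failed deployment recovery time: average time from a failed deployment to restoration.
recoveries = [d["restored"] - d["deployed"] for d in deployments if d["failed"]]
recovery_time = sum(recoveries, timedelta()) / len(recoveries) if recoveries else None

print(f"Median change lead time: {median_lead_time}")
print(f"Deployment frequency:    {deployment_frequency:.1f} per day")
print(f"Failed deploy recovery:  {recovery_time}")
```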
Instability
Instability is a measure of how often software deployments go wrong. When deployments go well, teams can confidently push more changes into production, and users are less likely to experience issues with the application immediately following a deployment. DORA uses two factors to measure software delivery instability:
- Change fail rate: The ratio of deployments that require immediate intervention after reaching production, likely resulting in a rollback of the changes or a “hotfix” to quickly remediate any issues.
- Deployment rework rate: The ratio of deployments that are unplanned but happen as a result of an incident in production.
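As with throughput, these two measures can be derived from a simple deployment log. The sketch below is a minimal, hypothetical example of the calculation; the field names are assumptions, not a DORA-defined schema.

```python
# Hypothetical deployment log for one application over a reporting period.
# "needed_intervention" flags deployments that required an immediate rollback or hotfix;
# "unplanned" flags deployments made in response to a production incident.
deployment_log = [
    {"needed_intervention": False, "unplanned": False},
    {"needed_intervention": True,  "unplanned": False},
    {"needed_intervention": False, "unplanned": True},
    {"needed_intervention": False, "unplanned": False},
]

total_deployments = len(deployment_log)

# Change fail rate: share of deployments that needed immediate intervention.
change_fail_rate = sum(d["needed_intervention"] for d in deployment_log) / total_deployments

# Deployment rework rate: share of deployments that were unplanned incident responses.
rework_rate = sum(d["unplanned"] for d in deployment_log) / total_deployments

print(f"Change fail rate:       {change_fail_rate:.0%}")
print(f"Deployment rework rate: {rework_rate:.0%}")
```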
Taken together, these two factors for software delivery performance (throughput and instability) give teams a high-level understanding of their software delivery performance. Measuring these over time provides insight into how software delivery performance is changing. These factors can be used to measure any application or service, regardless of the technology stack, the complexity of the deployment processes, or its end users.
Key insights
DORA’s research has repeatedly demonstrated that speed and stability are not tradeoffs. In fact, we see that the metrics are correlated for most teams. Top performers do well across all five metrics, and low performers do poorly.
These metrics work for any type of technology your organization is delivering, but are best suited for measuring one application or service at a time. Whether you are building large language models, retail banking applications, mobile food ordering applications, or mainframe-based travel systems, these five metrics can help you assess the delivery performance of your application.
Context matters. Apply the metrics in the context of the application or service your team is delivering. The context for your application, organization, and users will differ from that of the other applications your organization delivers. While it may be tempting to blend metrics across multiple teams, or even entire organizations, these differences in context mean that doing so can be problematic.
“…the real trade-off, over long periods of time, is between better software faster and worse software slower.” —Farley, D. (2021). Modern Software Engineering: Doing what works to build better software faster (p. 154). Addison-Wesley.
Common pitfalls
There are some pitfalls to watch out for as your team adopts DORA’s software delivery metrics, including the following:
- Setting metrics as a goal. Ignoring Goodhart’s law and making broad statements like, “Every application must deploy multiple times per day by year’s end,” increases the likelihood that teams will try to game the metrics.
- Having one metric to rule them all. Attempting to measure complex systems with the idea that only one metric matters. Teams should identify multiple metrics, including some with a healthy amount of tension between them. Choose a measurement framework to fit your organizational goals.
- Using industry as a shield against improving. For example, some teams in highly regulated industries might claim that compliance requirements prevent them from disrupting the status quo.
- Making disparate comparisons. These metrics are meant to be applied at the application or service level. Comparing metrics between vastly different applications (for example, a mobile app and a mainframe system) can be misleading.
- Having siloed ownership. Sharing all five metrics across development, operations, and release teams fosters collaboration and shared ownership of the delivery process. Isolating teams with specific metrics can lead to friction and finger-pointing.
- Competing. The goal is to improve your team’s performance over time, not to compete against other teams or organizations. Use the metrics as a guide for identifying areas for growth and celebrating progress.
- Focusing on measurement at the expense of improvement. The data your team needs to collect for the software delivery metrics is available in a number of different places today. Building integrations to multiple systems to get precise data about your software delivery performance might not be worth the initial investment. Instead, it might be better to start with having conversations, taking the DORA Quick Check, or using a source-available or commercial product that comes with pre-built integrations.
What pitfalls have you encountered? Share your own cautionary tales with the DORA community by posting to the mailing list at https://dora.community.
Dive into the research
DORA’s research goes beyond these five metrics, exploring various capabilities that contribute to high performance. You can learn more about these capabilities and their impact on software delivery by visiting the capability catalog.
By understanding and effectively utilizing DORA metrics, you can gain valuable insights into your software delivery performance and drive continuous improvement. Remember, the goal is to deliver better software faster, and DORA metrics provide the compass to orient teams toward that objective.
Next steps
A common approach to improving the five key metrics discussed in this guide is reducing the batch size of changes for an application. Smaller changes are easier to reason about and to move through the delivery process. Smaller changes are also easier to recover from if there’s a failure. Teams should make each change as small as possible to make the delivery process fast and stable. Working in this way contributes to both change throughput and change stability.
We have found that an effective way of making changes is to gather the cross-functional team that is responsible for prioritizing, building, delivering, and operating an application for a discussion about improving their software delivery performance. Once the team is gathered, walk through the following steps:
- Set a baseline for your application’s current performance using the DORA Quick Check.
- Have a conversation about the friction points in the delivery process. Mapping out the delivery process may help facilitate this conversation.
- Have the whole team commit to making an improvement in the most significant constraint or bottleneck.
- Turn that commitment into a plan, which may include some more specific measures that can serve as leading indicators for the software delivery metrics. For example, you may decide to measure how long code reviews take (see the sketch after this list) or the quality of your tests.
- Do the work. There are very few shortcuts; to make progress, your team may need to change the way they work.
- Check your progress. Use the DORA Quick Check, conversations, and team retrospectives to validate the progress you’ve made.
- Repeat the process as a way to continue the learning and improvement.
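For example, if your plan includes tracking how long code reviews take as a leading indicator, the following minimal sketch shows one way to compute it. The record structure is hypothetical; in practice, the timestamps would come from your code review tooling.

```python
from datetime import datetime
from statistics import median

# Hypothetical code review records: when each review was requested and when it was completed.
reviews = [
    {"requested": datetime(2024, 6, 3, 10, 0), "completed": datetime(2024, 6, 3, 15, 30)},
    {"requested": datetime(2024, 6, 4, 9, 0),  "completed": datetime(2024, 6, 5, 11, 0)},
    {"requested": datetime(2024, 6, 5, 14, 0), "completed": datetime(2024, 6, 5, 16, 45)},
]

# Median review turnaround time; tracked week over week, a downward trend is a
# leading indicator that change lead time should improve as well.
review_times = [r["completed"] - r["requested"] for r in reviews]
print(f"Median code review time: {median(review_times)}")
```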
Join us in the DORA Community for ongoing discussions about how to implement DORA metrics in your organization!