How to enable your software delivery teams to innovate with generative AI

A guide to innovating with generative AI.

by Amanda Lewis

Experimenting with or adopting a new process or technology, such as generative AI, requires a strategic approach across the entire organization. For every role, it's important to start small, plan how you will measure impact, and iterate as necessary. With that focus on impact and iteration, you can measure success through clearly defined outcomes that capture the effectiveness of the new technology and tell you whether it's meeting the needs of your organization.

With over a decade of research into the capabilities that drive the performance of software delivery and operations, DORA provides established strategies and frameworks that you can apply to experimenting with, measuring, and implementing new technologies, including generative AI. These strategies and frameworks are based on findings about how high-performing teams succeed.

Though DORA research focuses on high-performing technology teams, the capabilities that make it possible to successfully implement new technologies involve all levels of an organization, from individual contributors up through teams and then to the entire organization. All levels of your organization have a role in successfully applying the DORA strategies and frameworks to planning, measuring, and iterating with generative AI. Before exploring how you can use the findings from the DORA research, it’s important to understand why continuous improvement and user focus are critical to your generative AI efforts.

Success begins with a foundation of continuous improvement and user focus

An organization’s ability to innovate with generative AI depends on a foundation of both continuous improvement and user focus.

Over the last decade, DORA research has shown that successfully transforming an organization, including introducing new technology, takes time, effort, and clearly defined goals. Because large-scale, transformative change happens iteratively and incrementally, teams need to experiment and continually implement small changes to reach their organizational goals. To make these many small changes, high-performing teams leverage customer feedback, team experimentation, and communities of practice to drive a culture of continuous improvement. You can explore the value of investing in a continuous improvement culture using DORA's ROI Guide.

DORA found that teams that focus on the user have 40% higher organizational performance. User-centric teams continually experiment and adapt to meet the needs of their users. Because these teams have a continuous improvement culture, they can identify bottlenecks and focus on specific areas for enhancement. By improving user satisfaction, these teams also improve their software delivery performance, job satisfaction, and productivity.

The foundation for integrating generative AI, whether into the software you deliver to users or into your software development life cycle, is the ability to continuously implement small changes, observe their impact on users, evaluate the outcomes, and iterate as needed. Combining a culture of continuous improvement with user focus requires involvement from all roles across all levels of your organization. Though the specific contributions of each role might differ (for example, a customer support agent can provide crucial insights about users to developers), all roles at all levels must combine their efforts so that software delivery improves your organizational performance. The following sections of this guide outline how different levels of your organization can apply the DORA findings toward using generative AI.

Ways that teams can innovate with generative AI

DORA research shows that high-performing teams use a combination of practices and capabilities to continuously improve their software delivery and operations performance. Improving these capabilities predicts better software delivery performance, team well-being, and organizational performance. Teams that use the DORA research can baseline their software delivery performance and identify bottlenecks, then use this information along with generative AI to improve their capabilities.

Monitor software delivery performance

The DORA metrics (the four keys) measure the software delivery performance for an application or service. The DORA research is grounded in understanding the capabilities that predict improved software delivery performance.

Generative AI can be used to improve these capabilities and your software delivery. After you establish a baseline of your current software delivery performance, your team can use the DORA metrics to observe the impact of implementing generative AI on software delivery for your application. Regular reviews of the DORA metrics will provide your team with the space to discuss and explore how the use of generative AI contributes to your software delivery performance and team satisfaction.

In addition to the DORA metrics, each capability has targeted metrics that can help evaluate improvements in software delivery. For example, DORA research shows that teams with fast code reviews have 50% better software delivery performance. If code reviews are a bottleneck, your team could focus on improving code review speed and track metrics such as the average batch size and the duration between code completion and review. In the near term, progress is easier to observe at the code-review level; over time, the improvement should also become visible in the DORA metrics.
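
As a minimal sketch (assuming your code review tool can export change sizes and timestamps for when a change becomes ready for review and when its review finishes; the data format here is hypothetical), you could compute these review metrics like this:

from datetime import datetime
from statistics import mean

# Hypothetical export from your code review tool: each change records its
# size plus timestamps for when it was ready for review and when the
# review finished.
changes = [
    {"lines_changed": 120, "ready": "2024-08-01T09:00", "reviewed": "2024-08-01T15:30"},
    {"lines_changed": 45, "ready": "2024-08-02T10:15", "reviewed": "2024-08-02T11:00"},
]

def hours_between(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

avg_batch_size = mean(change["lines_changed"] for change in changes)
avg_review_hours = mean(hours_between(c["ready"], c["reviewed"]) for c in changes)

print(f"Average batch size: {avg_batch_size:.0f} changed lines")
print(f"Average time from code completion to review: {avg_review_hours:.1f} hours")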

Onboard new team members faster

DORA research shows that new team members onboard faster if the team has these capabilities: a generative culture, quality internal documentation, continuous integration, and continuous delivery.

With these capabilities as the foundation, your team can consolidate information from various sources, including documentation, design documents, code repositories, and support discussions. Team members can then use generative AI over this consolidated knowledge base, which makes knowledge easier to share and supports a consistent understanding of it.
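
As a minimal sketch (the sources and documents are hypothetical, and a production setup would typically use embeddings and a vector store rather than keyword overlap), retrieval over a consolidated knowledge base might look like this, with the retrieved snippets passed to a generative AI model as context:

# Documents consolidated from several sources into one knowledge base.
knowledge_base = [
    {"source": "design-doc", "text": "The billing service retries failed charges three times."},
    {"source": "runbook", "text": "Restart the billing service with the deploy script."},
    {"source": "support-thread", "text": "Customers saw duplicate charges during the June incident."},
]

def retrieve(question, documents, top_n=2):
    # Rank documents by how many words they share with the question;
    # a stand-in for embedding-based similarity search.
    question_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(question_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_n]

for doc in retrieve("How does the billing service handle failed charges?", knowledge_base):
    print(f'[{doc["source"]}] {doc["text"]}')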

Providing code explanation through generative AI is especially helpful when onboarding new members to a team or application. New team members get immediate context about the code's logic and implementation. They can also contribute to improved code quality and faster code reviews by using generative AI to review code for bugs, security issues, and opportunities for refactoring. These same uses of generative AI benefit all members of your teams, no matter how long they've been on the team.

Improve team satisfaction

The DORA findings show that collaboration, knowledge-sharing, and psychological safety create an environment where teams can innovate. In these teams, employees feel safe to experiment, make mistakes, and iterate. Teams that feel supported by their employers report higher levels of job satisfaction and performance.

For these teams, the presence of quality internal documentation is critical. High-quality documentation drives the capabilities that enable software delivery performance to positively impact organizational performance. High-performing teams prioritize documentation, recognize it as a valuable contribution, and have integrated it into the software delivery process. In turn, the documentation makes it easier for team members to share and scale their knowledge, align on processes, and find the information they need to be productive.

Teams that have quality documentation can implement generative AI to help team members be more productive. For example, you can implement generative AI within documentation to improve your team's access to relevant information. Generative AI can also help your teams by providing context for code snippets and assisting with tasks such as generating initial documentation drafts and writing tests.

Ways that practitioners can leverage generative AI

DORA research shows that individual well-being and job satisfaction are improved when organizations and teams encourage psychological safety, risk-sharing, knowledge-sharing, increased cooperation, and a culture of learning. These cultural characteristics are outcomes of improving the capabilities that drive high-performing teams. For instance, regularly integrating changes into the main branch of the version control system enhances knowledge sharing and collaboration.

Continue learning

One of the DORA capabilities is having a learning culture, and the DORA research shows that a learning culture is predictive of software delivery performance. Because the technology landscape is rapidly changing, it is vital to learn new tools, frameworks, and languages. At the same time, the allocation of time and resources dedicated to this type of learning is often limited or non-existent.

Despite these constraints, integrating learning into the workflow is crucial for a culture of continuous improvement. Learning opportunities built into your workflow let you deepen existing skills and acquire new ones in the course of your normal work.

As an example for developers, generative AI that’s built into the IDE can provide you and other developers with learning opportunities through a virtual peer-programming experience. Generative AI access in the IDE lets you get code explanations, code suggestions, assistance with writing tests, and context on the implementation of the code.

Receive faster feedback

The DORA findings indicate that the ability to receive fast feedback during the code development process is fundamental to improving code quality. A fast feedback loop helps developers refine their code, improve their skills, learn from mistakes, and gain a deeper understanding of the principles and practices for code maintainability. With fast feedback, developers can identify errors faster and understand why the code is not working as intended. Developers can also make the necessary changes to the code without significant context switching because they worked on the code recently.

Additionally, when issues are surfaced earlier in the software delivery process, especially security issues, they are typically easier to remediate. For example, it's important for developers to understand the implications their code has for the security of the overall system. A capability like continuous integration lets you get fast feedback when code fails both unit and security-related tests. You can then use the test feedback to address code issues before they become a problem for users.
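
As an illustrative sketch (the function and tests are hypothetical), a continuous integration pipeline could run tests like the following on every push, surfacing both functional and security-related failures within minutes:

import html

def build_greeting(username: str) -> str:
    # Escape user input so the greeting is safe to render in a web page.
    return f"Hello, {html.escape(username)}!"

def test_greeting_includes_username():
    assert "Ada" in build_greeting("Ada")

def test_greeting_escapes_html():
    # Security-related test: raw user input must not pass through unescaped.
    assert "<script>" not in build_greeting("<script>alert(1)</script>")

if __name__ == "__main__":
    test_greeting_includes_username()
    test_greeting_escapes_html()
    print("All tests passed")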

Generative AI can facilitate fast feedback loops for developers and help improve code quality. Using generative AI in the development cycle lets you receive feedback prior to a code review. For example, you can use generative AI in the IDE to review the code for bugs, security issues, and opportunities for improving code quality before someone else reviews it.

Provide context for you and your organization

DORA research shows that quality internal documentation and code maintainability are core capabilities for improving software delivery. Teams that have both of these capabilities demonstrate higher levels of organizational performance and software delivery performance.

Both quality documentation and a well-maintained codebase facilitate a developer’s ability to find useful information that provides context for processes and code. Context is critical for developers because it helps them understand what the code needs to do, and why.

Generative AI can provide both you and your organization with informational context that enhances code quality, learning, code review speed, and documentation. For example, implementing generative AI in the IDE is an opportunity to provide more context to future readers of the code, as shown in the following three code comments. The first comment is a traditional note to self that you might write as a developer. It reminds you, the developer who wrote it, to refactor the function, but it provides no context about what the function does or why it needs to be refactored.

# TODO refactor this function

In contrast, the second comment is written as a prompt, so you can use generative AI to assist with code generation. This type of prompt gives the model detailed information about what the function does and how it needs to be refactored.

# This function calculates and returns the average of a list of numbers.
# Without altering its functionality, refactor it:
# - handle potential division by zero errors gracefully, instead of crashing
# - optimize performance for larger inputs
# - use clear variable names and comments to explain the logic.

After you refactor the function, you can easily revise the comments to provide brief context on the purpose of the code. This revised comment provides context for future users of the function, and you could even refer to it to refresh your memory if you need to revisit the code.

# calculates and returns the average of a list of numbers
# handles potential division by zero errors
# optimizes performance for larger inputs
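
For illustration, here is one possible function (a hypothetical sketch, assuming the comments above describe a simple averaging helper) that could result from this refactoring:

def calculate_average(numbers):
    # Calculates and returns the average of a list of numbers.
    # Returning 0.0 for an empty list avoids a division-by-zero crash.
    if not numbers:
        return 0.0
    # The built-in sum() makes a single pass over the list, which scales
    # well to larger inputs.
    return sum(numbers) / len(numbers)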

As an additional benefit, prioritizing quality code comments, design documents, documentation, and similar assets improves the context available to the large language model (LLM) behind your generative AI tools, delivering ongoing value to your organization.

Ways that leadership can use generative AI to enable innovation

DORA research shows that transformational leaders play a pivotal role in enhancing software delivery and operations performance. These leaders create a blameless environment that promotes experimentation, continuous learning, trust, and practitioner empowerment. By encouraging practitioners to embrace these practices and capabilities, leaders create an environment for continuous improvement. They also establish incentive structures that acknowledge and reward teams for delivering value-driven outcomes to their users. By fostering a culture of continuous improvement and continuous learning, these leaders make it possible for their teams to experiment and innovate with generative AI.

Manage the well-being of your organization

DORA research results have shown that improving software delivery performance predicts improved well-being of the team. Well-being is a capability that’s a composite of burnout, productivity, and job satisfaction. In 2023, DORA found that when organizations used AI, it reduced an individual’s level of burnout, while increasing productivity and job satisfaction.

Leaders can use generative AI across different capabilities to improve organizational well-being. For example, the DORA research shows that an organization's technical capabilities predict performance. Empowering teams to choose tools is a core technical capability that leads to improved performance and increased job satisfaction. Encourage your teams to experiment with generative AI and explore it as a possible tool. However, be sure to communicate your guidelines on cost, your policies, and your processes for exceptions throughout the organization. The freedom to choose tools must be balanced with an understanding of the factors, such as cost, that constrain tool choice.

Software delivery flow is another area where your organization can use generative AI. In addition to predicting performance, the DORA research found that technical capabilities indicate the presence of a healthy culture, which is also foundational to well-being. Implementing generative AI into your organization's software delivery flow can help improve key capabilities that drive software delivery performance, organizational performance, and well-being. These capabilities include having a learning culture, prioritizing documentation quality, and implementing security earlier in the software development lifecycle.

An organization's focus on its users also represents an area for exploring generative AI. User-centricity leads to better team and organizational performance, and increased performance contributes to the well-being of your organization's culture. Exploring the use of generative AI to better understand your users provides opportunities to bring your teams closer to them. For example, your organization could implement a chatbot and analyze user interactions with it to gain insight into users' concerns, or leverage user feedback to customize users' experiences with your applications.
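
As a minimal sketch (the transcripts and keyword categories are hypothetical; in practice these would come from your chatbot's logging pipeline), an organization could start summarizing recurring user concerns from chatbot interactions like this:

from collections import Counter

# Hypothetical chatbot transcripts exported for analysis.
transcripts = [
    "I was charged twice for my subscription",
    "How do I reset my password?",
    "The app crashes when I upload a photo",
    "Why was my card charged again?",
]

# Map each concern to the keywords that signal it.
concern_keywords = {
    "billing": ["charged", "refund", "card"],
    "account": ["password", "login", "reset"],
    "stability": ["crash", "crashes", "error"],
}

concerns = Counter()
for message in transcripts:
    text = message.lower()
    for concern, keywords in concern_keywords.items():
        if any(keyword in text for keyword in keywords):
            concerns[concern] += 1

for concern, count in concerns.most_common():
    print(f"{concern}: {count} conversations")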

Adapt quickly to changing market demands

Over a decade of DORA research indicates that high-performing teams keep their applications in a deployable state at all times. The capabilities required to maintain that deployable state depend on strategic coordination and collaboration throughout an organization. High-performing teams achieve these outcomes by making small, continuous improvements over time. When teams have the autonomy and incentive to improve their performance, they are better equipped to adapt to changing market demands.

Your teams can adapt more quickly to changes in the market and user needs by experimenting with a focus on the user, continuous improvement, and incremental changes. Teams are experts in their applications and are best positioned to understand the details and flow of their work. At the organizational level, encourage your teams to first understand the user, identify bottlenecks, and reward continuous improvement. Then, when you add generative AI to your organization's resources, you can evaluate and experiment with ways to improve the software delivery process. Engaging the whole organization in this evaluation and experimentation helps teams find solutions that both benefit users and remove bottlenecks for the team.

Outcomes define success

As your organization explores integrating generative AI into your software delivery and operations process, foster an outcome-oriented mindset within your teams. Improvements take time. Ensure that your organization allocates enough time, resources, and space so these improvements can yield tangible results.

Instead of solely defining success by the adoption of generative AI, it’s more beneficial to base success on the impact of generative AI on specific outcomes. These outcomes should directly influence software delivery performance, overall well-being, and the organization’s performance as a whole.

When you integrate generative AI into your workflow, it’s important to remember the J-curve effect. Initially, productivity might dip as you invest time in learning, adjusting processes, and potentially facing unforeseen challenges. This is the downward slope of the “J.” However, the downward slope is not necessarily the final outcome. With persistence and strategic implementation, you can climb the upward curve and benefit from the efficiency, innovation, and problem-solving capabilities of generative AI.

Next steps

To innovate your software delivery and operations with generative AI, a culture of continuous improvement and user-centricity is essential. Begin by establishing a baseline, nurturing curiosity and experimentation, and cultivating a collaborative community of practice.

Last updated: August 16, 2024