Generative AI is quickly becoming ubiquitous in the world of software development. AI is poised to transform every stage of the software development lifecycle, from initial design to ongoing maintenance. We know that AI is already reshaping how technologists work and fundamentally altering how organizations function. The 2024 DORA report indicates that AI adoption is occurring through both top-down and bottom-up efforts: 89% of organizations are prioritizing the integration of AI into their applications, and 76% of technologists report relying on AI for parts of their daily work.
For AI adoption to lead to long-term success, organizations need to address a crucial stumbling block: scaling adoption across the entire organization. Early successes with new technologies often fail to take hold when organizations struggle to scale them, leaving teams scrambling to keep up while the technology's full potential remains untapped.
However, this doesn’t have to happen with AI. Here are four practical, research-backed strategies that can help your organization move from isolated experiments to sustainable and widespread adoption of AI.
As you look at the strategies below, you might notice that there are many ways you could apply them. This might seem daunting, but it's by design. While our findings offer guidance and specific approaches to follow, successfully implementing these strategies requires a deep understanding of the organization seeking to use them, and a willingness to experiment and iterate.
- Share and be transparent about how your organization plans to use AI
- Alleviate anxieties about AI-related job displacement
- Allow ample time for developers to learn how to use AI
- Create policies that govern the adoption of AI
Ultimately, the goal is to create an environment where developers feel confident and empowered to use AI when it makes the most sense, maximizing its benefits while minimizing potential risks.
Each of these strategies assumes that AI tooling is available to practitioners, and that initial experiments have indicated success in driving beneficial outcomes for practitioners and the organization, including improved productivity and job satisfaction.
Share and be transparent about how the organization plans to use AI
Our research suggests that organizations that apply this strategy see an estimated 11.4% increase in team adoption of AI compared to organizations that do not openly communicate their AI plans to employees. We recommend that organizations provide employees with transparent information about their AI mission, goals, and adoption plan. By articulating both the overarching vision and specific policies, including procedural concerns such as where code may be placed and which tools are available, you can alleviate employee apprehension and position AI as a means to help everyone focus on more valuable, fulfilling, and creative work.
You'll notice a call for transparency reappearing in the strategies that follow. Transparency is part of a healthy organizational culture, and over the years DORA has consistently shown that it helps teams more successfully implement the technical capabilities associated with improved outcomes.
Alleviate anxieties about AI-related job displacement
The rise of AI has sparked concerns among developers that organizations will use it to automate jobs and reduce headcount. DORA found that nearly 15% of respondents expect AI to have a detrimental effect on their careers.
This belief might drive some developers to adopt AI not out of genuine interest or recognition of its potential benefits, but rather out of a reactive need to protect their careers.
One developer we spoke to said, “I think, over the past year or so, people have realized that generative AI is at the point where it actually works for a lot of things. And now… no one wants to get left behind.”
This reactionary approach, fueled by anxieties about job displacement, can hinder the true potential of AI. However, DORA research indicates that organizational leaders who address these concerns in clear terms can enable developers to focus on learning how to best use AI, rather than worrying about its impact on job security.
Our findings indicate that organizations that take steps to alleviate developer anxieties are estimated to see 125% more team adoption of AI than those that ignore these concerns.
We recommend that organizations work towards alleviating developer concerns about job displacement by increasing transparency and communication on how they plan to use AI. This approach can help organizations maintain and even enhance a culture of psychological safety, where developers feel supported as they work to embrace collaborative human-AI partnerships.
AI is here to improve developers' work lives, not to replace them.
Allow ample time for developers to learn how to use AI
Integrating AI into developer workflows has a learning curve. Our data indicates that individual reliance on AI peaks at around 15 to 20 months into using the tool, suggesting that learning how to use AI requires dedicated time for experimentation, practice, and integration with existing workflows.
Providing resources such as training materials and documentation can be helpful: our findings suggest that actively encouraging developers to integrate AI into their workflows leads to a 27% increase in AI adoption. But the bigger lever is time: simply giving developers dedicated time during work hours to explore and experiment with AI tools leads to a 131% increase in team AI adoption.
We envision this dedicated time as a combination of reduced productivity expectations during the adoption phase and ample heads-down time for developers to explore these tools on their own, along with opportunities to collaborate and learn from others around them. This may take the form of community-led hackathons, lunch-and-learns, or communities of practice.
Expecting developers to acquire these skills in their personal time can lead to frustration, burnout, and ultimately hinder adoption. By providing dedicated time for AI learning and exploration during work hours, organizations demonstrate a commitment to developer growth and create a supportive environment for successful AI adoption.
Providing dedicated time for developers to learn and experiment with AI can create an environment that enables, rather than forces, AI adoption. The goal is for organizations to foster a healthy culture where developers feel supported in adopting AI, without top-level mandates that force the technology on them.
While our findings indicate that mandatory training does lead to small increases in AI adoption, we do not believe this to be a sustainable or fruitful strategy. Widespread adoption requires developers to embrace this technology, and to believe its adoption will lead to better outcomes for them and their organizations.
Create policies that govern the adoption of AI
Establishing clear organizational policies around AI usage can provide developers with a framework to confidently, responsibly, and effectively use these tools. Well-defined policies outline appropriate use cases, ethical considerations, and potential risks, guiding developers in making informed decisions about how to best use AI in their workflows.
This clarity can reduce uncertainty, encourage responsible experimentation, and promote a consistent approach to AI adoption across the organization. Our survey suggests that organizations that create AI acceptable-use policies show a 451% increase in AI adoption compared to companies that don't.
Further, DORA demonstrated a strong association between the quality of internal documentation and AI adoption. Perhaps high-quality documentation helps provide the clarity necessary for people to confidently adopt AI.
Addressing security concerns through specific policies and safeguards can help build confidence and mitigate potential risks associated with AI adoption. By understanding potential risks and current practices for data security and privacy, developers can better discern appropriate use cases for AI, ensuring compliance with organizational policies. This guidance can foster a sense of responsibility and can help developers use AI in a manner that prioritizes security and innovation.
Implementing these guidelines may lead to some near-term decreases in the adoption of generative AI. Our data suggests considerable uncertainty about how guidelines for mitigating security and privacy risks, and guidelines outlining when and where to use AI, will impact adoption. This is likely because the impact of these strategies lies in the details and depends on context.
This isn't necessarily a bad thing! It might even be a sign that developers are exercising thoughtful discretion about when to use AI and when not to. Identifying the use cases where developers incorporate AI, alongside those where they choose not to, may help organizations develop a sustainable, long-term approach that uses ongoing feedback to iterate as necessary.
Google has shared guidance on how to craft an acceptable-use policy for generative AI, including recommendations for specifying scope, assigning enforcement responsibilities, and delegating accountability for data governance and information security. Google has also published the Secure AI Framework (SAIF), a conceptual framework for securing AI systems. Both may help inform your own policies.
What we’ve learned
DORA's research finds that AI adoption succeeds in organizations that effectively communicate their vision and provide clear guidelines for AI usage. Based on this insight, we propose four strategies for increasing AI adoption among developers.
All of these strategies suggest that fostering widespread and effective AI adoption among developers requires a transparent, clear, and supportive approach. Organizations without a scaling strategy risk low adoption rates at best, and misuse of AI and AI tools at worst.
The strategies we recommend will help ensure that AI is used strategically and responsibly to enhance productivity, foster innovation, and drive organizational success.
This image displays Bayesian posterior distributions, which represent our updated understanding of a value after considering new evidence. The height of each curve over a value indicates the relative plausibility of that value. Taller, narrower curves mean we’re more certain about our estimates, while shorter, broader curves reflect greater uncertainty.
The value we’re examining is how a specific strategy impacts a team’s adoption of AI, measured by their response to the survey question: ‘Over the last 3 months, approximately what percentage of your team’s work is supported by AI?’ Each curve represents 4000 simulated scenarios, comparing teams that haven’t adopted the strategy at all with those that have thoroughly integrated it.
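For readers curious about the mechanics, here is a minimal sketch of how posterior curves like these can be produced by simulation. The actual model behind the report's estimates has not been published, so the simple Beta-Binomial setup and the survey counts below are hypothetical illustrations of the general technique, not the report's analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
N_DRAWS = 4000  # matches the 4,000 simulated scenarios per curve

# Hypothetical survey counts (not real DORA data): respondents reporting
# AI-supported work, without and with the strategy in place.
without_strategy = dict(adopted=120, total=400)
with_strategy = dict(adopted=210, total=400)

def posterior_draws(adopted, total):
    # A Beta(1, 1) prior updated with binomial survey counts yields a
    # Beta posterior over the group's underlying adoption rate.
    return rng.beta(1 + adopted, 1 + total - adopted, size=N_DRAWS)

p_without = posterior_draws(**without_strategy)
p_with = posterior_draws(**with_strategy)

# Posterior over the relative increase in adoption under the strategy.
relative_increase = (p_with - p_without) / p_without

print(f"median relative increase: {np.median(relative_increase):.1%}")
lo, hi = np.percentile(relative_increase, [2.5, 97.5])
print(f"95% credible interval: [{lo:.1%}, {hi:.1%}]")
```

In this sketch, a narrow credible interval corresponds to the taller, narrower curves described above, while a wide interval corresponds to the shorter, broader ones.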
Next steps
- Download and read the 2024 DORA Report for additional findings from the research program.
- Read more about how to transform your organization for perspective into the importance of ensuring teams have the tools and resources to do their job, and of making good use of their skills and abilities.
- Read about strategies the DORA team has identified for ensuring AI amplifies developer value. This article provides further guidance for ensuring job satisfaction and productivity among developers.
- Join the dora.community to share your approaches and learn about how other organizations are encouraging their developers to adopt AI.