Fostering developers’ trust in generative artificial intelligence

It’s no secret that generative artificial intelligence (gen AI) is rapidly changing the landscape of software development, with discussions about best practices for applying this transformative technology dominating the popular press. Perhaps nowhere on Earth have these discussions been more frequent and passionate than inside the organizations dedicated to making gen AI accessible and useful to developers, including at Google. During one such discussion between researchers on our DORA and Engineering Productivity Research (EPR) teams, we were struck by a recurring finding common to development professionals both inside and outside of Google:

Using gen AI makes developers feel more productive, and developers who trust gen AI use it more.

On the surface, this finding may seem somewhat… obvious. But, for us, it highlighted the deep need to better understand the factors that impact developers’ trust in gen AI systems and ways to foster that trust, so that developers and development firms can derive the most benefit from their investment in gen AI development tools.

Here, we reflect on findings from several studies conducted at Google regarding the productivity gains of gen AI use in development, the impacts of developers’ trust on gen AI use, and the factors we’ve observed that positively impact developers’ trust in gen AI. We conclude with five suggested strategies that organizations engaged in software development might employ to foster their developers’ trust in gen AI, thereby increasing their gen AI use and maximizing gen AI-related productivity gains.

Trust and productivity

Research conducted by Google and other respected voices in the technology industry has consistently found high use of gen AI by software developers, and that developers hold largely favorable views of using gen AI tools at work. Our research suggests that this warm reception of gen AI amongst professional developers is due, in no small part, to significant increases in reported productivity from using gen AI.

The DORA team found that, outside of Google, 75% of 2024 DORA survey respondents reported positive impacts of gen AI on their productivity. Within Google, the EPR team found that a similar proportion of developers reported a positive impact of gen AI on their productivity.

[Figure: AI’s perceived impact on productivity]

Importantly, developers who trust gen AI more reap more positive productivity benefits from its use. In a logs-based exploration of Google developers’ trust in AI code completion, our EPR team found that developers who frequently accepted suggestions from a gen AI-assisted coding tool submitted more change lists (CLs) and spent less time seeking information than developers who infrequently accepted suggestions from the same tool. This held true even when controlling for confounding factors, including job level, tenure, development type, programming language, and CL count. Put simply, with suggestion acceptance as a behavioral proxy for trust, developers who trust gen AI more are more productive.
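
To make this kind of analysis concrete, the sketch below illustrates how an association between suggestion acceptance and developer output might be estimated while controlling for confounders. It is a minimal illustration on synthetic data, not the EPR team’s actual pipeline; all column names (acceptance_rate, cls_submitted, job_level, tenure_years, language) are hypothetical.

```python
# Minimal sketch: estimating the association between gen AI suggestion
# acceptance (a proxy for trust) and CLs submitted, with controls for
# confounders. Synthetic data; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

df = pd.DataFrame({
    # Share of gen AI code suggestions each developer accepted.
    "acceptance_rate": rng.uniform(0, 1, n),
    # Hypothetical confounders, mirroring the kinds of controls named above.
    "job_level": rng.integers(3, 8, n),
    "tenure_years": rng.uniform(0, 15, n),
    "language": rng.choice(["python", "java", "go"], n),
})
# Synthetic outcome: CLs submitted, with a positive acceptance effect baked in.
df["cls_submitted"] = (
    5 + 4 * df["acceptance_rate"] + 0.5 * df["job_level"] + rng.normal(0, 2, n)
).round()

# OLS with categorical and continuous controls; the coefficient on
# acceptance_rate estimates the association net of these confounders.
model = smf.ols(
    "cls_submitted ~ acceptance_rate + C(job_level) + tenure_years + C(language)",
    data=df,
).fit()
print(model.summary())
```

In a model like this, the coefficient on acceptance_rate reflects the relationship between accepting suggestions and CLs submitted after adjusting for the listed covariates; as with any observational analysis, it captures correlation, not causation.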

Unfortunately, developers’ trust in gen AI is relatively low. According to Stack Overflow’s 2024 Developer Survey, trusting the output of gen AI is presently developers’ number one challenge with AI at work, and Google’s research supports this finding. The DORA team found that 39% of developers outside of Google trust the quality of gen AI output only “a little” or “not at all,” and these numbers are similar amongst developers at Google.

We find these low levels of trust concerning, because they suggest that a subset of developers may not be experiencing the productivity gains gen AI could offer them.

Five strategies for fostering developers’ trust in gen AI

Given the significant investment many organizations are making in gen AI, and the likelihood that increasing developers’ trust in gen AI would improve returns on those investments, we believe it is important to reflect on our prior research and offer the following five strategies organizations might adopt to foster developers’ trust in gen AI:

  1. Establish a policy about acceptable gen AI use, even if your developers are good corporate citizens. Policies about the tasks, purposes, and data that developers can and cannot use in conjunction with gen AI tools are often viewed through the lens of preventing “bad” behavior, in the form of proprietary data leaks, security breaches, and poor stewardship of user data. While important, preventing irresponsible use of gen AI is only half of the story. Our data suggest that establishing clear guidelines encouraging acceptable use of gen AI will likely also prompt cautious and responsible developers to use it, by assuaging fears of unknowingly acting irresponsibly. In our survey, the DORA team found that employees who felt their organizations were more transparent about the use of gen AI trusted gen AI more. Unfortunately, in qualitative interviews, most participants were unable to say with certainty whether their organization had any policy at all about the use of gen AI in development. Together, these findings suggest that many firms could increase their employees’ trust in gen AI by providing more explicit direction about its use and sufficiently advertising that policy to their workforce. Google has previously published guidance on how to craft an acceptable use policy for gen AI, including recommendations for specifying scope, assigning enforcement responsibilities, and delegating accountability for data governance and information security. Our research suggests that comprehensive acceptable use policies which include these elements can help developers feel empowered to rely on and use gen AI, by knowing that appropriate risk-mitigating guardrails are in place. But we believe that even a lightweight policy, such as a guide to aligning gen AI-produced code with the company’s programming conventions, is likely a better signal that gen AI use is sanctioned than a simple lack of explicit prohibitions. For some firms, it may even be beneficial to publish their gen AI use policies externally, to assure end users of their products that development was handled responsibly and using reliable sources.

  2. Double down on fast, high-quality feedback, like code reviews and automated testing, using gen AI as appropriate. Developers’ perceptions that their organization’s code review and automated testing processes are rigorous appear to foster trust in gen AI. This is likely because appropriate safeguards assure developers that any errors introduced by gen AI-generated code will be detected before that code is deployed to production. Interestingly, data from the DORA team suggests that the adoption of gen AI makes code reviews faster and improves reported code quality, likely by allowing a wider breadth of code to be analyzed at a faster pace than could reasonably be expected of a human. So, it is possible there is a virtuous cycle: applying appropriate safeguards fosters trust in gen AI and encourages its use in feedback processes, like code reviews and testing, which in turn strengthens the robustness of the safeguards that foster trust, further promoting gen AI’s use. The logical entry point for this cycle is likely encouraging developers to use gen AI in tasks that feel low-risk, until their trust is built and reinforced over time. (A minimal sketch of one such safeguard appears after this list.)

  3. Provide opportunities for developers to gain exposure to gen AI, especially opportunities that let them work in their preferred programming language. Trust in gen AI increases as developers gain exposure to it and grow more familiar with its uses, strengths, and limitations. Additionally, developers appear to trust gen AI more when engaging it for tasks in their preferred programming language, likely because they have more expertise with which to assess and correct its outputs. Providing opportunities to gain exposure to gen AI, like training, unstructured activities, or slack time devoted to trying gen AI, will help increase trust, especially if such activities can be performed in the programming language developers prefer, where they are best equipped to evaluate the quality of gen AI’s output.

  4. Encourage gen AI use, but don’t force it. Leadership actively encouraging the use of gen AI in development work appears to be effective in promoting its use amongst individual contributor developers. At the same time, control over the degree to which gen AI intervenes and the tasks in which it is employed increases developers’ overall trust in gen AI tools. So, while it is likely to the benefit of the organization for people in leadership roles to encourage their ICs to test, evaluate, and employ gen AI in their daily work, it is important developers do not feel obligated to use gen AI. One approach to encouraging gen AI use in a manner that prioritizes developers’ sense of control is to promote the spread of knowledge organically, by building community structures to foster conversations about gen AI.

  5. Help developers think beyond automating their day-to-day work and envision what the future of their role might look like. Fears that gen AI might lead to a future loss of employment for development professionals have been well-publicized and were a recurring theme in our interviews and survey data. We believe much of this fear stems from the fact that the most common uses of gen AI in development replicate the daily work of developers, like writing code or test cases. Delegating these tasks to gen AI has, indeed, made developers more productive. However, without a clear vision of what a developer’s role looks like once these repetitive tasks are delegated to gen AI and developers work at a higher level of abstraction, it will be hard to assuage fears of unemployment. That is, delegating mundane tasks, such as generating test paths, creating documentation, and providing system health monitoring, is a clear first step toward using gen AI effectively. But, long-term, developers will likely need guidance about how to reallocate the time saved, and encouragement to explore new ways to improve user experience, innovate with emerging technologies, and add value for their companies and users. We are unable to predict what development work will look like in the future, or what new tasks will be performed as a result of gen AI increasing developer capacity. But we believe that acknowledging that gen AI will fundamentally change development work, and empowering developers to have a voice in shaping the future of their profession, will foster trust in gen AI by helping developers co-create and move toward a world where gen AI transforms their work, rather than simply replicating it.
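
As promised in strategy 2, here is a minimal sketch of a pre-merge safeguard that treats gen AI-assisted changes exactly like human-authored ones: the test suite must pass before the change is routed to human review, and a gen AI pre-review pass is advisory only. The pytest runner, the branch names, and the ai_pre_review stub are assumptions for illustration; a real implementation would call whichever gen AI service your organization has sanctioned.

```python
# Minimal sketch of a pre-merge safeguard: every change, gen AI-assisted or
# not, must pass automated tests before it becomes eligible for human review.
import subprocess
import sys


def run_tests() -> bool:
    """Run the project's test suite; assumes pytest is the test runner."""
    result = subprocess.run(["pytest", "--quiet"], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode == 0


def ai_pre_review(diff: str) -> str:
    """Placeholder for a gen AI pre-review pass over the diff.

    In practice this would call whichever gen AI service your organization
    has sanctioned; its output is advisory input for the human reviewer,
    not a replacement for review.
    """
    return f"[pre-review stub] {len(diff.splitlines())} changed lines scanned."


def main() -> int:
    # Diff of the current branch against main; branch names are assumptions.
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"], capture_output=True, text=True
    ).stdout
    if not run_tests():
        print("Tests failed: change is not eligible for review.")
        return 1
    print(ai_pre_review(diff))
    print("Tests passed: routing change to human code review.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The design point is that the safeguard applies uniformly: trust in gen AI-generated code comes from the same review and testing gates that human-authored code already passes through, not from special-casing the tool.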

Conclusion

We are still at a moment in which the ubiquitous use of gen AI in development is nascent, and the long-term efficacy of the strategies proposed above remains to be determined. Yet, given our opportunity to see emerging AI-powered development tools in action at Google and across the industry, we wanted to share observations from our early inquiries to help organizations navigate these still-uncertain waters. While widespread gen AI use across the software development lifecycle seems inevitable in the long term, the near-term path to successfully realizing that future is less clear. We believe those individuals and organizations that embrace this change, start with small projects, learn, and iterate will be in a better position to proactively navigate it than those that sit on the sidelines. The good news is that you are not alone in this journey, and we look forward to continuing to co-create the future of software development with you.

Last updated: September 26, 2024