Even since the publication of our DORA 2024 Report in October of last year, the landscape of generative AI tools used by developers has changed significantly. New interaction models like coding agents and command-line interfaces have opened use cases for AI across the entire software development lifecycle. Free licenses for AI-assisted coding tools and an array of enterprise AI solutions have expanded access to the power of AI-human collaborations to a wider number of users and new development styles are emerging. Many of these advancements make our team quite optimistic (and equally inquisitive) about what the future holds for the practice of software development.
Unfortunately, across that same time period, the most significant criticism of these tools has remained largely unchanged⸺their potential to hallucinate and display inaccurate information in their outputs. In some cases, hallucinations may be relatively benign, or even humorous. But, when used in critical applications, like software development, it is monumentally important that generative AI tools provide accurate information. For that reason, many technical improvements in AI have been directed at mitigating this risk, like Model Grounding which connects LLM outputs to verifiable sources of information to improve accuracy, factuality, and transparency.
Because of the potentially serious consequences of hallucinations and inaccuracies, it is unsurprising that these concerns have dominated conversations about the appropriateness of using generative AI tools in software development and the importance of human-in-the-loop AI. At the same time, the intense focus on these particular limitations has left little room to discuss whether developers perceive other potential issues with the use of LLMs in their coding practice, and what technical improvements might need to be made to mitigate those associated risks. So, we wondered:
What concerns do developers have about using AI, other than its accuracy?
To answer this question, we conducted a series of 8 in-depth interviews with development professionals, each lasting approximately 90 minutes. Participants were asked whether they had concerns about using AI in their professional lives beyond its accuracy, what those concerns were, and whether they believed those concerns were likely to be manifested in reality.
Developers’ (other) concerns
During these interviews, participating development professionals each identified at least one concern⸺other than hallucinations and inaccuracy⸺about using generative AI tools in their professional work. Using a qualitative data analysis method called Thematic Coding in which participant quotes describing similar phenomena are systematically grouped into broader categorical “themes,” we identified a total of 11 categories of concerns raised by participants. Of these, the following five categories of concern were identified by at least half of participants:
-
Data privacy: Five participants expressed concerns about whether their internal data might be used to train their AI tools’ foundational models, and thereby could be leaked to an external audience using the same tools. Notably, some participants whose employers had provided enterprise licenses for AI-assisted software development to them expressed this concern, even when the terms of service for those tools specified that users’ data will not be used to train foundational models. For example, one participant shared that they worry about data privacy because AI tool providers “claim they won’t use my data to train for future study. And, I don’t trust that [because] it just depends on how you are wording, or rewording the terms [of service],” suggesting their belief that there may be a loophole or exception that would allow LLM developers to train, even when it appears as though they will not.
-
Deskilling: Five participants expressed concerns that the productivity gains provided by generative AI tools might contribute to an increased reliance on their assistance in coding tasks, ultimately to the detriment of their own personal coding skills. In many cases, the imagined consequence of this potential loss of skills was that participants might be unable to complete their job in the absence of AI-assistive coding tools. But, in one case, a participant worried even that they “would, as a person, just be not as sharp, not as smart. And, I don’t want that,” pointing to a wider concern about loss of cognitive ability, beyond those required for their job.
-
Job displacement: Four participants acknowledged a concern that generative AI’s abilities to significantly increase productivity might ultimately displace development jobs. Interestingly and importantly, these participants did not express fears that their own jobs were at risk. Rather, they linked this potential primarily to a plethora of news articles with sensationalist headlines promising that the vast majority of code will be generated by AI in the very near future. One participant even directly refuted this idea, saying "[job loss] is not a concern of mine. I mean, I can only really see [AI] creating jobs," suggesting a disconnect between media promises and expectations of individuals in the field.
-
Malicious use: Four participants expressed having concerns that generative AI coding tools might be used for malicious purposes, including generating malware, engineering sophisticated phishing attacks, and even opening the potential for new attack vectors, as in “model poisoning” and “prompt injection” attacks. Several participants felt that the potential for malicious use was inevitable, and part-and-parcel of the utility of AI-assistive development tools. One shared their view that “whenever there is some good technology, right there will be some bad actors,” while another described this as “the double-edge sword of creating any new useful tool.”
-
Impacts on development culture: Four participants expressed concerns about whether and how generative AI-assistive coding tools might affect the culture surrounding software development. In addition to wondering how these tools might affect the process of conducting professional software development, some participants worried about the preservation of traditions surrounding the crediting and attributing of innovative solutions to individual developers. When asked to explain why they worried about crediting innovative solutions to specific posters on Stack Overflow, one participant explained the value of attribution, as “saying, ‘Hey, this person is really good at what they do and maybe they’re someone that we would hire or maybe they’re someone that we would consult with or whatever.’ Because, I mean, I like doing these consulting things. I like sharing my information and giving information. And, so, I’m a pay it forward kind of person, and, so really it would just be [good] to have that notated. And, I like the fact that you can say, ‘Hey, this was part of my code!’ or something.”
Put simply, developers do have concerns about using generative AI tools in coding beyond whether those tools are accurate, which should continue to be addressed, even as models improve.
Five strategies for addressing developers’ broader AI concerns
-
Continue providing clarity to developers about relevant Acceptable Use Policies (AUPs) for generative AI. The Terms of Service (TOS) of many popular, enterprise-grade, generative AI tools used in development, like ChatGPT and Gemini Code Assist, state that customer data will not be used to train foundational models without permission. Some AI providers explicitly protect their customers by indemnifying the customers from copyright disputes. We believe providers of enterprise-ready generative AI tools act in good faith to uphold their TOS. But, the nature of generative AI makes it difficult, if not impossible, for developers to personally ensure that company data is used responsibly. At the same time, the fact that developers who believe they are acting in accordance with their company’s best interests fear that they may inadvertently be responsible for a data breach suggests a pervasive lack of clarity in many organizations’ AUPs. We have previously noted that having clear and well-advertised AUPs may help foster developers’ trust in generative AI. Here, our research further suggests that clear and well-advertised AUPs may also help developers understand their personal responsibilities in maintaining data security, as well as those of their employers and their AI providers.
-
Provide opportunities, slack time, and resources for AI-related skill development. Although effectively working within AI-assistive and AI-driven development paradigms may require developers to develop many new skills, like prompt engineering, there remains anxiety about retaining existing skills through this transitional period. It is possible that this anxiety results, in part, from our collective inability to predict what responsibilities and expectations will be placed on development professionals in the near future. For example, many participants expressed a concern that they might no longer be able to pass an in-person coding interview without the aid of AI tools. But, at the same time, it is unclear whether we should expect that in-person coding interviews absent of AI-assistive development tools remain the norm. Put differently, which coding-related skills developers might choose to relinquish and which to retain is still unclear. While we collectively navigate those changes, emphasizing the new skills that can be acquired, and providing opportunities, slack time, and resources to develop those skills, may help assuage developers’ concerns about their coding skills and critical thinking abilities atrophying. Importantly, framing the proliferation of AI as a learning opportunity may also help developers continue to perceive their work as valuable.
-
Address job insecurity concerns, by helping developers envision the future of development work. We found previously that awareness of potential AI-related job loss hinders developers’ trust in, and associated use of, AI. Although the developers interviewed in this study did not express a salient fear that their own jobs were presently at risk, the prevalence of popular media discussing this potential⸺not just for development but across a range industries⸺makes it difficult to discuss advancements in AI-assistive development tools without invoking fears of economic insecurity. We hypothesize that these fears stem, in part, from the fact that many AI-assistive coding tools replicate tasks that are central to the present state of development work, like generating code and writing documentation. At the same time, delegating these tasks to generative AI may provide opportunities for developers to operate at higher levels of abstraction, fundamentally changing the shape of the practice of software development. We believe that inviting development professionals into the process of envisioning the future of their practice will be important both for realizing the full potential of AI-driven development and for assuaging fears of job insecurity prompted by often-sensationalist media.
-
Double down on fast, high-quality feedback, like rigorous code reviews and automated testing, and consider new layers of defense, like sandboxing AI-generated code. Just as generative AI tools produce novel opportunities for enhanced developer productivity and creativity, they also produce novel attack vectors, like prompt injection, which can be exploited by bad actors. Security threats are certainly not new to the field of software development, nor are they specific to the use of generative AI. However, the nature of these vulnerabilities and the appropriate approach for mitigating them may yet be unknown to many developers. As such, the salience of developers’ fears that AI might be used for malicious purposes might suggest a need for new layers of defense, like sandboxing AI-generated code to test before deployment. At minimum, developers’ fears of new attack vectors are a valuable reminder of the importance of fast, high-quality feedback inclusive of identifying and reducing security vulnerabilities. Importantly, the DORA Research Team has consistently shown that fast, high-quality feedback contributes to improved software delivery performance, organizational outcomes, and developer well-being.
-
Give credit to developers, where credit is due. Whether developers will still be given credit for innovative, thoughtful, and efficient solutions in an AI-assistive or AI-driven development paradigm was a significant area of concern for developers who were interviewed. Working with generative AI to arrive at solutions can help increase developers’ efficiency in arriving at certain solutions⸺but it does not work automagically. Using generative AI requires real labor from developers. Explicitly recognizing this work will be important for assuaging developers’ concerns about using generative AI in their workflow, as we argued previously in relation to whether developers’ perceive their work as valuable.
Conclusion
Concerns about generative AI’s potential to hallucinate or provide inaccurate information have dominated discussions about whether it is ready to take on a central and critical role in the software development process. But, we found that developers have other concerns about using generative AI in their workflows that are also important to address directly. Only the future will tell whether developers’ concerns beyond the tools’ accuracy are well-founded or will ultimately be inconsequential. But, as foundational models continue to improve in the direction of increased accuracy and grounding, we believe it is important to broaden our discussions about what potential limitations of AI are most salient, so that we can, collectively, continue to ensure that AI-assistive development tools are designed to be maximally beneficial to developers and society.