The cost of ambiguity
In the rapidly evolving landscape of software development, ambiguity is the enemy of both innovation and safety. Without clear guidance, organizations face two equally detrimental extremes: developers acting too conservatively out of fear they will cross an invisible line, or acting too permissively and using unvetted tools that introduce significant risk.
A clear and communicated AI stance acts as a corrective to this uncertainty. It is not merely a legal document buried in a wiki; it is a comprehensible, widely socialized framework that tells developers: “Here is how we expect you to use AI, here are the tools you can use, and here is how we support you.”
DORA research found, with a high degree of certainty, that this capability is a powerful amplifier. When an organization’s stance is clear and well communicated, it amplifies AI’s positive influence on individual effectiveness, organizational performance, and software delivery throughput. Perhaps most importantly, it turns the neutral effect AI often has on friction into a beneficial decrease, making AI less a source of anxiety and more a tool for removing it.
The AI Angle: Psychological safety as an enabler
Ambiguity stifles adoption. If a developer isn’t sure whether using a coding assistant will get them fired or sued, they will likely either hide their usage (shadow AI) or avoid the technology entirely.
A clear stance provides the psychological safety needed for effective experimentation. It shifts the cognitive load from “Am I allowed to do this?” to “How can I use this to solve my problem?” By defining practical guardrails—such as which data classification levels are appropriate for which tools—organizations empower teams to innovate within safe boundaries.
Research indicates that a successful stance is defined by four key perceptions among developers:
- Expectation: Does it feel like using AI is expected?
- Support: Does the organization support experimentation?
- Permission: Is it clear which tools are permitted?
- Applicability: Does the policy apply specifically to my role?
How to implement a clear AI stance
Building a clear stance is a journey that moves from executive vision to daily workflows. It requires a cross-functional effort to balance risk management with developer reality.
Secure executive sponsorship
Clarity starts at the top. Leadership must define and champion a clear AI mission and adoption plan. This signals strategic importance and provides the authority necessary to enact a comprehensive policy across the organization.
Form a cross-functional working group
A policy created solely by the legal or security department is unlikely to survive contact with reality. Author your stance using a working group that includes representatives from engineering, legal, security, IT, and product leadership. This group is uniquely positioned to balance risk with utility.
Adopt a “three-bucket” framework
Avoid a binary “yes/no” policy. Instead, categorize tools and use cases into three clear buckets, as sketched after this list:
- Prohibited: High-risk uses that are never allowed (e.g., pasting customer PII into a public chatbot).
- Permitted with guardrails: Allowed only with specific controls (e.g., using proprietary code with an enterprise-grade, approved tool).
- Allowed: Low-risk, high-value activities that are actively encouraged (e.g., generating boilerplate code or brainstorming ideas without proprietary data).
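To make the buckets actionable rather than purely advisory, some teams encode them as a small, machine-readable policy that documentation and tooling can share. The sketch below is one hypothetical way to do that; the tool names, data classifications, and clearance levels are illustrative assumptions, not part of the DORA guidance or any prescribed schema.

```python
# Hypothetical sketch: the three-bucket framework as a machine-readable policy.
from enum import Enum


class Bucket(Enum):
    PROHIBITED = "prohibited"
    PERMITTED_WITH_GUARDRAILS = "permitted_with_guardrails"
    ALLOWED = "allowed"


# Made-up registry: tool -> highest data classification it is cleared to handle.
APPROVED_TOOLS = {
    "enterprise-assistant": "proprietary",  # enterprise-grade, under contract
    "public-chatbot": "public",             # no contractual data protections
}

# Data classifications ordered from least to most sensitive.
SENSITIVITY = ["public", "internal", "proprietary", "customer_pii"]


def classify_use_case(tool: str, data: str) -> Bucket:
    """Map a proposed (tool, data classification) pair to a policy bucket."""
    if data == "customer_pii":
        return Bucket.PROHIBITED             # never goes into an AI tool

    cleared = APPROVED_TOOLS.get(tool)
    if cleared is None or SENSITIVITY.index(data) > SENSITIVITY.index(cleared):
        return Bucket.PROHIBITED             # unvetted tool, or data above its clearance

    if data == "public":
        return Bucket.ALLOWED                # low risk: boilerplate, brainstorming
    return Bucket.PERMITTED_WITH_GUARDRAILS  # e.g., proprietary code in an approved tool


print(classify_use_case("public-chatbot", "customer_pii"))       # Bucket.PROHIBITED
print(classify_use_case("enterprise-assistant", "proprietary"))  # Bucket.PERMITTED_WITH_GUARDRAILS
print(classify_use_case("enterprise-assistant", "public"))       # Bucket.ALLOWED
```

However you encode it, the point is that a developer (or a pre-commit check, or a chatbot in your developer hub) can get an unambiguous answer for a specific tool and data combination instead of guessing.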
Publish as a “living document”
Don’t lock your policy in a static PDF. Publish it as a living document in a central, searchable developer hub. This hub should host the official list of approved tools, the policy itself, and an evolving FAQ.
Socialize and establish feedback loops
A policy is useless if no one knows it exists. Launch it through “town halls” and team meetings. Crucially, establish a feedback loop where developers can ask questions or suggest new tools. This allows the policy to evolve alongside the rapid pace of AI technology.
Common pitfalls
- The “one-time” policy: Treating the stance as a static document. AI technology changes monthly; your policy must be reviewed and updated regularly based on lessons learned.
- The “whiplash” effect: Changing the policy too frequently without giving teams time to adapt. Constant changes can be worse than no policy at all.
- Myopic authorship: Allowing a single department (like Legal or Security) to dictate terms without Engineering input creates a stance that ignores the practical realities of software development.
Why this deserves investment
We know our teams are already using AI—the industry data says 90% of developers are. Right now, they are likely doing so in the shadows or with hesitation. By establishing a clear and communicated AI stance, we aren’t just ‘checking a compliance box.’ We are actively reducing friction and enabling our teams to work faster. DORA research shows that when we clarify our stance, we amplify the gains in individual effectiveness and organizational performance.
How to measure
The most direct way to measure this capability is by surveying your teams to gauge their perception of clarity. You cannot simply measure the existence of a document; you must measure the comprehension of it.
Survey questions to ask your teams
- Clarity: To what extent is it clear which AI tools you’re allowed and not allowed to use at work?
- Support: To what extent does your organization support you with experimenting with AI?
- Expectation: To what extent do you feel that the use of AI at work is mandatory?
- Applicability: To what extent does your organization’s AI policy directly apply to your work?
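If you run these questions on a Likert scale, a lightweight aggregation makes the results easy to trend over time. The sketch below assumes a 1–5 scale and made-up responses; the dimension names simply mirror the four questions above.

```python
# Hypothetical sketch: aggregate Likert-scale (1-5) responses to the four
# stance questions into per-dimension averages you can trend over time.
from statistics import mean

# Each response maps dimension -> score; the data here is invented.
responses = [
    {"clarity": 4, "support": 3, "expectation": 2, "applicability": 4},
    {"clarity": 2, "support": 4, "expectation": 3, "applicability": 3},
    {"clarity": 5, "support": 5, "expectation": 4, "applicability": 5},
]

dimensions = ["clarity", "support", "expectation", "applicability"]
scores = {d: round(mean(r[d] for r in responses), 2) for d in dimensions}

print(scores)  # e.g. {'clarity': 3.67, 'support': 4.0, ...}
```

Movement quarter over quarter is usually more informative than any single absolute value; for example, a low clarity score paired with a high expectation score can suggest that shadow AI use is filling the gap.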
System metrics to track
- Policy Engagement: Page views and dwell time on your internal AI policy documentation.
- Clarification Volume: The number of questions asked in public channels regarding tool approval or policy interpretation.
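Clarification volume can be approximated from a chat export. The sketch below is a hypothetical keyword-based tally per ISO week; real channels will need smarter matching, but the signal to watch is whether the volume declines after the policy launches.

```python
# Hypothetical sketch: count policy-clarification questions per ISO week
# from a chat export, to see whether volume falls after the policy launch.
from collections import Counter
from datetime import datetime

# Made-up export: (timestamp, message text).
messages = [
    ("2025-03-03T10:15:00", "Are we allowed to use Copilot on the billing repo?"),
    ("2025-03-04T09:02:00", "Is this chatbot an approved tool?"),
    ("2025-03-12T14:30:00", "Standup notes for today"),
]

KEYWORDS = ("allowed", "approved tool", "ai policy", "permitted")

per_week = Counter(
    datetime.fromisoformat(ts).strftime("%G-W%V")
    for ts, text in messages
    if any(k in text.lower() for k in KEYWORDS)
)

print(per_week)  # e.g. Counter({'2025-W10': 2})
```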
More from DORA
Read more about a clear and communicated AI stance in the following sources:
- DORA AI Capabilities Model report
- 2025 State of AI-assisted Software Development report
- Fostering developers’ trust in generative artificial intelligence
- Concerns beyond the accuracy of AI output
- Helping developers adopt generative AI: Four practical strategies for organizations
What’s next?
- Read the blog post: How to craft an Acceptable Use Policy for gen AI (and look smart doing it).
- Take the DORA quick check to get a baseline of your team’s software delivery performance.