Managing AI dependency: How students are establishing guardrails with AI
by Andrew Harlan, Kenny Ly, Mindy Tsai
guidance from Jamie Benario, Derek DeBellis, Nathen Harvey, Steve Fadden, Harini Sampath, Becky Sohn
Introduction
AI has made it easier than ever for student developers to work efficiently, tackle harder problems, and pursue ambitious projects. But for students earning technical degrees, these new capabilities create genuine tensions around learning. Our research with UC Berkeley students found that while AI made challenging projects feel more accessible, it also sparked anxiety about becoming too dependent on it. Rather than ignoring this discomfort, students have responded by establishing guardrails for their AI usage, treating AI as a support tool rather than a replacement for deep understanding. Both the anxiety and the behavioral response are encouraging signals: students are meeting this technological moment with curiosity and caution, actively working to shield themselves from potential pitfalls rather than chasing efficiency at the cost of learning.
In the first part of this research series (AI as Tutor), we discussed how students are using AI in their learning and coding workflows. In this article, we discuss the concerns that students have about overreliance on AI (many of which mirror those of professional developers) and how they are modulating their AI usage.
This finding emerged from an eight-month mixed-methods research project to understand how UC Berkeley students in Electrical Engineering, Computer Science, Design, and Data Science are using artificial intelligence in their academic workflows. We focused on gathering perspectives from these fields through in-depth interviews and survey responses to provide a lens for understanding how the next generation of developers is navigating AI. Our research began with a simple observation: AI use is no longer an edge case; it is pervasive. Google’s 2025 DORA report found that more than 90% of professionals already use AI in their day-to-day work (State of AI-assisted Software Development 2025). As these tools become increasingly ubiquitous, we wanted to understand not only how student developers are navigating AI’s new capabilities, but also how they are setting guardrails for themselves.
Maintaining skill development
Our interviews revealed that students are actively managing the tension between using AI for efficiency and the fear that overreliance will undermine their skills. They widely recognize AI as fundamental to the future workplace, an essential literacy they must develop. AI also enables them to tackle far more ambitious projects than would previously have been feasible. Yet students simultaneously worry that excessive reliance might erode their foundational skills or prevent them from developing these capabilities in the first place. This concern is further complicated by AI’s ability to amplify their output while obscuring the underlying mechanics, allowing them to build more while potentially understanding less.
Deliberate moderation
Contrary to media narratives depicting students as passive or overly dependent on AI, our qualitative research suggests the opposite: students actively monitor and regulate their AI use with real awareness of its potential risks. They worry that overreliance might diminish their capacity for independent thinking and problem-solving, and they respond by establishing deliberate boundaries. Rather than wholesale adoption or rejection, they calibrate their engagement strategically, leveraging AI for routine tasks while maintaining direct control over conceptually challenging work.
Students described intentionally attempting problems themselves before turning to AI assistance. Their strategies ranged from line-by-line code verification to deliberately alternating between assisted and unassisted problem-solving. This reflects not dependence, but careful negotiation between efficiency and cognitive development.
“Despite knowing AI was allowed, I wanted to go through the friction of learning and failing and having space for creativity.”
“I have actually gone back to hand-coding for certain things, like a for-loop for example.”
Underlying concerns
The students we spoke to expressed genuine anxiety about potential overdependence:
“If AI disappeared, I’d struggle more with figuring out how to solve things on my own.”
These concerns have some empirical support. Research measuring brain activity via EEG during essay writing found that AI users showed weaker cognitive engagement patterns than those using search engines or no tools.¹ Frequent AI users who later wrote without assistance remembered less of their content and felt less ownership over it, a pattern the researchers termed “cognitive debt.”
Yet the picture may be more complicated than it appears. Students now routinely work on problems of greater difficulty than they would have previously attempted, which may naturally produce heightened feelings of uncertainty or unease. This makes it difficult to isolate whether their concerns stem from genuine skill erosion or simply from tackling harder challenges. Either way, the fact that students are asking these questions suggests a healthy vigilance about their own learning.
Strategies for modulated behavior
Students described specific strategies to maintain this balance and avoid overreliance. A common approach was using AI to initiate work rather than complete it, leveraging the tools to overcome creative or technical inertia while retaining control over the final output. This shift reflects a broader transformation in how students see themselves: less as pure developers and more as software managers orchestrating the development process.
“I use AI to generate ideas for a starting point.”
“The starting process is faster for finding info/help when I’m confused. With Google, there isn’t one place I can go to as there’s a lot of websites to visit. With [AI] I can start off by asking all the questions in one place.”
“When we were asked to do a group final project, I told [AI] what kind of project I was interested in and asked if it had any ideas or suggestions.”
“AI can help generate a skeleton outline of what to do and how to start.”
Used this way, AI serves as a powerful launching point without preventing continued skill development. Additionally, as we explored in our first article, AI as Tutor, students leverage AI metacognitively, using it to identify knowledge gaps, clarify confusing concepts, and guide their learning process rather than simply generating answers. This tutoring role complements their approach of using AI as a starting point: the tool helps them understand what they need to learn while they maintain ownership over the actual learning.
Beyond strategic use, students also employed tactics to actively limit their reliance. Some deliberately restricted themselves to free-tier subscriptions to avoid accessing more powerful models:
“I don’t want to pay for AI tools because it could lead me to overuse the models.”
Others reflected more critically on practices like “vibe coding,” a term used to describe relying on AI-generated code without fully understanding its logic. Several participants warned against this tendency, emphasizing the importance of validation and comprehension. One student put it succinctly:
“AI tools can definitely be a good companion to boost developer productivity. However, one needs to be very mindful and not get used to vibe coding. It’s very important to understand and validate the code AI is generating and use it appropriately.”
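What might this validation look like in practice? Below is a minimal sketch in Python of the habit this student describes: before accepting an AI-suggested function, pin down its expected behavior with a few hand-written checks. The function and test cases are illustrative, not drawn from any participant’s actual coursework.

```python
# Suppose an AI assistant suggested this helper for merging
# overlapping [start, end] intervals. Treat it as unverified input.
def merge_intervals(intervals):
    """Merge overlapping or touching [start, end] intervals."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous interval: extend it in place.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# Hand-written checks: small cases whose answers we can work out ourselves.
assert merge_intervals([]) == []
assert merge_intervals([[1, 3], [2, 6], [8, 10]]) == [[1, 6], [8, 10]]
assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]  # touching intervals merge
print("all checks passed")
```

The value here is less the tests themselves than the reasoning they force: to write the expected outputs by hand, the student has to understand what the code should do, which is precisely the step that vibe coding skips.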
Conclusion
Our research with UC Berkeley students reveals three key insights about how they’re navigating AI in their technical education:
- Critical evaluation: AI literacy for students now extends beyond effective prompting. It includes critically evaluating, validating, and debugging AI-generated responses, making verification crucial for learning.
- Efficiency vs. education: Students described managing the tension between using AI for academic efficiency and their fear of forgoing fundamental critical thinking skills. This tension shapes the usage, timing, and extent of AI integration in their workflows.
- Resisting overdependence: Students employ deliberate strategies to prevent overreliance, from restricting access to powerful models to alternating between assisted and unassisted problem-solving. These self-imposed boundaries reflect an active effort to preserve their capacity for independent thinking even as AI becomes more capable.
These findings paint a more nuanced picture than media narratives of passive dependence or uncritical adoption. Students are acutely aware of the risks AI poses to their learning, yet they also recognize the benefits and even necessity of using these tools. They understand the tension between efficiency and education, between building faster and understanding deeper. The anxiety they express about overreliance is in itself a form of metacognitive awareness: a recognition that the path of least resistance may not be the path of greatest learning. This combination of adoption and caution is encouraging. Students are not blindly embracing these tools; they are actively questioning their impact and adjusting their behavior accordingly, using AI’s capabilities without surrendering the learning process itself.
1. Kosmyna, Nataliya, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv, 10 June 2025, doi:10.48550/arXiv.2506.08872. Accessed 28 Jan. 2026.