Using AI as a tutor is like keeping a big jar of cookies in the kitchen cabinet, says Wharton professor Hamsa Bastani.
“You tell yourself that you’re just going to eat one, but it’s a slippery slope,” she said. “Self-regulation is hard, even when you know something isn’t good for you.”
This insight drives Bastani’s latest research on a critical question in AI-assisted learning: Why do students over-rely on AI help even when they understand it hurts their long-term development? In a three-month study with chess clubs, she and her co-authors found that students with on-demand access to an AI tutor achieved less than half the performance gains of those with access only to controlled, automatic assistance (30% vs. 64%).
But the study goes beyond simply showing that unrestricted AI access is harmful. It reveals the mechanism behind this harm, examines how self-regulation breaks down, and identifies which student characteristics — such as skill level and motivation — influence AI over-reliance.
The Self-Regulation Paradox
The paper, “Self-Regulated AI Use Hinders Long-Term Learning,” is co-authored with Stefanos Poulidis, a doctoral student in decision sciences at INSEAD, and Osbert Bastani, a computer and information science professor at Penn. They designed an AI tutor specifically for the experiment with the chess clubs. They chose chess because it is a sequential decision-making setting that allows them to measure both students’ immediate decisions and long-term skill development with precision.
The choice was also practical: Chess provides a stable learning environment, unlike rapidly evolving AI tools.
“You can’t study long-term effects with ChatGPT or other AI because the ground is literally shifting under our feet — the models themselves keep changing, so chess is nice for that,” Hamsa Bastani said.
“Self-regulation is hard, even when you know something isn’t good for you.” — Hamsa Bastani
More than 200 students trained for three months, randomly assigned to one of two conditions. The system-regulated group received automatic tips at strategic moments during their games — AI assistance deployed by the system, not by student choice. The self-regulated group received those same automatic tips but could also request additional help at any time, including “move reveal” tips that showed the optimal next move.
Here’s where the paradox emerged: Students started with restraint, using on-demand help sparingly. But usage crept upward over time. By the end of three months, self-regulated students were requesting move-reveal tips every three to four moves — essentially outsourcing their decision-making to the AI rather than thinking independently.
“They knew it wasn’t good for them,” Hamsa Bastani said. “In follow-up interviews, students told us they understood overusing AI help would hurt their learning. One said, ‘Using the option won’t win me games against humans later on.’ But in the moment, when faced with a difficult position, they clicked anyway.”
The performance gap wasn’t temporary. Follow-up testing weeks after training ended showed that differences persisted — indicating that the learning deficit was real, lasting, and not just a matter of temporary dependence.
Why On-Demand AI Hurts Learning
The study identifies the specific mechanism through which on-demand AI assistance harms learning: It reduces productive struggle — the cognitively demanding work of grappling with difficult problems that drives skill development.
When students could request help whenever they wanted, they increasingly opted out of this productive struggle. Instead of working through challenging positions, analyzing alternatives, and learning from mistakes, they took the shortcut. Each time they clicked for a move reveal, they missed an opportunity for the deep processing that builds expertise.
“Productive struggle is about working at the edge of your ability — tasks that are challenging but achievable,” Hamsa Bastani explained. “AI assistance that makes tasks too easy pushes you out of that learning zone. You’re no longer practicing at the level where skill development happens.”
The concept connects to what educational psychologists call the Zone of Proximal Development (ZPD) — the sweet spot where learners work just beyond their current capability but with appropriate support. The study shows that when AI assistance is available on-demand, students use it in ways that place them outside their ZPD, making tasks too easy to promote learning.
Beyond performance metrics, students in the self-regulated condition reported lower engagement. Many said the experience became less enjoyable. “I want to think for myself, not use the button,” one student said. The very tool meant to help them learn was undermining their intrinsic motivation.
“AI assistance that makes tasks too easy pushes you out of that learning zone.” — Hamsa Bastani
Who Over-Relies? Skill and Motivation Both Matter
One of the study’s most surprising findings challenges assumptions about which students need support in managing AI tools. The researchers found that over-reliance wasn’t limited to struggling students — even high-skilled players requested excessive help when given unrestricted access.
“There’s a common belief that if you just teach students effective learning strategies, the high performers will self-regulate successfully,” Hamsa Bastani said. “But skill alone doesn’t ensure good self-regulation. Even students who were performing well fell into the pattern of over-requesting help.”
However, motivation did make a difference. Students with higher intrinsic motivation — those who were learning chess because they genuinely enjoyed it, not for external rewards — showed somewhat better self-regulation. But even these motivated learners still over-relied on AI assistance compared to the system-regulated group.
The implications are clear: Individual characteristics like skill and motivation influence AI use, but they’re not sufficient to prevent over-reliance. System-level design matters for all learners.
Designing Better AI Learning Systems
The research points toward concrete design principles for AI-assisted learning platforms. Unlike previous work that simply argues for “guardrails,” this study provides evidence for specific approaches based on understanding the mechanisms of harm.
First, rate-limiting or introducing delays before providing help can preserve productive struggle. If students must wait 30 seconds before seeing a hint, or can only request help a certain number of times per session, they’re more likely to attempt problems independently first.
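This kind of rate-limiting is straightforward to implement. A minimal sketch of the idea, with a cooldown delay and a per-session quota on hint requests (the `HintGate` class, its parameter names, and the specific thresholds are illustrative assumptions, not the study's actual system):

```python
import time

class HintGate:
    """Gate on-demand AI hints behind a cooldown delay and a per-session quota.

    Illustrative sketch only: the 30-second cooldown and 5-hint quota are
    example values, not parameters reported in the study.
    """

    def __init__(self, cooldown_seconds=30, max_hints_per_session=5):
        self.cooldown = cooldown_seconds
        self.quota = max_hints_per_session
        self.used = 0
        self.last_granted = None  # timestamp of the last granted hint

    def request_hint(self, now=None):
        """Return True if a hint may be shown now, False otherwise."""
        now = time.monotonic() if now is None else now
        if self.used >= self.quota:
            return False  # per-session quota exhausted
        if self.last_granted is not None and now - self.last_granted < self.cooldown:
            return False  # still inside the cooldown window; keep struggling
        self.used += 1
        self.last_granted = now
        return True
```

The design forces an independent attempt first: a denied request leaves the student working on the position rather than immediately outsourcing the move.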
Second, AI systems should adapt to individual learners — not just their skill level but also their motivation. A student with high intrinsic motivation might handle slightly more autonomy, while others benefit from tighter constraints. The key insight is that even motivated, skilled learners need some level of system regulation.
Third, designers should consider providing help only within each student’s ZPD. AI assistance on tasks that are too easy (below the ZPD) or impossibly difficult (above the ZPD) doesn’t promote learning. The challenge is calibrating this zone for each individual — a task that requires monitoring both performance and engagement over time.
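One simple way to approximate this calibration is to track a rolling success rate and only offer help when the learner appears to be inside the target band. The sketch below is a hypothetical heuristic, not the study's method; the window size and band thresholds are invented for illustration:

```python
from collections import deque

class ZPDCalibrator:
    """Heuristic ZPD check: track a rolling success rate over recent tasks
    and classify whether the learner seems challenged but not overwhelmed.

    Illustrative assumptions: a 20-task window and a 40-80% success band.
    """

    def __init__(self, window=20, low=0.4, high=0.8):
        self.results = deque(maxlen=window)  # recent task outcomes (True/False)
        self.low = low
        self.high = high

    def record(self, success):
        """Log whether the learner solved the latest task unaided."""
        self.results.append(bool(success))

    def success_rate(self):
        if not self.results:
            return None
        return sum(self.results) / len(self.results)

    def zone(self):
        """Return 'too_easy', 'in_zpd', 'too_hard', or None if no data yet."""
        rate = self.success_rate()
        if rate is None:
            return None
        if rate > self.high:
            return "too_easy"   # raise difficulty instead of offering help
        if rate < self.low:
            return "too_hard"   # scaffold down rather than reveal answers
        return "in_zpd"         # help here can support productive struggle
```

A system built on this signal would withhold assistance when tasks are already easy and adjust difficulty, rather than reveal answers, when they are far too hard.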
“Skill alone doesn’t ensure good self-regulation.” — Hamsa Bastani
“The education technology community needs to move beyond just making powerful AI tutors,” Hamsa Bastani said. “We need to build them in ways that preserve the struggle necessary for learning. That means system-level constraints.”
The longitudinal nature of the study — tracking more than 200 students over three months with persistent effects measured weeks later — provides unusually robust evidence in a field where short-term experiments are the norm. And because the research examines not just whether AI access matters, but how and for whom, it offers actionable insights for educators and developers alike.
“What worries me is that these models are already undermining human learning,” she said. “We have a responsibility to build these AI tools in a way that supports, rather than undermines, humans thriving. This research shows us a path to achieving that goal.”