The survey, conducted by two economists, found that students use AI to a much greater extent than U.S. adults do, and that they are doing so responsibly. The data reveal a distinction between two types of AI use: augmentation, which enhances learning, and automation, which produces finished work with minimal effort. Students, the study found, are far more likely to use AI for augmentation.
This finding challenges the alarmist narrative that AI has “unraveled the entire academic project.” Instead, it suggests that AI use is a widespread trend that institutions should approach with a focus on how the technology is being used, rather than whether it should be banned outright.
AI as an On-Demand Tutor
For most students, generative AI has become a valuable learning supplement. The most common use was for explaining concepts, with students describing the technology as an “on-demand tutor” that provides immediate assistance, particularly late at night when a professor’s office hours aren’t available. Other popular augmentation tasks included summarizing readings, proofreading, and designing practice questions. The study found that 61% of students who use AI do so for these types of beneficial, learning-focused tasks.
The researchers also checked their survey responses against actual AI conversation logs from the company Anthropic, which confirmed that “technical explanations,” essay editing, and summarizing materials were major student use cases, corroborating the self-reported data.
Responsible Use and Nuanced Policy
While the study found that a significant portion of students (42%) also use AI for automation tasks, the data reveal a nuanced picture. Students reported applying judgment to these uses, reserving them for low-stakes work like formatting bibliographies or drafting routine emails, or for high-pressure periods like exam week, rather than defaulting to automation for meaningful coursework.
This insight has major implications for how institutions should craft policy. The authors argue that extreme policies like blanket bans or completely unrestricted use carry risks. Bans could disproportionately harm students who would benefit most from AI’s tutoring functions, while unrestricted use could enable harmful automation practices. The study’s findings suggest that a more effective approach is to help students learn how to distinguish between beneficial and harmful uses of AI, fostering a culture of responsible technology use.