There are few words as immediately emotionally resonant as “cheater”.
Think about it: what’s your gut reaction when you hear of someone being a “cheater”, or someone “cheating”? Whether in sport or life, there’s an immediate and universal negative valence. I don’t think I’ve ever heard anyone excuse “cheating” before. Maybe they’ll explain it in a sympathetic context, or balance the negative with some other positive characteristics. But the act itself is still looked down upon.
The psychological impact of a word, and its ability to color our perceptions, matters a great deal. And so any discussion of AI in school and learning that immediately categorizes any usage of AI as “cheating” can create some pretty strong reactions.
What does it mean to “cheat” with AI at school, on homework, or in essay writing? First there’s the question of what even constitutes AI. Is spellcheck AI? Is Grammarly, when used for error detection? What about when a student runs a Google search and gets useful information from the new AI-generated overview at the top of the results? Is it cheating to use Perplexity, but not Google?
OK, well, let’s just say that cheating with AI is using an AI chatbot to assist in writing an essay or answering a homework question. But what does “assist” mean here? Does it matter if the chatbot simply gives advice rather than writing the answer? Is that same advice still cheating if it comes from a privately employed human tutor?
There’s a lot of nuance in what might be considered “cheating with AI”. But when we use such a strong word, nuance tends to go out the window. We’ve already planted a deeply emotional flag.
We’re all experiencing the birth pangs of a major tectonic shift in how we live and work, thanks to the explosion of LLM-powered AI capabilities. But perhaps no population feels those tremors more directly than college students. They face incredible uncertainty about their futures: which careers will even exist in five years? What can students do to protect their future employability? Certainly learning how to capably use AI fits into that job preparation. But must they become a “cheater” to learn how to use AI to do their work effectively?
It’s easy to make assumptions about how kids operate in the world. It’s harder to actually, well, talk to them.
This week, I had the opportunity to visit an AI Ethics class at St Olaf College taught by Professor Mike Fuerstein. I met Professor Fuerstein through a mutual friend while exploring possible directions for my new startup. I immediately resonated with his thoughtful perspective on AI in learning, guided both by first-principles thinking and by a realistic understanding that AI isn’t going anywhere and must be wrestled with rather than avoided. I asked Prof Fuerstein if I could chat with his students, and he graciously invited me to his class.
I joined the class of around 20 students remotely, in a Zoom setup designed more for lectures than for conversation: I looked down at the students’ circle of chairs as if suspended from the ceiling. I put aside both the vertigo and the feeling of being Tom Cruise in Mission: Impossible, and concentrated on the students’ perspectives. Not just their words, but their emotions.
I left with a deep respect for the challenges these students are facing. They’ve worked hard to get where they are. They’re attending St Olaf because they want to learn and grow. Yet they also recognize that college is not just about, or even primarily about, their growth. Their performance in undergrad will meaningfully shape their careers, and from their perspective, even a hundredth of a grade point matters.
These students know that AI is already a required tool for their future careers, and they want to know how to use it effectively. But as with everyone else, their comfort and skill in directing a chatbot varied widely. Some students could expertly steer AI to perform their desired actions; others struggled to get it to stop repeating itself and provide more value.
And the value for these students wasn’t to “avoid work”. It’s very easy to assume students just want to lazily ignore homework and play more Fortnite, or whatever stereotype we have in our heads. But that’s not what I heard at all. These students wanted to improve their scores, sure. But they didn’t want that improvement to come at the expense of their learning. They wanted to use AI to produce better output and better learning outcomes.
Yet getting AI to produce strong output, let alone output that contributes to a student’s learning, is not an easy feat.
One writer who has written a great deal about the intersection of AI and education shared, in a recent post, a study with some sobering data on the impact of AI. He cited a study from Turkey “where some students were given access to GPT-4 to help with homework, either through the standard ChatGPT interface (no prompt engineering) or using ChatGPT with a tutor prompt.” He summarized the outcomes:

“Student homework scores shot up [with AI], but the use of unprompted standard ChatGPT to help with homework undermined learning by acting like a crutch. Even though students thought they learned a lot from using ChatGPT, they actually learned less - scoring 17% worse on their final exam… [But] giving students a GPT with a basic tutor prompt for ChatGPT, instead of having them use ChatGPT on their own, boosted homework scores without lowering final exam grades.”
So it’s possible to boost homework scores without reducing learning outcomes. Improving homework scores while also improving learning outcomes still seems like a reach. But the default – using ChatGPT without prompt assistance – had a direct negative impact on learning.
The students I met at St Olaf weren’t interested in using AI to “cheat”. They acknowledged that some students certainly would use AI to just do their work entirely, and thoughtfully highlighted the concerns for middle and high school, where students may have less intrinsic motivation for a particular class.
Anytime we apply a blanket word to describe a behavior, we lose nuance. In this case, I worry about the damage we create by automatically labeling any AI use as cheating. A working definition of academic cheating might be “avoiding work to get a better grade”, perhaps with the modification of “using ill-gotten information to do so”. By this definition, stealing the answer card for a multiple choice exam is clearly cheating. But using AI to improve your work and your learning? Sure, you could be described as cutting corners, but another term for that is efficiency. And if the time and energy saved by this efficiency is put back into learning, then I’m not sure how this fits any definition of cheating.
So does that make any usage of AI in the classroom OK? Of course not. But instead of casually writing off any use of AI as cheating, and students themselves as lazy, I’d encourage us all to meet students where they are. Listen to their concerns and ambitions. Understand how we can empower them to use AI to their learning benefit, not just to improve their GPA.
The emergence of AI in the classroom produces many meaningful challenges to the entire edifice of learning. We’ve long known of the uncomfortable relationship between colleges as bastions of higher learning and “credentialing factories” for the modern workforce. AI perhaps makes this relationship untenable: colleges may eventually need to choose one identity to the exclusion of the other. But in the meantime, students’ outcomes (both career and learning) hang in the balance. Do we let them try to navigate these monumental challenges on their own? Or do we build the tools and practices that help students use AI in service of their learning?
There’s energy in Silicon Valley to “disrupt” education with the new powers of AI. And a perfect personalized tutor in every pocket, regardless of income or status, is a compelling and inspiring vision. But learning is relational. There’s a reason movies like Dead Poets Society put mentor-teachers on a pedestal: strongly relational teaching makes lessons real, brings them to life and resonance in a way that the material alone never can. The modern idea of the university goes back a thousand years, and early exemplars of this model of communities of learning go back another millennium and a half before that. While AI will certainly bring change, and education must work out the balance of priorities between learning as job prep and learning “for its own sake”, universities as institutions aren’t going anywhere. They’ll evolve, sure, but they won’t die. Nor should they: at their best, universities contribute deeply to collective human growth and flourishing.
I have no idea how all this will play out. I don’t know how any of AI will play out. I shift between moments of deep optimism and incredulous… well, not despair, that’s a strong word. But certainly a degree of fear. Not because I’m scared of AI, at all. But because I’m scared of how AI could shift the sense of meaning that’s so important to human identity, to living a meaningful life.
But these kids? They fill me with hope. They’re facing their future with clear eyes. They see the turbulence ahead, and are dealing with unprecedented challenges of trying to balance “good work” and “learning” with fears of the accusation of cheating. But they’re not despairing. They’re not even angry. They’re asking questions, and seeking better outcomes, and maintaining open-minded optimism in a way that only kids can. They inspire me to do the same: to roll up my sleeves and start building bridges to a better future. Because we’re on the precipice of a new world, and for those of us lucky enough to be conversant in technology, it’s our responsibility to build the interfaces that take everyone into the future together.
This is a great post Justin, on many levels. I especially resonate with the second-to-last paragraph. My students are using LLMs and I try to show them responsible use, including solid educational prompts. I'm also teaching around 1000 professors and admin staff at the university. This is where the mixed reactions are - I have to remind them time and time again that we are preparing students for their work... I am concerned about many factors, most notably around critical thinking - but then I see executives who admit to becoming lazy and over-reliant on AI!
I agree. Critical thinking is the most important skill to cultivate - it’s the only mechanism for understanding when and how an LLM goes wrong. And that same critical thinking enables smarter prompting in the first place. The risk is that it’s so easy to turn your brain off when iterating with AI.
Have you seen any changes in how professors and staff have been reacting in the last six months?