I told myself I knew the signs: the perfect grammar paired with an awkward, disjointed structure, the missing citations, the bizarre subheadings, the vague, robotic language. The tendency, as another TA put it to me recently, to “use all the right words without actually saying anything.” But those are only the clumsy students; used carefully, generative AI is now virtually undetectable in writing.
It’s also completely ubiquitous. Like many people who care about education, I’ve been reeling this week since reading James D. Walsh’s piece in New York Magazine, “Everyone Is Cheating Their Way Through College.” The stats Walsh includes are scary: the proportion of students using AI to help them through their studies is already as high as 90 percent.
A symptom of ubiquity is that a technology starts to feel like a necessary and unquestionable part of life. Describing the experiences of one student who reported using AI in all her essays, Walsh writes:
“I really like writing,” she said, sounding strangely nostalgic for her high-school English class—the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be?” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”
[…]
Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just—now that we rely on it, we can’t really imagine living without it.”
It seems illogical not to use AI, when students have absorbed the message that high grades matter more than anything and when everyone else is using it too—there’s definitely a social contagion effect at work. Once you begin cutting corners, it’s impossible to imagine voluntarily taking the long way around just for the sake of the antiquated “beauty” of thinking for yourself. And soon enough, the long way around is no longer an option, because technologies—all technologies, digital and otherwise—rewire our brains over time. As Walsh notes, studies are already linking AI usage to declining critical thinking skills. It’s one thing to make a strategic choice not to exercise a particular faculty because there’s a more efficient option available, and quite another to lose that faculty, or parts of it, altogether.
This is what worries me about the way both students and professors talk about AI as a “tool” that just needs to be harnessed responsibly, as if you can separate the instrumental benefits from the basic mind-altering character of the technology itself. We’ve been having the same misguided conversation about the internet as a “tool” for years. It’s certainly an appealing argument: AI is coming whether we like it or not, but we can get ahead by learning how to use it to our advantage and tailoring our “human capital” to what the new world of work is going to demand of us. We’ll integrate the tool and become robotlike in our efficiency, and this, far from diminishing our value, will make us more productive and less dispensable.
I think this logic is partially symptomatic of a culture that believes being good at working the system makes you master of it, and that adapting to disruption is a virtue. It’s a dehumanizing culture, a culture of precarity, a culture of resignation—not to mention a culture that has abandoned workers. But this logic also depends on a kind of wilful ignorance about the nature of the technology and about the scale and scope of disruption it’s going to cause.
It isn’t just that AI is reshaping our minds. Consider this: eventually, humans will be written out of the equation altogether. Walsh mentions that AI’s potential to take over the task of grading students’ assignments, in addition to writing them, threatens to reduce “the entire academic exercise to a conversation between two robots—or maybe even just one.” But the problem is more fundamental than that. From the AI’s perspective, generating knowledge just means infinitely recombining existing data. “New” knowledge no longer means original knowledge. And what happens when the data being infinitely recombined—the content of the assignments, to put it one way—are themselves artificial creations? Already, more than half of online text is generated or translated by AI. The process becomes perfectly self-sustaining: to borrow words from a professor in my department, AI threatens to collapse knowledge production itself into an artificial mind endlessly “eating its own shit.” This sickening loop is the reason I’ve been writing about generative AI as a kind of end of history.
Will “being good at ChatGPT” save any of us in this scenario—our jobs or our souls?
There’s another version of the argument for embracing generative AI that I want to mention here. Last month, D. Graham Burnett asked in The New Yorker, “Will the Humanities Survive Artificial Intelligence?” Burnett acknowledges that a lot of scholarly work is about to become irrelevant, now that AI can instantly call up the information and analysis that human researchers writing books spend years labouring over, all tailored in real time to the user’s exact preferences. But Burnett reframes this disruption as an opportunity to pare the humanities back to their classical state:
[F]actory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, and not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great—and where science and engineering are concerned it’s pretty much the whole point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: How to live? What to do? How to face death?
The answers to those questions aren’t out there in the world, waiting to be discovered. They aren’t resolved by “knowledge production.” They are the work of being, not knowing—and knowing alone is utterly unequal to the task.
For the past seventy years or so, the university humanities have largely lost sight of this core truth. Seduced by the rising prestige of the sciences—on campus and in the culture—humanists reshaped their work to mimic scientific inquiry. We have produced abundant knowledge about texts and artifacts, but in doing so mostly abandoned the deeper questions of being which give such work its meaning.
Burnett points out that knowledge is not the same as understanding, and the latter is a uniquely human function. By taking over the rote work of knowledge production, he argues, AI will unshackle the humanistic quest for meaning. “This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise,” Burnett writes. “These systems have the power to return us to ourselves in new ways.” According to Burnett, what this means practically is that first, the purpose of scholarship will change as we shift away from measuring productivity by the volume of publications; and second, the nature of teaching will change because students can no longer be made to read or write if they aren’t motivated to do so.
On the surface, this is an attractive prospect. There’s no doubt that the “publish or perish” dynamic is constraining the academic imagination and pushing certain forms of knowledge to the margins. It’s also true that measuring success by the volume of outputs rather than the quality of outcomes is a classic sign of a broken incentive structure. Academia in general and the humanities in particular do have a “knowledge production” problem.
It’s also heartening to imagine a university where every student is there because they want to be—because they’re intrinsically motivated to learn, to ask the deepest questions with genuine curiosity, to grow. I know how defeating it is to try to teach when most students are only enrolled because getting the credit is a stepping stone to some other goal. We all want a room full of students who find our subject as inherently valuable as we do and who have the passion and skill to excel on their own merit.
The problem with this vision is that it’s exclusionary. It’s a return to a more classical academy: one oriented around the love of wisdom and insulated from economic pressures, yes, but also one that has no place for most of society. In today’s context, the price we’d pay for embracing the AI-induced purification of higher ed is that many academic jobs would disappear in an already declining market, and many students would be filtered out, even as a college or university degree becomes increasingly necessary to make it in the workforce.
I absolutely sympathize with these arguments. I’ve joked to friends that sometimes when I grade papers or read the very mediocre content that passes journal publication standards, an evil little voice in my brain starts chanting make the university elitist again! But I try to squash that voice because I recognize that there’s no going back without leaving a lot of people behind; and besides, we can’t just press reset on the economic transformations that have pushed more and more students to pursue higher ed.
I have students who hand in both typed essays and handwritten exams. Sometimes I get a sophisticated, grammatically flawless essay submitted online by a student I know can barely string a sentence together when forced to write by hand. I know the essay was written with AI, but I can’t penalize them because there’s no proof. Which should get the higher grade: success by plagiarism or a poor but honest effort? No one seems to have an answer.
And does anything change if I know the student has a cognitive disability or a mental health issue, or speaks English as a second language? For them, AI might be an important form of accommodation, helping students who would otherwise be at a disadvantage participate on a more equal footing with their peers. I don’t delude myself into thinking that anyone excels purely “on their own merit.” I’ve always worked hard in school, but I was set up to succeed because, to put it bluntly, I’m neurotypical and won the lottery of birth. How can I disdain a technology that could help break down barriers for others, even if I believe it’s lowering the overall quality of education in other ways? There’s always another voice in my brain chanting, no less forcefully than the first, democratize education!
Beyond the equity issues, how can I blame students who cheat because they don’t want to be there but need to get a degree because of structures that are beyond their control? I have high standards, but I find myself grading easy because I don’t want to ruin anyone’s future. I wish education weren’t instrumental, but my wishing doesn’t change the reality that it is. And besides, penalizing the students won’t make them care, and it certainly won’t put the AI genie back in the bottle.
The best I feel I can do is try to breathe some oxygen onto the little spark of curiosity and self-confidence that all of them have inside, and to lead them by example to appreciate the value of critical thinking and the written word. I am trying to save their souls one by one. In this sense, I absolutely agree with Burnett’s idea that the essence of education has to be a kind of non-coercive guidance.
This is, however, not the best that we can do collectively. It isn’t enough for members of the university to debate academic honesty and harmonize course policies, though this matters too. We need to confront the deeper identity crisis in higher ed, one that generative AI is throwing into especially stark relief but did not create: nothing less than a battle over the soul of the university.
In many ways, higher ed still appeals to the classical ideal of education as an end in itself: Walsh cites, for example, Columbia University’s lofty description of its own curriculum as “intellectually expansive” and “personally transformative.” At the same time, universities need money, and many of the people who control the money want higher ed to be a training ground for the workforce, shaped in accordance with the practical demands of society or the nation.
The university is currently failing to do either of these things well. On the one hand, it’s become a sort of grotesque undergraduate factory where students are numbers and commodities, and where well-rounded education is being subordinated to economic demands—including through the systematic dismantling of the humanities. On the other hand, there’s a widespread feeling among students that their classes are irrelevant to their lives and aren’t preparing them well for the future. Mass cheating with AI is the symptom here, not the cause—although it threatens to create a circular relationship. Both the intrinsic and the instrumental value of higher ed are in jeopardy, and neither rejecting AI nor embracing it will get to the heart of this crisis.
I don’t have the answers, although I do believe that neither ideal is sustainable on its own. The university as an insulated arena for intellectual and personal self-development is a humane ideal, but it’s one made for a bygone era and tends toward exclusion. The university as a practical training ground is an ideal with the potential to lift people up and advance the public good—at least, it would be if it hadn’t been so thoroughly colonized by financial interests—but taken to the extreme, it represents a profound loss in terms of our collective capacity for meaning-making, cultural creation, and free expression. I think we should be striving to combine the best of the two.
Whatever happens, the university will never be the same. Although the vision for its future is contested, I want to echo Burnett’s idea that we can and should reframe the present crisis as an opportunity to reimagine what higher education could be. Generative AI is an unprecedented disruption, but this crisis has been a long time in the making. If we’re being honest with ourselves, we need to treat AI less as a random shock to the system that needs to be contained, and more as an inflection point—a moment of reckoning in the history of the university, destabilizing but potentially transformative.
Burnett takes a utopian view of automation’s emancipatory power, and in theory he has a point. AI is probably the most significant labour-saving innovation in history, and this could potentially be very good for a lot of people in a lot of professions. But Burnett doesn’t acknowledge that, uh, well, we haven’t exactly socialized the means of production here. In fact, we’re letting AI companies run rampant with basically no concern for the negative externalities and basically no provisions to cushion the blow for workers. It doesn’t need to be this way: we can step up AI governance efforts and expand the social safety net.