While I have long known that the current higher education system is both unsustainable and unnecessary, I never suspected that AI, rather than debt or the absence of men, would put the final nail in the university coffin:
While professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent. It doesn’t help that since ChatGPT’s launch, AI’s capacity to write human-sounding essays has only gotten better…
There are, of course, plenty of simple ways to fool both professors and detectors. After using AI to produce an essay, students can always rewrite it in their own voice or add typos. Or they can ask AI to do that for them: One student on TikTok said her preferred prompt is “Write it as a college freshman who is a li’l dumb.” Students can also launder AI-generated paragraphs through other AIs, some of which advertise the “authenticity” of their outputs or allow students to upload their past essays to train the AI in their voice. “They’re really good at manipulating the systems. You put a prompt in ChatGPT, then put the output into another AI system, then put it into another AI system. At that point, if you put it into an AI-detection system, it decreases the percentage of AI used every time,” said Eric, a sophomore at Stanford.
Most professors have come to the conclusion that stopping rampant AI abuse would require more than simply policing individual cases and would likely mean overhauling the education system to consider students more holistically. “Cheating correlates with mental health, well-being, sleep exhaustion, anxiety, depression, belonging,” said Denise Pope, a senior lecturer at Stanford and one of the world’s leading student-engagement researchers.
Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.
Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”
By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”
The “true attempt at a paper” policy ruined Williams’s grading scale. If a solid but obviously AI-written paper earned a B, what should he give a student who did their own work but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.
“The students kind of recognize that the system is broken and that there’s not really a point in doing this.”
The students are right. There is no point in doing this, because the only reason they’re doing it is to acquire a golden ticket to higher income and higher social status that increasingly no longer exists.