Lately, it seems everyone in higher education has been handed the same essay prompt: describe how AI is destroying your classroom. In The New Yorker, literature professor Hua Hsu considers the demise of the term paper while shadowing an undergraduate who systematically forges his coursework. Writing for this publication, Troy Jollimore mourns the link between writing and thinking, as students use the technology to deprive themselves of the critical spirit philosophy has to offer. ChatGPT’s style is spreading like an infection: symptoms include em dashes, three-point lists, the verb “delves,” and sentences using “not only.”
I am among the educators reverting to old-school methods in an effort to exclude robots from the room. This approach seems logical to me; I teach literature, so, surely, I need to ensure my students do their own reading and writing. In Quebec’s junior college system, where I have four hours a week to play with, I’ve implemented more reading quizzes. My students now write most of their essays in class by hand. When I taught a university course this past spring, I assigned oral defences for the final paper, thinking students would be less likely to fabricate work they needed to justify to my face. Take-home assignments come with a plea about the importance of developing one’s own voice alongside new grading criteria that let me give a zero for a single hallucinated quotation.
As with any overhaul triggered by despair, I’m worried about what I might be sacrificing: the presumption of trust in my charges, the deep reflection that comes from long-form writing, and the opportunity to revise at one’s own pace. I’m also acutely aware that another reaction is possible. A mushrooming class of pedagogy influencers on TikTok, LinkedIn, and X insists that generative AI can “revolutionize” my teaching. Tech companies have adopted the tactics of drug dealers, handing out free trials in hope of getting students hooked.
But I can also find counterpoints to my approach just by walking across the hall. The AI enthusiasts whom I cannot dismiss, and who are most likely to convince me to reconsider my methods, are my own colleagues.
Stéphane Paquet is in the English department with me at Champlain College Saint-Lambert, but his reaction to the past few years could not be more different. “The whole AI thing, in a way, saved my interest in my career,” he told me over coffee in Montreal’s Rosemont neighbourhood.
Paquet has a long-standing interest in technology: he taught high school computer science before switching to English literature and wrote his MA thesis on scientific discourse and Romantic poetry. Then he settled into his teaching routine, refining and repeating courses on graphic novels, short stories, and essay composition. By the time ChatGPT emerged, he needed some kind of challenge; otherwise, “Fifteen years felt like a long time before retirement.”
Over the past two years, Paquet has become one of the faces of AI at our college. After teaching himself the tools—a gratifying if interminable process—he launched a series of workshops to help faculty get their bearings. An introductory session explained the basics of large language models, their strengths, and their limitations. Other workshops focused on rethinking assignments to either include or resist ChatGPT and on AI’s ability to provide personalized feedback templates.
By the second year, however, teacher interest dropped; Paquet suspects many of us had made enough adjustments to survive this new era without engaging the technology, either because of ethical objections or because revamping our pedagogy was too time-consuming. Now he focuses on training students, who can earn an AI certificate by attending his sessions.
I’ll admit that, when I first heard Paquet was teaching teens how to best prompt ChatGPT, I saw him as a traitor. But he had conducted a poll revealing that some 87 percent of our students were using AI on a weekly basis anyway, and, based on what he’s seen online and in his classrooms, they’re doing it largely without our guidance, using tips from TikTok and YouTube. “It’s not healthy,” he argued, “to have this kind of clash between the way the students learn in the class and the way they learn outside the class.”
His workshops are also designed to pre-empt reactions like mine. To build AI literacy, he demonstrates how algorithms can reproduce bias and fabricate results. When it comes to coursework, he details the most unobjectionable ways of deploying these tools, like training a chatbot to reorganize your notes or quiz you in preparation for an exam.
The goal is to increase transparency and reduce plagiarism anxiety by focusing on what is obviously permitted. In his own classes, he shows students how AI can illustrate comic books for a creative assignment and revise essays that they have already drafted by hand.
Still, buy-in isn’t universal. While many adored the image generators, far fewer completed the editing exercise when he made it optional: “They saw it as more work.”
To me, that gap hints at a basic truth about how students treat AI—not as an aid for thinking, but as a shortcut. If it isn’t saving them time, why bother? One might argue the same logic applies in the workforce. At Concordia University, where I teach part time, Maggie McDonnell coordinates the composition and the professional writing programs, prioritizing AI integration in the latter. “If I’m going to become a technical writer and go out there into the ‘real world,’ I’m using AI,” she explained to me.
She therefore promotes AI as an efficiency tool, all while stressing the importance of refining the output. For a group project on producing a technical manual, she asks students to use AI for brainstorming, outlining, and image generation, then has them edit and reflect on the results. Any group that wants to do the assignment on their own needs to submit a short essay justifying their choice.
McDonnell’s openness to the technology extends to her own work life. She finds AI to be a “phenomenally effective tool” for lesson planning when combined with her pre-existing teaching expertise. (Paquet, likewise, uses ChatGPT for coding and organizing notes.)
In the composition program, by contrast, McDonnell discourages AI use, not because of philosophical objections but because of the learning objectives. These courses are meant to develop core reading and writing skills, moving step by step through the process of drafting and revision. McDonnell advises instructors to explain the goals of the class so that participants understand why they should avoid AI. She also recommends learning student writing styles early: students are less likely to cheat if they know their professor can spot their voice.
But she pushes back against the impulse to move all assignments into class. “In terms of accessibility and in terms of your own teaching,” she says, “it becomes problematic.” Students with learning disabilities may need accommodations, such as extra writing time or the use of a computer, and they risk being singled out and stigmatized. And for writing-intensive courses that meet for only twelve weeks a semester, it’s simply impractical to sacrifice so much instructional time to watch students as they work.
There is also the cost of appearing obsolete. Mary Towers, who works at the McGill Writing Centre, describes herself as “techno-cautious.” While she would love for her students not to touch AI, she feels that banning it would strike the wrong note. “In some sense,” she tells me, “I’m undermining my own credibility because I think they will see me as an ancient, outdated professor and might not take valuable advice that I have to offer them.”
Instead, she gives students concrete examples of how AI tools can help improve their writing, such as asking for advice on a troublesome sentence or paragraph and deciding what suggestions to adopt. By contrast, having AI edit the paper is unacceptable for Towers, as is having it outright generate a first draft. “My goal here is to get them making conscientious choices as writers, not just assuming that the tech can do the work or the thinking for them.”
In all cases, she stresses transparency, requiring students to file a report detailing what software was used and which prompts were given, along with before-and-after versions of the text. She also asks students to submit annotated versions of the sources they cite as a way of pushing them to do the research properly rather than relying on AI summaries. Sifting through so much documentation significantly increases her workload but gives her a better sense of when something really is plagiarized.
In these conversations, I realized our approaches to AI reflected our relationships with technology in general. The daughter of a physicist and an engineer, McDonnell happily declares herself an early adopter: “I have always felt like if something is new, it isn’t by default bad or scary.”
Meanwhile, I fought having a cellphone for years, and a smartphone for years after that. Now I’m as dependent on Google Maps as anyone, and my rage at QR code menus is shifting to acquiescence. The pattern suggests I’ll cave once social norms have shifted too far to resist.
Yet I can’t help but think that, for all the comparisons between AI and calculators, we still teach children to add and subtract by themselves. I’m not sure students are skilled enough to outsource any part of the writing process and remain in control of the product.
University and college administrations seem to be hedging their bets. Guidelines at Concordia, like those at the University of Toronto and the University of British Columbia, leave the details of AI use to the discretion of the instructor, though Concordia’s Centre for Teaching and Learning also advises that blanket bans are unenforceable. The Université de Sherbrooke stresses the pitfalls of traditional pen-and-paper examinations and the inaccuracy of AI detectors, suggesting instead that assignments be overhauled to reduce the utility of cheating.
Most distressing for me was an interactive lesson provided by the University of Waterloo to advise students on ethical AI use. The module cast me as an overworked biology student struggling with an assignment and asked me a series of questions about whether I would use ChatGPT. The first time through, I impersonated my nightmare student, deciding to copy and paste AI-generated text into my assignment and denying it when confronted by the instructor. I was relieved that this scenario saw me receiving a zero after a meeting with the associate dean.
But when I tried the opposite tack and refused to use generative AI at all, the module told me I had handed in my work late, resulting in a reduced grade—as though opting out would necessarily lead me to underperform. “Could ChatGPT have made things easier?” the lesson prompted. “How can I use it to support my learning without resorting to academic misconduct?” The implication was that, by taking a hard line, I was only punishing myself.
The lesson’s mixed message captures the broader confusion on campuses these days: use AI, but don’t rely on it; resist it, but not too much. Yet there are surprising pockets of consensus. Despite our opposing reactions to the technology itself, Paquet and I have both shifted our pedagogy to become more process oriented.
For Paquet, this means making his assignments more distinctive, justifying the value of doing them oneself, and changing his grading. On one of his rubrics, an essay might now be worth only two-thirds of the final grade, with the other third awarded for completing brainstorming, drafting, and revision steps. As he explained, these choices are not especially new: “All the solutions teachers come up with are all pedagogical approaches that have been around for decades.” Ironically, the end result of integrating these supposedly more efficient approaches has been to slow down Paquet’s teaching.
I’ve likewise increased my emphasis on process, with oral seminars that build toward final papers, mandatory outlines, and, above all, a different relationship to polish. Yes, I still underline grammatical mistakes and include mini-lessons about comma splices and fragments. Yet I now find awkward syntax reassuring, the mark of a human mind struggling to figure out what to say. I’m less punitive about style and give students more revision opportunities before I assign grades, all in the name of rewarding them for actually doing the work. I hesitate to dig into the details, because part of me thinks I’ll need to change it all again next semester.
As destabilizing as the situation may be, this openness is exactly what our college’s pedagogical counsellor, Sara Hashem, recommends. At first excited by the release of ChatGPT, Hashem has since grown more skeptical about handing the reins of public education over to private companies, and she cautions against rushing forward or blindly trusting the AI education “experts.”
“I would like to see people being more humble about it and say we are in reactive mode,” she told me. Hashem is now partnering with the Université de Montréal’s Neerusha Baurhoo Gokool on a project that surveys emerging research and brings together instructors from different disciplines to discuss how they are—or aren’t—integrating AI. She has no idea what the conclusions will be.
I find some comfort in the idea that we are all still experimenting. Maybe the return to in-class assignments will prove beneficial. With cellphones increasingly being banned from elementary and high school classrooms across the country, there might be a case for post-secondary education to continue the digital detox. Or maybe generative AI will become this generation’s spell check. Writing might be reconceived as a form of collaboration with online tutors, and solo composition may become a niche skill, like developing film negatives.
In the meantime, I’ll probably be deciphering student handwriting for a while yet—until there’s hard evidence that inviting these algorithms into the classroom does more good than harm.