When OpenAI released ChatGPT in 2022, it caused a stir among educators. Here was a tool that, with a few prompts, could gather masses of information, compose human-sounding sentences, and produce an answer to seemingly any question. Educators believed that students would certainly use it to cheat.
As AI chatbots grow in popularity, so does concern about their potential for abuse. In March, The Wall Street Journal told parents: “There's a good chance your child is using AI to cheat.” New York Magazine declared that everyone is cheating their way through college.
For many students, these headlines ring true. But not for everyone.
Why we wrote this
As artificial intelligence becomes intertwined with everyday life, some students are resisting it. Their reasons range from profound to practical and speak to preserving a sense of community and humanity.
“What’s the point of going to college if you’re only going to rely on this thing to give you the right answers?” says Marie Norkett, a junior at St. John's College in Santa Fe, New Mexico. “You're not improving your mental abilities.”
Ms. Norkett is among the students who choose not to use AI in their coursework. They give reasons both profound and practical. Ms. Norkett, for example, worries not only that cutting corners could dull her critical thinking skills, but also about the accuracy of what is produced by AI bots, which scrape vast amounts of information from the internet to mimic human thinking.
Such students are a minority on campuses. In a September survey of college students conducted by Copyleaks, maker of an artificial intelligence-powered plagiarism detector, 90% of respondents said they use AI for schoolwork. Of course, not all of these students used it to cheat: the most common uses were brainstorming (57%) and making study plans (50%).
Still, like many educators, some artificial intelligence advocates worry that bots make cheating easier. In an internal OpenAI report on the use of ChatGPT, about a quarter of 18- to 24-year-olds, the most active of the bot's more than 700 million weekly users, said they used it to “answer exams.” A September report from Discovery Education found that 40% of middle and high school students used AI without teacher permission, and nearly two-thirds of middle and high school teachers say they have caught students using chatbots to cheat.
The true extent of the cheating problem remains a matter of debate. Victor Lee, an assistant professor of education at Stanford University, says decades of research have shown the cheating rate to be between 60% and 80%, a figure that has “remained fairly stable” since ChatGPT arrived on the scene.
However, it is clear that students use the technology frequently, and that frequency reflects a variety of pressures. Students feel enormous pressure to succeed academically as they juggle classes with extracurricular activities, work, and social obligations.
“There are also situations where [students] just aren't clear where the line is between what is acceptable and what is unacceptable,” adds Professor Lee.
Still, some students are resisting pressure from their peers to use AI, whether sanctioned or not. They have charted a path toward a more old-fashioned education that, for them, is fulfilling, meaningful, and decidedly human.
“The full expression of a human being is not a robot. It is a creative, interactive force,” says Caleb Langenbrunner, another St. John's College student. Simply accepting the answers given by AI, he says, “isn’t quite the same as what it means to be human.”
Maintaining a sense of community
Unlike on many college campuses, students at St. John's say they rarely see their classmates using AI. This may be due to the school's unique teaching methods. It offers only one liberal arts degree, and its entire curriculum consists of a four-year reading list of what the college calls the “greatest books” in history, including titles such as Plato's Republic and Aristotle's Politics.
However, it’s not just St. John’s students who see their peers’ over-reliance on AI as a problem. Ashanti Rosario, a high school senior from New York, says she doesn't use AI, and she wishes her classmates wouldn't either.
“I think we lose a sense of community in the classroom if we don't actively participate in whatever work is assigned to us,” she says. When students use AI instead of reaching out to their peers, it “harms not only the person using it, but also others who could very well gain a different perspective that enhances their learning.”
The rapid rise of AI-generated writing and art is also worsening anxiety about the future of the humanities. The technology has arrived on the scene at a troubling time for the creative disciplines: the number of students graduating from college with liberal arts degrees fell by 24% between 2012 and 2022, according to the American Academy of Arts and Sciences.
“Most of the humanities and arts are original thinking [and] creativity,” Ms. Rosario says. “It’s something that can’t be replicated, especially by a machine. So I think in order for this cycle, of art and culture, to continue, it has to come from within.”
A question of credibility
Abera Hettinga, a junior studying philosophy and psychology at the University of New Mexico, says he doesn't use AI because it would “do a disservice” to his future self. He also took courses in logic and critical thinking, which shaped his views. Students in the class, he said, examined the accuracy of ChatGPT's answers to various questions, and he was not impressed with the chatbot.
Sometimes, when ChatGPT gave him a questionable answer, he would press it to explain its reasoning. Mr. Hettinga found that the bot often “just predicts what you want it to say.”
OpenAI has acknowledged that older models tended to tell users what they wanted to hear, even if it meant providing incorrect information. “That shaped my trust in it as much as anything,” Mr. Hettinga says. OpenAI says it has updated its ChatGPT software to combat “sycophancy.”
As a writing tutor at the University of New Mexico's Center for Teaching and Learning, Mr. Hettinga has seen firsthand how over-reliance on chatbots can rob students of the ability to craft persuasive arguments.
“[AI] takes away the ability to structure an argument,” he says. “You lose that most important ability: to brainstorm, organize a paper, know where to place your arguments, how to formulate a thesis, [and] other important writing skills.”
Stanford's Professor Lee says charting a path to more sustainable use of AI can start with how schools approach the tools, although he acknowledges it can be challenging for educators to juggle the learning needs of dozens of students. Some teachers have already turned to old-fashioned testing methods, such as having students put pen to paper and write essays by hand in class.
Another strategy “is to develop AI literacy among students to help them learn how to use it responsibly and what its capabilities and limitations are,” he says.
The students interviewed say AI bots do have potentially useful applications. For example, they can be a helpful place to start research because they quickly collect and synthesize vast amounts of information.
Ultimately, Mr. Langenbrunner of St. John's says he enjoys learning and finding answers on his own, and doesn't want to miss out on the fun.
“You know, I think [AI is] quite boring,” he says with a laugh. “If I used AI to write all my papers, it would take all the fun out of it.”