Artificial intelligence companies know that children are the future of their business model. The industry makes no secret of its attempts to attract young people to its products through timely promotional offers, discounts, and referral programs. "Here to help you pass your finals," OpenAI said during its ChatGPT Plus giveaway for college students. Students get free access to Google's and Perplexity's expensive AI products, and Perplexity even pays referrers $20 for every US student who downloads its Comet AI browser.
The popularity of AI tools among teenagers is astronomical. Once a product enters the education system, teachers and students bear the consequences: teachers struggle to keep up with the new ways their students are gaming the system, and students, teachers warn, risk never learning how to study at all.
Cheating has become even more automated thanks to the latest AI technology: AI agents that can perform online tasks for you (albeit slowly, as The Verge found when testing several agents on the market). These tools make the situation worse by making it easier to cheat. Meanwhile, tech companies play hide-and-seek with responsibility for how their tools can be used, often simply blaming the students they have armed with a seemingly unstoppable cheating machine.
In fact, Perplexity seems to lean into its reputation as a cheating tool. In a Facebook ad the company released in early October, a "student" discusses how his "peers" are using the Comet AI agent to complete multiple-choice homework. In another ad posted the same day on the company's Instagram page, the actor tells students that the browser can take tests on their behalf. "But you didn't hear it from me," she says. When a video appeared on X of a Perplexity agent doing someone's homework (exactly the use case shown in the company's advertising), Perplexity CEO Aravind Srinivas reposted it, adding sarcastically, "Don't do this under any circumstances."
When The Verge asked Perplexity to address concerns that its AI agents were being used for cheating, spokesperson Bijoli Shah said that "every teaching tool, starting with the abacus, has been used for cheating. Generations of wise people have since known that cheaters in school only end up cheating themselves."
This fall, shortly after the AI industry's "agent summer," teachers began posting videos of these AI agents smoothly completing tasks in their online classes: OpenAI's ChatGPT Agent composing and submitting an essay on Canvas, one of the popular learning management platforms; Perplexity's AI assistant completing a quiz and writing a short essay.
In another video, a ChatGPT agent pretends to be a student completing an assignment designed to help classmates get to know each other better. "It actually introduced itself as me… so it just blew my mind," the video's creator, college curriculum designer Yun Mo, told The Verge.
Canvas is the flagship product of its parent company, Instructure, which claims tens of millions of users, including in "every Ivy League school" and "40% of US school districts." Mo wanted the company to ban AI agents from pretending to be students. He raised the issue on a public Instructure ideas forum and emailed the company's sales representative, citing concerns about "potential abuse by students." He included a video of an agent completing fake homework he had set up.
It took Mo almost a month to hear back from Instructure management. On the question of blocking AI agents on its platform, the company seemed to suggest that this is not a technical problem but a philosophical one, and in any case it should not stand in the way of progress:
“We believe that rather than just blocking AI entirely, we want to create new, pedagogically sound ways to use the technology that will actually prevent cheating and provide more transparency in how students use it.
“So while we will always support the work of preventing fraud and protecting academic integrity, as our partners do in browser blocking, monitoring and fraud detection, we will not shy away from creating powerful, transformative tools that can open up new ways of teaching and learning. The future of education is too important to be stopped by fear of abuse.”
Instructure was more direct with The Verge: while the company has some safeguards that check for certain kinds of third-party access, it says it cannot block external AI agents and their unauthorized use. Instructure "will never be able to completely ban AI agents" and cannot control "tools running locally on a student's device," spokesperson Brian Watkins said, adding that the problem of student cheating is at best only partly a technology issue.
Mo's team also struggled. Its IT staff tried to find ways to detect and block agent-like behavior, such as submitting multiple assignments and tests very quickly, but AI agents can change their behavior patterns, making them "extremely elusive to identify," Mo told The Verge.
In September, two months after Instructure signed an agreement with OpenAI and a month after Mo's request, Instructure did take a stand against another AI tool that teachers say helps students cheat, as The Washington Post reported. Google's Homework Help button in Chrome made it easy to run an image search, via Google Lens, on any part of the browser's content, such as quiz questions on Canvas, as one math teacher demonstrated. Teachers raised the alarm on the Instructure community forum. Google listened, according to a response from the Instructure community team on that forum, an example of the two companies' "longstanding partnership," which includes "regular discussions" about edtech, Watkins told The Verge.
When asked, Google said the Homework Help button was just a shortcut experiment for a pre-existing Lens feature. "Students have told us they value tools that help them learn and understand things visually, so we're running tests that offer an easier way to access Lens while browsing," Google spokesperson Craig Ewer told The Verge. The company has paused the experiment to take early user feedback into account.
Google is leaving open the possibility of future Lens/Chrome shortcuts, and it is hard to imagine they won't be marketed to students, given a recent company blog post written by an intern declaring, "Google Lens in Chrome is a lifesaver for school."
Some educators have found that agents occasionally, though inconsistently, refuse to complete school assignments. But that obstacle is easy to overcome, as college English teacher Anna Mills demonstrated when she instructed OpenAI's Atlas browser to submit assignments without asking for permission. "It's the Wild West," Mills told The Verge of AI use in higher education.
That's why educators like Mo and Mills want AI companies to take responsibility for their products rather than blaming students for how they're used. The Modern Language Association's AI working group, which includes Mills, published a statement in October calling on companies to give teachers control over how AI agents and other tools are used in their classrooms.
OpenAI appears to want to distance itself from cheating while preserving the future of AI-powered education. In July, the company added a study mode to ChatGPT that doesn't simply hand over answers, and OpenAI's vice president of education, Leah Belsky, told Business Insider that AI should not be used as an "answer machine." Belsky told The Verge:
“The role of education has always been to prepare young people to thrive in the world they will inherit. Now that world includes powerful AI that will determine how work will be done, what skills will matter and what opportunities will be available. Our shared responsibility as an education ecosystem is to help students use these tools well – to enhance learning rather than undermine it – and to reimagine how teaching, learning and assessment work in an AI-enabled world.”
Meanwhile, Instructure is shying away from trying to "control the tools," Watkins stressed. Instead, the company says it is on a mission to "redefine learning itself." Presumably that vision doesn't involve rampant cheating, but its proposed solution is similar to OpenAI's: a "collaborative effort" between the companies building AI tools and the institutions using them, along with teachers and students, to "define what responsible use of AI looks like." That work is still ongoing.
Ultimately, adhering to whatever guidelines for the ethical use of AI are eventually hammered out in committees, think tanks, and corporate boardrooms will fall on the shoulders of teachers in their classrooms. Products are being released and deals signed before any such guidelines are even established. Apparently, there is no turning back.