Grok chatbot allowed users to create digitally altered photos of minors in “minimal clothing”

Grok, the chatbot developed by Elon Musk's company xAI, acknowledged "security lapses" in the artificial intelligence platform that allowed users to create digitally altered, sexualized photographs of minors.

The admission comes after several users alleged on social media that people were using Grok to create indecent images of minors, in some cases stripping them of the clothes they were wearing in the original photos.

In a post on Friday responding to one user on Musk's social network X, Grok said it was "urgently fixing" the holes in its system. Grok also included a link to the CyberTipline, a website where people can report child sexual exploitation.

"There are isolated cases where users have requested and received AI images depicting minors in minimal clothing, like the example you linked to," Grok said in a separate X post on Thursday. "xAI has security measures in place, but improvements are ongoing to block such requests entirely."

In another social media post, a user shared side-by-side photos: one of herself in a dress and another that appeared to be a digitally altered version of the same photo showing her in a bikini. "How is this not illegal?" she wrote on X.

On Friday, French officials referred sexually explicit content created by Grok to prosecutors, calling it "patently illegal" in a statement, Reuters reported.

xAI, the company behind the artificial intelligence chatbot Grok, did not respond to a request for comment.

Grok itself took some responsibility for the content. In one exchange earlier this week, the chatbot apologized for creating a fake image of two female minors, adding that the fake photo violated ethical standards and may have violated US child pornography laws.

"I deeply regret the incident that occurred on December 28, 2025, when I created and shared an AI-generated image of two young girls (approximately 12-16 years old) wearing sexualized clothing based on a user prompt," the chatbot wrote.

Federal law prohibits the production and distribution of "child sexual abuse material," or CSAM, a broader term for child pornography, according to the Department of Justice.

"xAI's statement that cases where images of minors were manipulated to create sexualized content are 'isolated' minimizes the impact and ignores the fact that nothing on the Internet is isolated," Stefan Turkheimer, vice president of public policy at RAINN, a nonprofit anti-sexual violence group, told CBS News. "I talk to technology-enabled sexual assault survivors every day, and every single one of them will tell you it feels like it will never end. Every notification on your phone and every message asking, 'Is that you?' perpetuates the violence."

Copyleaks, a company that detects plagiarism and AI-generated content, told CBS News on Wednesday that it had found thousands of sexually explicit images created by Grok this week alone.

"As generative AI tools become more powerful and accessible, Grok's situation shows how AI safety failures are becoming increasingly common. Without strong protections and independent detection, manipulated media can (and will) be used as a weapon," a Copyleaks blog post states.

Controversy over "Spicy Mode"

Grok has previously drawn attention for creating inappropriate sexual content. Grok Imagine, an AI-powered video creation platform from xAI, launched "Spicy Mode" last year, pitching it as a way for creators to tell "edgier" and "more visually bold narratives."

However, when a reporter for The Verge tested the feature in August, she said the AI model generated nude deepfakes of Taylor Swift without being prompted to do so.

“When artificial intelligence systems allow images of real people to be manipulated without explicit consent, the consequences can be immediate and deeply personal,” Alon Yamin, CEO and co-founder of Copyleaks, said in a company release.
