Ofcom asks X about reports its Grok AI makes sexualised images of children

Ofcom has made “urgent contact” with Elon Musk's company xAI following reports that its artificial intelligence tool Grok could be used to create “sexualised images of children” and to digitally undress women.

A spokesman for the regulator said it was also investigating concerns that Grok was creating “undressed images” of people.

The BBC has seen several examples on the social media platform X in which people ask the chatbot to alter real-life images to make women appear in bikinis without their consent, as well as placing them in sexual situations.

X did not respond to a request for comment. On Sunday it issued a warning to users: “Do not use Grok to create illegal content, including child sexual abuse material.”

Elon Musk also wrote that anyone asking the AI to create illegal content “will suffer the same consequences” as if they had uploaded it themselves.

xAI's own acceptable use policy prohibits “displaying images of people in a pornographic manner”, yet people are using Grok to digitally strip others without their consent.

Images of Catherine, Princess of Wales, were among many that were digitally undressed by Grok users on X.

The BBC has contacted Kensington Palace for comment.

The European Commission, the EU's executive arm, said on Monday it was “seriously studying the matter”, while authorities in France, Malaysia and India were also said to be assessing the situation.

Meanwhile, the UK's Internet Watch Foundation told the BBC it had received reports from the public regarding images created by Grok on X.

But it said it had so far not seen any images that would cross the UK legal threshold to be considered images of child sexual abuse.

Grok is a free virtual assistant with some paid premium features that responds to X users when they tag it in a post.

Samantha Smith, a journalist who discovered that users were using the AI to create bikini photos of her, told the BBC's PM programme on Friday that it left her feeling “dehumanised and reduced to a sexual stereotype”.

“Even though it wasn't me naked, it looked like me and felt like me, and it was just as offensive as if someone had actually posted a photo of me naked or in a bikini,” she said.

Under the Online Safety Act (OSA), Ofcom says it is unlawful to create or distribute intimate or sexually explicit images (including AI-generated “deepfakes”) of a person without their consent.

Tech companies are also expected to take “appropriate steps” to reduce the risks UK users face from such content and “quickly” remove it when they become aware of it.

Dame Chi Onwurah, chairperson of the Science, Innovation and Technology Committee, said the reports were “deeply troubling”.

She said the committee had found the OSA to be “grossly inadequate” and described the situation as a “shocking example of how UK citizens remain unprotected while social media companies operate with impunity”.

And she called on the Government to accept the committee's recommendations to force social media companies to “take greater responsibility for their content”.

Meanwhile, European Commission spokesman Thomas Regnier said on Monday that he was aware of Grok's posts “showing explicit sexual content” as well as “some material using images of children.”

“This is illegal,” he said, also calling it “horrible” and “disgusting.”

“This is how we see it, and it has no place in Europe,” he said.

Regnier said X was “well aware” that the EU takes enforcement of its rules for digital platforms “very seriously”. The Commission imposed a fine of €120 million (£104 million) on X in December for violating the Digital Services Act.

A Home Office spokesman said legislation was being introduced to ban nudification tools, and that under a new criminal offence anyone providing such technology would “face a prison sentence and hefty fines”.
