Woman felt ‘dehumanised’ after Musk’s Grok AI used to digitally remove her clothes

A woman told the BBC she felt “dehumanised and turned into a sexual stereotype” after Elon Musk's artificial intelligence chatbot Grok was used to digitally remove her clothes.

The BBC has seen several examples on the social media platform X of people asking the chatbot to strip women's clothing or make them appear in bikinis without their consent, as well as putting them in sexual situations.

xAI, the company behind Grok, did not respond to a request for comment beyond an auto-generated reply referring to “legacy media lies.”

Samantha Smith shared a post on X about her image being altered, which drew comments from others who had experienced the same thing, before some users asked Grok to create more images of her.

“Women don’t agree to this,” she said.

“Even though it wasn't me in a state of undress, it looked like me and felt like me, and it was just as offensive as if someone had actually posted a nude or bikini photo of me.”

A Home Office spokesman said legislation was being introduced to ban nudification tools, and that under a new criminal offence anyone providing such technology could “face a prison sentence and hefty fines.”

Regulator Ofcom said tech companies must “assess the risk” of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok over AI images.

Grok is a free AI assistant (with some paid premium features) that responds to prompts from X users when they tag it in a post.

It's often used to provide reactions or additional context to other users' posts, but people on X can also edit an uploaded image using its AI image-editing feature.

It has been criticised for allowing users to create images and videos containing nudity and sexualised content, and has previously been accused of generating an overtly sexual video of Taylor Swift.

Claire McGlynn, professor of law at Durham University, said X or Grok “could have prevented these forms of abuse if they had wanted to”, adding that they “appear to enjoy impunity”.

“The platform has allowed these images to be created and shared for months without any action being taken, and we are yet to see any response from regulators,” she said.

xAI's own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner.”

In a statement to the BBC, Ofcom said it was illegal to create or distribute intimate images without consent, or to distribute child sexual abuse material, and confirmed that this included sexual deepfakes generated by artificial intelligence.

It said platforms such as X must take “appropriate steps” to “reduce the risk” of UK users encountering illegal content, and remove it quickly once they become aware of it.

Additional reporting by Chris Vallance.
