Experts say Canada's murky laws on privacy and online harms leave victims with limited options.
On the social media platform X, users have been prompting the artificial intelligence model Grok to create non-consensual sexual images of women and, in some cases, children. Experts say the laws surrounding these images are unclear in Canada, and X is currently under investigation for possible prior violations of Canadians' privacy.
The reaction to Grok's image generation was swift, with international governments and regulators condemning X and its chatbot for both creating and hosting this content (which included child sexual abuse material, or CSAM). In Canada, technology and privacy experts say the legality is unclear as the government seeks to update its technology and privacy legislation.
“Most companies have terms and conditions that prohibit the publication of intimate images without consent [but] there is some confusion when it comes to synthetic sexual content.”
Susie Dunn, Dalhousie University
In a statement posted on X today, Federal Minister for Artificial Intelligence and Digital Innovation Evan Solomon said that "deepfake sexual assault is assault" and that "[p]latforms and AI developers have a responsibility to prevent this harm."
A spokesperson for Solomon's office told BetaKit in a statement Wednesday that the government is “committed to ensuring the safety of Canadians, particularly children and women, who are at higher risk of exploitation when it comes to non-consensual sexual deepfakes.” The spokesperson added that the minister plans to introduce legislation to protect Canadians' sensitive data.
In today's announcement, Solomon pointed to the previously introduced Bill C-16, the Victim Protection Act, which seeks to expand laws prohibiting the non-consensual distribution of intimate images to cover non-consensual deepfakes.
Sexualized Grok images flood X
Grok is a generative artificial intelligence model created by Elon Musk's xAI and made available to users of X (formerly Twitter) on the social media platform. Musk and xAI have positioned Grok as a more permissive chatbot, with fewer restrictions on what it generates in response to prompts than competing AI models. The model has previously generated racist and misogynistic content, including hate speech, and regularly returns factual errors and inaccuracies. xAI is also facing lawsuits over alleged copyright infringement brought by authors and media outlets such as the New York Times. Despite the backlash, xAI announced this week that the company has raised US$20 billion from investors, including chip giant Nvidia.
At the end of December, sexualized and "nude" images of women and children, created by Grok at the prompting of users and without the consent of the subjects, began to appear en masse in the feeds of X users. User prompts included commanding Grok to dress women and children in clothing such as bikinis emblazoned with swastikas, or made from materials such as dental floss, to show as much skin as possible. xAI posted an apology on January 2, describing the incidents as lapses in its safeguards and directing users to the FBI and the Cybertip line. Founder Elon Musk posted that users, and not xAI or its chatbot, will be held legally responsible for illegal content generated through prompts, and that violators will be permanently banned. Some reports indicate that non-consensual sexual images are still being created.
“Most companies have terms and conditions that prohibit the publication of intimate images without consent,” Susie Dunn, an associate professor of law at Dalhousie University, told BetaKit on Wednesday. “When it comes to synthetic sexual content, there is some confusion.”
In Canada, it is illegal to knowingly publish or distribute intimate images of someone without their consent. The consequences for Canadian users who see this potentially illegal content in their X feeds are less clear.
“If you choose to follow a user whose primary goal was to create CSAM from Grok, and you're actively asking for that content to appear in your feed, that's an interesting legal issue,” Dunn said.
X's Safety account posted on January 3 that the social media platform is taking action against illegal content on X, including CSAM, by removing it, suspending accounts, and "working with local authorities and law enforcement as necessary."
“Anyone who uses Grok or encourages it to create illegal content will suffer the same consequences as if they had uploaded illegal content,” it said. BetaKit has reached out to xAI for comment.
Going to court
Creating or possessing sexual images of children is illegal in Canada under the Criminal Code, which does not distinguish between photographs and images generated by artificial intelligence. In 2023, a Quebec man was sentenced to three years in prison for creating child pornography using artificial intelligence.
The primary remedy for adults who are victims of AI-generated deepfakes is the civil system, but laws vary by province, Dunn said. All provinces except Ontario have some form of intimate-image protection law prohibiting the sharing of private images without the subject's consent, she said.
Sharon Polsky, president of the Privacy and Access Council of Canada, told BetaKit on Wednesday that the civil route can be costly and tedious for individuals going after a company hosting the content.
“You also have to understand that you, as an individual, better have very deep pockets,” Polsky said. “It's a David and Goliath situation.”
Another option is to report non-consensual deepfakes to local police. But Polsky warned that CSAM investigators are overworked and face added challenges when the victim and offender are in different jurisdictions.
In Ontario in November, a provincial judge ruled that distributing a digitally altered fake nude image is not a crime, reasoning that altered images are not expressly mentioned in the Criminal Code. Manitoba and Quebec recently updated their legislation to include modified images.
Dunn said that while legislation could be helpful, "people need some kind of victim help line they can call" to have content removed immediately. For example, the charity the Canadian Centre for Child Protection runs Cybertip.ca, a hotline specifically for reporting online child sexual exploitation.
“The government should not only change laws, but also provide services like this,” Dunn said.
Canada's privacy commissioner said he could not comment on concerns about Grok's image generation because the agency has been investigating X over a complaint since February 2025. The investigation is focused on X's compliance with Canada's federal privacy law and will examine whether the company is meeting its obligations regarding the collection, use, and disclosure of Canadians' personal information to train artificial intelligence models, including Grok.
Canada's legal landscape
Canada does not yet have legislation governing AI models, and its federal privacy legislation has not been updated in roughly 40 years. Minister Solomon has said he will not revive the previous government's Artificial Intelligence and Data Act (AIDA). That legislation was part of Bill C-27, which aimed to modernize privacy and data protection, but died in January 2025 when Parliament was prorogued. Federal officials are reviewing which aspects of the bill could be carried forward.
Another bill aimed at protecting users and children from harmful online content also failed to pass the House of Commons. The proposed Online Harms Act would have created a body to regulate harmful online content, including sexualized deepfakes, and would have required platforms to address content that sexually victimizes children, as well as intimate content shared without consent.
Solomon said updated legislation will be introduced this year that will specifically address deepfakes and data privacy, alongside an updated AI strategy.
The spokesperson said Solomon's office at Innovation, Science and Economic Development Canada (ISED) has been in contact with the Royal Canadian Mounted Police (RCMP). In an email today, the RCMP said it typically only confirms an investigation if it results in criminal charges.
If your images have been published or manipulated by Grok or other AI models without your consent, and you are willing to speak about your experience, contact BetaKit reporter Madison McLauchlan on Signal at @madisonmcla.12
If you or someone you know has been a victim of sexual harassment or violence, help is available through crisis lines and support centers across Canada.
Featured image via X. The image has been anonymized by BetaKit to protect the identities of contributors.