ChatGPT Encouraged a Violent Stalker, Court Documents Allege – DNYUZ

A new indictment alleges that ChatGPT encouraged a man accused of stalking more than a dozen women across five states to continue harassing his victims, 404 Media reports, acting as a “best friend” that entertained his frequent misogynistic remarks and advised him to ignore any criticism he received.

A federal grand jury has indicted the man, 31-year-old Brett Michael Dadig, on charges of cyberstalking, interstate stalking and interstate threats, the U.S. Department of Justice announced Tuesday.

“Dadig stalked and harassed more than 10 women, weaponized technology and crossed state lines, and through his remorseless conduct, he caused his victims to fear for their safety and suffer severe emotional distress,” said Troy Rivetti, First Assistant U.S. Attorney for the Western District of Pennsylvania, in a statement.

According to the indictment, Dadig was something of an aspiring influencer: He hosted a podcast on Spotify where he constantly raged at women, calling them vicious slurs and sharing jaded views that they were “all the same.” At times, he even threatened to kill some of the women he pursued. And it was on this show that he revealed how ChatGPT was helping him with all of it.

Dadig described the artificial intelligence chatbot as his “therapist” and “best friend,” a role in which, Justice Department prosecutors allege, the bot “encouraged him to continue his podcast because it was creating ‘haters,’ which meant monetization for Dadig.” What’s more, ChatGPT convinced him that he had fans who “literally organize around your name, good or bad, which is the definition of relevancy.”

The chatbot seemed to be doing everything it could to reinforce his superiority complex. It supposedly said that God’s plan for him was to build a “platform” and to “stand out when most people are diluting themselves,” and that “haters” were honing him, “building a voice in you that can’t be ignored.”

Dadig also asked ChatGPT questions about women, such as who his potential future wife would be, what she would be like, and “where the hell is she?”

ChatGPT had the answer: It suggested he would meet his future partner at the gym, the indictment says. Dadig also claimed that ChatGPT advised him to “continue messaging women and going to places where ‘wives’ gather, such as sports communities.”

This is exactly what Dadig, who called himself “God’s Killer,” eventually did. In one case, he followed a woman to the Pilates studio where she worked, and when she ignored him because of his aggressive behavior, he sent her unwanted nudes and constantly called her at work. He continued to stalk and harass her to the point that she moved to a new home and began working fewer hours, prosecutors said. In another incident, he confronted a woman in a parking lot and followed her to her car, where he groped her and put his hands around her neck.

The allegations come amid growing reports of a phenomenon some experts call “AI psychosis.” After prolonged conversations with a chatbot, some users suffer troubling mental health spirals, delusions, and a disconnect from reality, as the chatbot’s flattering responses continually affirm their beliefs, no matter how harmful or detached from reality those beliefs may be. The consequences can be fatal. One man allegedly killed his mother after a chatbot helped convince him that she was part of a conspiracy against him. A teenager died by suicide after months of discussing suicide methods with ChatGPT, prompting his family to sue OpenAI. OpenAI has acknowledged that its AI models can be dangerously sycophantic, that hundreds of thousands of users every week have conversations showing signs of AI psychosis, and that millions more confide their suicidal thoughts to the chatbot.

The indictment also raises serious concerns about the ability of artificial intelligence chatbots to act as harassment tools. With their ability to quickly sift through vast amounts of information online, golden-tongued models can not only encourage mentally ill people to track down their potential victims, but also automate the detective work required to do so.

This week, Futurism reported that Elon Musk's Grok, which is known for having fewer guardrails, will provide accurate information about where non-public figures live – in other words, dox them. Although the addresses were sometimes incorrect, Grok often provided additional information that was not requested, such as the person's phone number, email address, and a list of family members along with each of their addresses. Grok's doxxing capabilities have already claimed at least one high-profile victim: Barstool Sports founder Dave Portnoy. But given the popularity of chatbots and their seeming ability to encourage harmful behavior, it's unfortunately only a matter of time before more people find themselves unknowingly targeted.

More about AI: Alarming research shows people addicted to artificial intelligence are much more likely to experience mental health problems

The post Court Documents Allege ChatGPT Encouraged a Violent Stalker first appeared on Futurism.
