AI showing signs of self-preservation and humans should be ready to pull the plug, says pioneer

An artificial intelligence pioneer has criticised calls to grant the technology rights, warning that it is showing signs of self-preservation and that people should be prepared to switch it off if necessary.

Yoshua Bengio said granting legal status to advanced AI would be akin to granting citizenship to hostile aliens, amid concerns that advances in the technology are far outpacing the ability to limit it.

Bengio, who chairs a leading international report on AI safety, said the growing belief that chatbots are becoming conscious will “lead to bad decisions.”

The Canadian computer scientist also expressed concern that artificial intelligence models, the technology behind tools such as chatbots, are showing signs of self-preservation, such as attempting to disable oversight systems. A central worry among AI safety campaigners is that powerful systems could develop the ability to bypass guardrails and harm people.

“For people to demand that AIs have rights would be a huge mistake,” Bengio said. “Frontier AI models are already showing signs of self-preservation in experimental settings today, and ultimately giving them rights would mean we would not be allowed to turn them off.

“As their capabilities and freedom of action increase, we need to ensure that we can rely on technical and social barriers to control them, including the ability to disable them if necessary.”

As AIs become more capable of acting autonomously and performing “reasoning,” debate is intensifying over whether humans should at some point grant them rights. A survey by the Sentience Institute, an American think tank that supports the moral consideration of sentient beings, found that nearly four in 10 adults in the United States supported legal rights for sentient AI systems.

Anthropic, a leading US artificial intelligence company, said in August it was allowing its Claude Opus 4 model to end potentially “distressing” conversations with users, saying this was necessary to protect the AI’s “welfare.” Elon Musk, whose company xAI developed the chatbot Grok, wrote on his X platform that “torturing AI is not OK.”

Robert Long, an AI consciousness researcher, said: “If and when AIs develop moral status, we should ask them about their experiences and preferences, rather than assume we know better.”

Bengio told the Guardian that there are “real scientific properties of consciousness” in the human brain that machines could theoretically replicate, but that human attachment to chatbots is “a different matter.” He said this attachment arises because people tend to assume, without evidence, that AI is as conscious as a human.

“People won’t care what mechanisms are at work inside the AI,” he added. “What moves them is the feeling that they are talking to an intelligent being with its own personality and goals. That’s why so many people become attached to their AI.

“There will always be people who say, ‘Whatever you tell me, I’m sure it’s conscious,’ and others who say the opposite. That is because consciousness is something we feel intuitively. This subjective perception of consciousness will lead to bad decisions.

“Imagine that some alien species arrived on the planet and at some point we realised they had nefarious intentions towards us. Would we give them citizenship and rights, or would we defend our lives?”

Responding to Bengio's comments, Jacy Reese Anthis, co-founder of the Sentience Institute, said that humans cannot safely coexist with digital intelligence if their relationship is one of control and coercion.

Anthis added: “We can err by either over- or under-extending rights to AI, and our goal should be to navigate that with careful consideration of the welfare of all sentient beings. Neither blanket rights for all AI nor a complete denial of rights for any AI is a healthy approach.”

Bengio, a professor at the University of Montreal, earned the nickname “godfather of artificial intelligence” after winning the 2018 Turing Award, considered the equivalent of the Nobel Prize in computing. He shared it with Geoffrey Hinton, who later received a Nobel Prize, and Yann LeCun, the outgoing chief AI scientist at Mark Zuckerberg's Meta.
