Summit With OpenAI, Google DeepMind Reaches Bleak Agreement

Welcome back to In the Loop, TIME's twice-weekly newsletter on the world of artificial intelligence. If you're reading this in your browser, you can sign up to have the next email delivered straight to your inbox.

Subscribe to In the Loop

What you need to know: AI social contract

Earlier this month, on a lakeside in Sweden, 18 people from OpenAI, Google DeepMind, the UK's AI Security Institute, the OECD and other groups gathered for an invitation-only summit. On the agenda: reaching a shared understanding of how advanced AI will affect the “social contract” between workers, governments and corporations.

Top AI executives such as DeepMind's Demis Hassabis and OpenAI's Sam Altman have recently been calling for scientists and governments to take a deeper look at the problem, to better prepare the world for what they expect to be a deeply disruptive economic shock. So, for a week, in lounge sessions by day and the communal sauna by night, these 18 experts pieced together a picture of the economic shocks that might lie ahead, and what to do about them.

Bad news — According to the organizers of the summit, one result of the so-called “AGI Social Contract Summit” was a list of four draft statements, which have not previously been reported. They paint a grim picture of where the world could be headed absent significant intervention from governments and society. “AI is likely to exacerbate rising wealth and income inequality within countries, worsening economic conditions for many working- and middle-class people and families,” the first statement reads. “AI will increase inequality between countries that have access to AI infrastructure and those that do not, both in terms of access to benefits and the ability to respond to shocks,” says the second. “Without intervention, inequalities created by AI could lead to political dominance by wealthy individuals and corporations, undermine democratic institutions, and increase levels of political discontent,” says the third. And the fourth: “The encroachment of AI systems and the devaluation of labor may lead to growing disempowerment of the majority of people, causing a degradation of individual well-being and purpose.”

Human disempowerment — Summit participants agreed that the existing social contract, in which people receive security and a stake in society in exchange for their labor, is under threat from AI, says Derick Cheng, an organizer of the event and director of research at the Windfall Trust, a nonprofit founded this year to address these issues. “We're essentially concerned that the workforce will be disempowered relative to corporations, and also to some extent that governments may be disempowered relative to corporations,” Cheng says. “The obvious consequence of declining labor power is declining real wages.” On this view, people in rich democracies enjoy a high standard of living not because of rights enshrined on paper, but because of their ability to withhold their labor. Take labor out of the equation, and living standards may decline even as overall GDP or productivity statistics rise.

Ways forward — Participants agreed that without government intervention, the default path of advanced AI would likely lead to poor economic outcomes for the average person. But they also identified several possible actions governments could take to move things in a better direction, Cheng says. For example: creating new institutions modeled on the IMF to ensure that the wealth generated by AI is distributed throughout the world, rather than concentrated in the one or two powerful countries where AI companies are located. Governments could also launch pilot projects today on policies such as basic income and shorter workweeks, to gather data on which types of safety net are effective, Cheng says.

Google DeepMind declined to comment on the statements that emerged from the summit. OpenAI did not respond to a request for comment. After this article was published, Cheng said in a post that all attendees participated in a personal, not official, capacity, and that the statements did not reflect the views of their organizations. He also added that some participants disagreed with some of the draft statements, which he had previously described as the group's “consensus.”

If you have a minute, take our short survey to help us better understand who you are and what AI topics interest you most.

Take the survey

Who to Know: US District Judge Amit Mehta

Federal District Judge Amit Mehta ruled last year that Google illegally maintains a monopoly in online search and search advertising. This week he is expected to hand down his decision on what to do about it, a ruling that could range from requiring Google to share data with rivals to forcing a breakup of the search giant itself.

Payments to partners — The U.S. Department of Justice's case against Google centered on the multibillion-dollar annual payments Google makes to Apple to remain the default search engine on iPhones. At a minimum, observers expect the court to place limits on such payments, which Mehta found to be anticompetitive.

Divesting Chrome — Another possibility is that Mehta could order Google to sell Chrome, the world's most popular browser with a 67% market share. Chrome lets Google collect detailed data about users' browsing patterns, which cements its dominance in the search and advertising industries. Any of Google's competitors would no doubt jump at the chance to buy the world's biggest browser, given the opportunity it would provide to steer users toward their chosen LLM.

Sharing user data — The data Google collects about its users is part of the secret sauce of its search engine. Mehta may rule that Google must share this data with competitors, perhaps in anonymized form to address privacy concerns.

AI in action

The public trusts AI chatbots more than AI companies or community leaders, according to a survey of users in 68 countries conducted by the Collective Intelligence Project.

According to the survey, more than half of people (56.6%) say they trust AI chatbots. That is a higher share than say they trust AI companies (34.6%) or even religious and community leaders (44.2%).

More than one in 10 people (14.9%) use AI for emotional support every day, according to the survey. And 30% of respondents have “at some point thought that their AI chatbot might be self-aware.”

Meanwhile, 56% of respondents said the spread of AI through society will likely worsen access to good jobs.

As always, if you have an interesting story about artificial intelligence in action, we'd love to hear it. Write to us at: [email protected]

What we read

The race for artificial general intelligence creates new risks for an unstable world, by Billy Perrigo in TIME

A shameless plug for my own story. Earlier this year, I traveled to Paris to take part in a fascinating exercise: a simulated war game in which four teams played out the impact of advanced AI on geopolitics. It was like watching a game of Dungeons & Dragons, except the players were former government officials and AI researchers, and the playing field was planet Earth. I use the war game as a jumping-off point in my story to explore how artificial general intelligence is becoming an increasingly important dimension of great-power competition between the U.S. and China. I hope you'll read it and let me know what you think!

Correction, Aug 30

The original version of this article mischaracterized the level of agreement among summit participants; after publication, Cheng said that some participants disagreed with some of the statements, rather than the group reaching a true consensus. The article has also been updated to note that all participants attended in a personal, not official, capacity, and that the statements do not reflect the views of their organizations.
