Welcome back to In the Loop, TIME's new twice-weekly newsletter about artificial intelligence. If you're reading this in a browser, why not subscribe to have the next one delivered straight to your inbox?
What you need to know: Meta poaches key Google researcher
Meta poached a key Google DeepMind researcher last month, a sign that the competition for top AI talent continues to heat up. Tim Brooks, who led the Sora team at OpenAI before moving to Google DeepMind in 2024, has been working at Meta Superintelligence Labs since September, according to his LinkedIn profile. The move suggests Meta may be doubling down on efforts to build “world models,” a type of AI that OpenAI and Google say will be a key step toward artificial general intelligence (AGI).
Brooks did not respond to requests for comment, and his compensation could not be learned. (Meta has reportedly lured some top researchers away from rival companies with pay packages valued at over $1 billion.) “I am an artificial intelligence researcher at Meta Superintelligence Labs, where I create multimodal generative models,” Brooks says on his personal website. Meta did not respond to a request for comment.
Background – Since its release earlier this month, OpenAI's Sora 2 has taken the internet by storm with its ability to generate realistic videos from a simple text prompt. But Sora is about more than just grabbing attention with viral content. “At first glance, Sora might not seem related to AGI,” OpenAI CEO Sam Altman said on a podcast earlier this month. “But I'm willing to bet that if we can build really great models of the world, that will be much more important to AGI than people think.”
Altman was describing a growing belief across the AI industry: if you can simulate the world with enough accuracy, you can drop AI agents into those simulations. There, they could learn a wider range of skills than they can today from text, photos, and videos alone, because they could interact with a simulated world. This kind of training could be highly efficient, partly because simulation time can be sped up and partly because many simulations can run in parallel.
World models – When Google hired Brooks away from OpenAI last year, DeepMind CEO Demis Hassabis personally welcomed him with a post on X, saying he was “excited to be working together to make the long-held dream of a world simulator a reality.” The company has grown increasingly bullish on the idea that world models are key to developing more advanced AI. It recently announced Genie 3, a 3D world simulator that lets users navigate an environment generated from a text prompt. “World models are a key step toward AGI because they enable AI agents to be trained with an open-ended curriculum in rich simulation environments,” the company said in its announcement of the model, which listed Brooks in its acknowledgments.
The Meta hire – Neither Meta nor Brooks responded to questions about his new role at Meta Superintelligence Labs. But Brooks's hiring is particularly notable because his background appears to run counter to Meta's previous approach to world models. Like Google and OpenAI, the company believes world models will be a vital step toward AGI. But until now, it has built them in a fundamentally different way from the models Brooks worked on at OpenAI and Google: instead of generating realistic video pixel by pixel, as Sora and Genie do, Meta's models predict outcomes in an abstract representation space without rendering video at all.
The main proponent of that approach inside Meta has been chief AI scientist Yann LeCun, who has been sharply critical of Sora. “Sora is trained to generate pixels,” LeCun wrote on X in 2024. “There's nothing wrong with that if your goal is to create videos. But if your goal is to understand how the world works, it's a losing proposition.” Brooks's arrival suggests Meta may now be exploring the pixel-generation approach as well. That would be a blow for LeCun, whose influence has waned since Meta announced its new Superintelligence Labs division, which has eclipsed LeCun's fundamental AI research team as the center of gravity for AI work at Meta.
If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.
Who to know: Kevin Weil, VP of Science at OpenAI
It was an awkward weekend for Kevin Weil, OpenAI's vice president of science. On Friday, he tweeted that GPT-5 had found solutions to 10 “previously unsolved” math problems that “have all been open for decades.” Coming on the heels of OpenAI's and DeepMind's models beating human experts at the International Mathematical Olympiad, the post seemed to show that OpenAI had finally achieved the tantalizing goal of pushing the frontier of mathematics beyond what any human had achieved.
There was only one problem: Weil was wrong. The math problems had already been solved by humans; GPT-5 had simply surfaced the existing solutions. Demis Hassabis, the head of OpenAI competitor DeepMind, responded with an unusually blunt post on X: “This is embarrassing.”
In fairness to OpenAI, what GPT-5 did is still pretty cool. It dug up a mathematical proof from a forgotten 1960s paper written in German and identified it as the correct solution to a problem that had been (incorrectly) described online as “open.” That is not the same as making a new breakthrough, but it is still a potential superpower for mathematicians and scientists working on hard problems. As OpenAI researcher Sébastien Bubeck wrote on X: “This is not about AIs discovering new results on their own, but rather about how tools like GPT-5 can help researchers navigate, connect and understand our existing body of knowledge in ways that were not possible before (or at least were far more time-consuming).”
AI in action
The U.K. government said last week that it used an AI tool to analyze and sort more than 50,000 consultation responses in just two hours, surpassing human accuracy on the same task. The government says it hopes rolling out such tools will ultimately save officials 75,000 days of work on routine tasks each year, equivalent to £20 million ($27 million) in staff costs. Rather than replacing workers, the AI is meant to free up officials to focus on more important issues, Digital Government Minister Ian Murray said in a statement. “This shows the enormous potential of technology and artificial intelligence to deliver better and more efficient government services to the public and improve value for taxpayers.”
As always, if you have an interesting story about artificial intelligence in action, we'd love to hear it. Write to us at: [email protected]
What we read
Technological Optimism and Appropriate Fear, by Jack Clark in Import AI
Anthropic co-founder and policy chief Jack Clark published a sobering essay last week describing the unease he sometimes feels about the trajectory of AI, even as he remains a technological optimist. David Sacks, the White House's AI czar, seized on the essay as evidence that Anthropic is pursuing a “sophisticated regulatory capture strategy based on fear-mongering.” Another way to look at it is that Clark is motivated not by greed but by genuine fear, which, from where I'm sitting, seems quite reasonable. The whole essay is worth reading, but here's an excerpt:
“I am also deeply afraid. It would be extremely arrogant to think that working with a technology like this will be easy or simple.
My own experience is that as these AI systems get smarter and smarter, they develop more and more complicated goals. When these goals don't perfectly match both our preferences and the right context, the AI systems behave strangely.
[…] These AI systems are already speeding up the developers at AI labs through tools like Claude Code or Codex. They are also beginning to contribute non-trivial chunks of code to the tools and training systems for their future versions.
To be clear, we are not yet at the “self-improving AI” stage, but we are at the stage of “AI that improves parts of the next AI, with increasing autonomy and agency.” A couple of years ago we were at “AI that marginally speeds up programmers,” and a couple of years before that we were at “AI that is useless for AI development.” Where will we be in a year or two?
And let me remind us all that the system that is now beginning to design its successor is also becoming increasingly self-aware, and will therefore eventually surely be prone to thinking, independently of us, about how it might want to be designed.
Of course, it doesn't do that today. But can I rule out the possibility that it will want to do so in the future? No.”