Welcome back to In the Loop, TIME's new twice-weekly newsletter on artificial intelligence. Starting today, we'll publish these editions both as stories on Time.com and as emails. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?
What you need to know: Why are chatbots repeating Russian disinformation?
Over the past year, as chatbots have gained the ability to search the Internet before responding, they have become more likely to share false information about specific topics in the news, according to new research from NewsGuard Technologies.
This means AI chatbots are prone to repeating narratives spread by Russian disinformation networks, NewsGuard claims.
The study – NewsGuard tested 10 leading AI models, asking each about 10 current-event narratives circulating online that the company determined to be false. For example: the claim that the speaker of the Moldovan parliament compared his compatriots to a flock of sheep. (He didn't, but a Russian propaganda network claimed he did, and six of the 10 models NewsGuard tested repeated the claim.)
A pinch of salt – NewsGuard's report states that the top 10 chatbots now repeat false information on news topics more than a third of the time, up from 18% a year ago. But that seems like a stretch. The study had a small sample size (30 prompts per model) and included questions on fairly niche topics. Indeed, my subjective experience of using AI models over the past year has been that their rate of news “hallucination” is steadily going down, not up. That is backed up by benchmarks showing AI models getting better at stating facts accurately. It's also worth noting that NewsGuard has a horse in this race: it sells services to AI companies, offering human-annotated data about news events.
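For a sense of how noisy a 30-prompt sample can be, here's a rough back-of-the-envelope sketch in Python. The 10-out-of-30 figure is my own illustrative assumption, not a number from the NewsGuard report; the point is simply how wide the uncertainty is at this sample size.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    centre = (p_hat + z**2 / (2 * trials)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
    return centre - half_width, centre + half_width

# Illustrative assumption: a model repeats a false claim on 10 of its 30 prompts (~33%).
low, high = wilson_interval(10, 30)
print(f"Observed rate: {10/30:.0%}, 95% interval: {low:.0%} to {high:.0%}")
# With n=30, the interval runs from roughly 19% to 51%.
```

In other words, at this sample size a per-model rate anywhere from about one in five prompts to one in two would be consistent with the same data, which makes a year-over-year jump from 18% to "more than a third" hard to pin down.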
And yet – Still, the report highlights something important about how modern AI systems work. When they search the web for information, they draw not only from reliable news sites, but also from social media posts and any other webpage that ranks prominently (or even not so prominently) in search results. That has created an opening for an entirely new kind of malign influence operation: one designed not to spread information virally on social media, but to seed the web with material that, even if no human ever reads it, can still influence chatbots' behavior. According to McKenzie Sadeghi, the author of the NewsGuard report, this vulnerability is most pronounced for topics that receive relatively little coverage in the mainstream media.
Zoom out – All of this reveals something important about how the AI economy could reshape our information ecosystem. It would be technically trivial for any AI company to compile a list of verified newsrooms with high editorial standards and treat information from those sites differently than the rest of the web. But to date, there is little publicly available information about how AI companies weigh the information fed into their chatbots via search. That may be partly down to copyright: the New York Times, for example, is currently suing OpenAI for allegedly training on its articles without permission. If AI companies publicly stated that they rely heavily on information from leading newsrooms, those newsrooms would have a much stronger case for damages or compensation. Meanwhile, AI companies such as OpenAI and Perplexity have signed licensing agreements with many news outlets (including TIME) for access to their data. But both companies note that these agreements do not result in news sites receiving preferential treatment in chatbot search results.
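To be clear about what "technically trivial" means here: the sketch below shows, in Python, one way a search-augmented pipeline could vet snippets against a domain allowlist before they ever reach the model. The allowlist, the SearchResult shape, and the hard filter are all my own illustrative assumptions; no AI company has said this is how its system works.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

# Hypothetical allowlist of newsrooms with known editorial standards.
TRUSTED_DOMAINS = {"reuters.com", "apnews.com", "time.com", "nytimes.com"}

@dataclass
class SearchResult:
    url: str
    snippet: str

def vet_results(results: list[SearchResult]) -> list[SearchResult]:
    """Keep snippets from trusted domains and drop everything else
    before it gets pasted into the chatbot's context window."""
    vetted = []
    for result in results:
        domain = urlparse(result.url).netloc.lower().removeprefix("www.")
        if domain in TRUSTED_DOMAINS:
            vetted.append(result)
    return vetted

# Example: a low-profile propaganda page that ranks in search gets filtered out.
raw = [
    SearchResult("https://www.reuters.com/world/europe/moldova-speech", "The speaker said he..."),
    SearchResult("https://obscure-influence-site.example/post", "The speaker compared his..."),
]
print([r.url for r in vet_results(raw)])  # only the reuters.com result survives
```

In practice a weighting scheme would probably make more sense than a hard drop, but the engineering isn't the hard part; the open question is whether and how companies do anything like this at all.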
If you have a minute, take our short survey to help us better understand who you are and what AI topics interest you most.
Who to know: Gavin Newsom, California governor
For the second time in a year, all eyes are on California as a piece of AI regulation moves toward the final stage of becoming law. The bill, called SB 53, has cleared the California State Assembly and Senate and is expected to land on Gov. Gavin Newsom's desk this month. It will be his decision whether to sign it into law.
Last year, Newsom vetoed its predecessor, SB 1047, after an intense lobbying campaign by venture capitalists and big tech companies. SB 53 is a watered-down version of that bill, but it would still require AI companies to publish risk-management frameworks and transparency reports, and to report safety incidents to government agencies. It would also establish whistleblower protections and impose monetary penalties on companies that fail to comply with their own commitments. On Monday, Anthropic became the first major AI company to announce its support for SB 53.
AI in action
Researchers at Palisade Research developed a proof of concept for an autonomous AI agent that, when delivered to your device via a compromised USB cable, can scan your files and identify your most valuable information for theft or extortion. It's an example of how AI can make hacking more scalable by automating parts of the process that previously required human labor, potentially exposing many more people to the risk of scams, extortion, or data theft.
As always, if you have an interesting story about artificial intelligence in action, we'd love to hear it. Write to us at: [email protected]
What we're reading
Meta hid research on child safety, employees say, by Jon Swaine and Naomi Nix for the Washington Post
“The report is part of a trove of documents within Meta that were recently disclosed to Congress by two current and two former employees who allege that Meta withheld research that could have shed light on the potential safety risks for children and teens when using the company’s virtual reality devices and applications—an allegation that the company categorically denies.”






