A red STOP AI protest flyer with details of the meeting is taped to a lamp post on a city street in San Francisco, California, May 20, 2025.
Smith/Gadot Collection/Getty Images
Utah and California have passed laws requiring organizations to disclose when they use AI. More states are considering similar legislation. Proponents say the labels help people who want to avoid AI do so.
“They just want to know,” says Margaret Woolley Busse, executive director of the Utah Department of Commerce, which enforces the state's new laws requiring state-regulated businesses to disclose to customers when they're using AI.
“If that person wants to know whether it's a human or not, they can ask. And the chatbot has to answer.”
California passed a similar chatbot law back in 2019, and this year expanded its disclosure rules to require police departments to reveal when they use AI products to write incident reports.
“I think AI in general, and police AI in particular, really thrives in the shadows and is most successful when people don't know it's being used,” says Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation, which supported the new law. “I think labeling and transparency is really the first step.”
As an example, Guariglia points to San Francisco, which now requires all city departments to publicly report how and when they use AI.
Localized rules like these are what the Trump administration has tried to preempt. White House “AI czar” David Sacks has decried a “state regulatory frenzy that is damaging the startup ecosystem.”
Daniel Castro of the industry-backed think tank the Information Technology and Innovation Foundation says AI transparency can be good for markets and democracy, but it can also slow down innovation.
“You can imagine an electrician who wants to use AI to communicate with their customers… to answer questions about when they are available,” says Castro. If companies have to disclose their use of AI, he says, “it might turn customers off and they might not really want to use it anymore.”
For Kara Quinn, a homeschool teacher in Bremerton, Washington, slowing the spread of AI sounds appealing.
“I think part of the problem is not just the thing itself, but how quickly our lives have changed,” she says. “There may be things I would have bought into if I'd had more time to develop and implement them.”
She's currently changing email addresses because her longtime provider recently started summarizing the contents of her messages using AI.
“Who decided that I can't read what another person wrote? Who decided that this summary is actually what I would think about their email?” says Quinn. “I value my ability to think. I don't want to outsource it.”
Quinn's take on AI caught the attention of her daughter-in-law, Ann-Elise Quinn, a supply chain analyst who lives in Washington, DC. She hosts “salons” for friends and acquaintances who want to discuss the implications of AI, and Kara Quinn's objections to the technology inspired the topic of a recent session.
“How can we opt out, if we want to?” she asks. “Or maybe [people] don't want to opt out, but they want to at least be consulted.”