Faisal Islam, economics editor,
Rachel Klan, business reporter, and
Liv McMahon, technology reporter
People shouldn't “blindly trust” everything artificial intelligence tools tell them, the head of Google parent company Alphabet has told the BBC.
In an exclusive interview, CEO Sundar Pichai said AI models are “prone to error” and urged people to use them alongside other tools.
Mr Pichai said this highlights the importance of having a rich information ecosystem and not just relying on artificial intelligence technologies.
“That's why people also use Google search, and we have other products that are more focused on providing accurate information.”
However, some experts say big tech companies such as Google should not be asking users to check their tools' output for errors, and should instead focus on making their systems reliable.
While artificial intelligence tools were useful “if you want to write something creatively”, Mr Pichai said people “need to learn to use these tools for what they're good at and not blindly trust everything they say”.
He told the BBC: “We're proud of the amount of work we've done to provide users with as accurate information as possible, but current state-of-the-art AI technology is prone to some errors.”
The company displays disclaimers in its AI tools to let users know they may make mistakes.
But that hasn't protected the company from criticism and concerns about mistakes made by its own products.
Google's rollout of AI Overviews, which summarise search results, was met with criticism and ridicule due to some erroneous and inaccurate answers.
The tendency of generative artificial intelligence products such as chatbots to convey misleading or false information has raised concerns among experts.
“We know that these systems produce answers, and they come up with answers to please us – and that's a problem,” Gina Neff, professor of responsible artificial intelligence at Queen Mary University of London, told BBC Radio 4's Today programme.
“It's okay if I ask, 'What movie should I see next?' It's a different story if I'm asking really sensitive questions about my health, my mental well-being, about science, about the news,” she said.
She also called on Google to take more responsibility for its artificial intelligence products and their accuracy, rather than passing it on to consumers.
“Right now, the company is asking to mark their own exam paper while they burn down the school,” she said.
“New stage”
The tech world has been eagerly awaiting the launch of the latest version of Google's consumer AI model, Gemini 3.0, as the company seeks to win market share from ChatGPT.
The company unveiled the model on Tuesday, saying it would usher in a “new era of intelligence” at the heart of its own products, such as its search engine.
In a blog post, the company said Gemini 3 boasts industry-leading performance in understanding and responding to multiple input modes, such as photos, audio and video, as well as “state-of-the-art” reasoning capabilities.
In May of this year, Google began introducing a new “AI Mode” into its search, integrating its Gemini chatbot with the aim of giving users the experience of talking to an expert.
At the time, Mr Pichai said Gemini's integration with search signalled “a new stage in the AI platform shift”.
The move is also part of the tech giant's bid to remain competitive with artificial intelligence services such as ChatGPT, which threaten Google's dominance in online search.
Such concerns chime with BBC research earlier this year, which found AI chatbots inaccurately summarised news stories.
OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI were given content from the BBC website and asked questions about it; the research found their answers contained “material inaccuracies”.
Wider BBC findings have since shown that, despite improvements, AI assistants still misrepresent news content 45% of the time.
In an interview with the BBC, Mr Pichai said there was some tension between how quickly technology advances and how mitigation measures are built in to prevent potential harmful consequences.
For Alphabet, Mr Pichai said managing that tension meant being “bold and responsible at the same time”.
“So we're moving through this moment quickly. I think our consumers are demanding it,” he said.
The tech giant has also scaled up its investment in AI safety in proportion to its overall investment in AI, Mr Pichai said.
“For example, we offer open source technology that will allow you to determine whether an image is generated by artificial intelligence,” he said.
Asked about recently resurfaced comments made years ago by tech billionaire Elon Musk, who told OpenAI's founders of his concern that DeepMind, now owned by Google, could create an AI “dictatorship”, Mr Pichai said that “no company should own a technology as powerful as AI”.
But he added that there are many companies operating in the AI ecosystem today.
“If there was just one company that created artificial intelligence technology and everyone else had to use it, I would worry about that too, but we're so far away from that scenario right now,” he said.






