OpenAI admits prompt injection attacks can’t be fully patched in AI systems

NEWNow you can listen to Fox News articles!

Cybercriminals no longer always need malware or exploits to compromise systems. Sometimes they just need the right words in the right place. OpenAI now openly acknowledges this reality. The company claims rapid injection attacks on Browsers powered by artificial intelligence (AI) is not a bug that can be completely corrected, but a long-term risk associated with allowing AI agents to roam the open network. This raises vexing questions about how secure these tools really are, especially as they gain greater autonomy and access to your data.

Subscribe to my FREE CyberGuy Report

Get my best tech tips, breaking security alerts, and exclusive offers straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY

AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions that attackers can sneak into pages or documents. (Kurt “CyberGuy” Knutsson)

Why quick injections don't go away

In a recent blog post, OpenAI admitted that fast injection attacks are unlikely to ever be completely eliminated. Live implementation works by hiding instructions inside web pages, documents, or emails in a way that humans don't notice, but AI agents do. Once the AI ​​reads this content, it can be tricked into following malicious instructions.

OpenAI compared the problem to fraud and social engineering. You can reduce them, but you cannot make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more AI can do on your behalf, the more damage it can cause if something goes wrong.

OpenAI launched the ChatGPT Atlas browser in October, and security researchers immediately began testing its limitations. Within hours, demos appeared showing that a few carefully placed words in a Google Doc could affect browser behavior. That same day, Brave published its own warning explaining that indirect injection of hints is a structural problem for AI-powered browsers, including tools like Perplexity's Comet.

This is not just a problem for OpenAI. Earlier this month, the UK's National Cyber ​​Security Center warned that rapid deployment attacks against generative AI systems could never be fully mitigated.

FAKE AI CHAT RESULTS SPREAD DANGEROUS MAC SOFTWARE

ChatGPT Atlas screen in the classroom

Rapid injection attacks exploit trust at scale, allowing malicious instructions to influence the actions of an AI agent even without the user seeing it. (Kurt “CyberGuy” Knutsson)

Trade-off risk with AI browsers

OpenAI says it views rapid adoption as a long-term security issue that requires ongoing pressure rather than a one-time solution. Its approach is based on faster patch cycles, continuous testing and layered security. This generally puts it in line with competitors such as Anthropic and Google, which argue that agent systems require architectural controls and constant stress testing.

OpenAI takes a different approach, using what it calls an “LLM-based automated attacker.” Simply put, OpenAI trained AI will act like a hacker. Using reinforcement learning, this malicious bot looks for ways to inject malicious instructions into the AI ​​agent's workflow.

The bot first carries out attacks in a simulation. It predicts how the target AI will reason, what steps it will take, and where it might fail. Based on this feedback, it refines the attack and tries again. Because the system has insight into the AI's internal decision-making process, OpenAI believes it can identify weaknesses faster than real attackers.

Even with this protection, AI-powered browsers are not secure. They combine two things that attackers like: autonomy and access. Unlike regular browsers, they don't just display information, but also read emails, scan documents, click links, and perform actions on your behalf. This means that one malicious hint hidden on a web page, document or message can influence the AI's actions without you even seeing it. Even with security measures in place, these agents operate by trusting content on a large scale, and this trust can be manipulated.

THIRD PARTY VIOLATION REVEALS CHATGPT ACCOUNT DETAILS

A man in a sweatshirt works on several computer screens displaying digital data in a dark room.

As AI-powered browsers gain more autonomy and access to personal data, limiting permissions and maintaining human control becomes critical to security. (Kurt “CyberGuy” Knutsson)

7 Steps You Can Take to Reduce Risk with AI Browsers

You may not be able to eliminate rapid penetration attacks, but you can significantly limit their impact by changing the way you use AI tools.

1) Restrict access to the AI ​​browser

Give the AI ​​browser access to only what it absolutely needs. Don't connect your primary email account, cloud storage, or payment methods unless there is a clear reason. The more data AI can see, the more valuable it becomes to attackers. Restricting access reduces the blast radius if something goes wrong.

2) Require confirmation for every sensitive action.

Never allow an AI browser to send emails, make purchases, or change account settings without asking you first. Confirmation breaks long chains of attacks and gives you the opportunity to detect suspicious behavior. Many rapid deployment attacks rely on AI operating silently in the background without user control.

3) Use a password manager for all accounts.

Password manager ensures that each account has a unique and strong password. If an AI browser or malicious page leaks one credential, attackers won't be able to reuse it elsewhere. Many password managers also refuse to autofill on unfamiliar or suspicious sites, which can alert you that something is wrong before you enter anything manually.

Next, check to see if your email has been compromised in past hacks. Our #1 password manager (see. Cyberguy.com) Pick includes a built-in breach scanner that checks to see if your email address or passwords have been involved in known breaches. If you find a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best password managers of 2025, reviewed by experts, at Cyberguy.com

4) Install powerful antivirus software on your device.

Even if the attack starts inside the browser, antivirus software can still detect suspicious scripts, unauthorized system changes, or malicious network activity. Powerful antivirus software focuses on behavior, not just files, which is critical when fighting AI-based or scripted attacks.

The best way to protect yourself from malicious links that install malware and potentially access your personal information is to install powerful antivirus software on all your devices. This protection can also alert you to phishing emails and ransomware, keeping your personal information and digital assets safe.

Get my picks for 2025's top antivirus protection winners for your Windows, Mac, Android, and iOS devices at Cyberguy.com

5) Avoid broad or open-ended instructions.

Telling the AI ​​browser to “handle whatever is necessary” gives attackers the opportunity to manipulate it with hidden hints. Be specific about what the AI ​​is allowed to do and what it should never do. Narrow instructions make it difficult for malicious content to influence the agent.

6) Be careful with AI summaries and automatic scans.

When an AI browser scans emails, documents, or web pages for you, be aware that hidden instructions may be contained within that content. Treat AI-generated actions as blueprints or suggestions rather than final solutions. Review everything the AI ​​plans to do before you approve it.

7) Keep your browser, AI tools, and operating system updated.

Security patches for AI-powered browsers are rapidly evolving as new attack methods emerge. Delayed updates leave known vulnerabilities open longer than necessary. Enabling automatic updates ensures that you get protection as soon as they become available, even if you miss an announcement.

CLICK HERE TO DOWNLOAD THE FOX NEWS APP

Kurt's Key Takeaway

There is a rapid growth in the number of browsers with artificial intelligence. We're now seeing them from major tech companies, including OpenAI's Atlas, Browser Company's Dia, and Perplexity's Comet. Even existing browsers like Chrome and Edge are working hard to add AI and agent features to their current infrastructure. While these browsers can be useful, the technology is still in its early stages. It’s better not to give in to the hype and wait until it matures.

Do you think AI browsers today are worth the risk, or are they advancing faster than security can keep up? Let us know by writing to us at Cyberguy.com

Subscribe to my FREE CyberGuy Report

Get my best tech tips, breaking security alerts, and exclusive offers straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

Copyright CyberGuy.com 2025. All rights reserved.

Leave a Comment