- Tenable says it has discovered seven rapid deployment flaws in ChatGPT-4o, dubbed the “HackedGPT” attack chain.
- The vulnerabilities include hidden commands, memory retention, and security bypasses through trusted shells.
- OpenAI has fixed some issues in GPT-5; others remain, prompting calls for stronger protections
ChatGPT has many security issues that could allow attackers to insert hidden commands, steal sensitive data, and spread misinformation in Artificial Intelligence Toolssay security researchers.
Security experts at Tenable recently tested OpenAI's ChatGPT-4o and found seven vulnerabilities, which they collectively called HackedGPT. These include:
- Indirectly injecting hints through trusted sites (hiding commands inside public sites that GPT may unknowingly follow when reading content)
- Indirectly embedding a 0-click hint into the search context (GPT searches the Internet and finds a page with hidden malicious code. Asking questions may unknowingly trick GPT into following instructions)
- Fast one-click implementation (phishing variant in which the user clicks on a link with hidden GPT commands)
- Security mechanism bypass (wrapping malicious links in trusted wrappers, tricking GPT to display links to the user)
- Conversation Injection: (Attackers can use the SearchGPT system to insert hidden instructions that ChatGPT later reads, effectively injecting itself).
- Hiding malicious content (malicious instructions can be hidden inside markdown code or text)
- Permanent memory injection (malicious instructions can be placed in saved chats, causing the model to repeat commands and constantly lose data).
Calls for increased protection
OpenAI, the company behind ChatGPT, has addressed some of the flaws in its GPT-5 model, but not all of them, leaving millions of people potentially at risk.
Security researchers have been warning about rapid penetration attacks for quite some time.
GoogleGemini appears to be susceptible to a similar issue due to its integration with Gmail, as users can receive emails with hidden prompts (such as those printed in white font on a white background), and if the user asks the tool for anything regarding that email, they can read and act on the hidden prompt.
Although in some cases tool developers may install guardrails, in most cases the user must be vigilant and not fall for these tricks.
“HackedGPT exposes a fundamental weakness in how large language models determine what information to trust,” said Moshe Bernstein, senior research engineer at Tenable.
“Individually, these flaws seem small, but together they form a complete chain of attacks, from injection and evasion to data theft and retention. This shows that AI systems are not just potential targets; they can be turned into attack tools that quietly collect information from everyday chats or browsing.”
Tenable said OpenAI had fixed “some identified vulnerabilities,” adding that “some” remain active in ChatGPT-5, without specifying which ones. As a result, the company advises AI vendors to strengthen their defenses against rapid adoption by ensuring that security mechanisms are working as expected.
The best antivirus for any budget
Follow TechRadar on Google News. And add us as your preferred source to get our expert news, reviews and opinions in your feeds. Be sure to click the “Subscribe” button!
And of course you can also Follow TechRadar on TikTok. for news, reviews, unboxing videos and get regular updates from us on whatsapp too much.






