Researchers claim ChatGPT has a whole host of worrying security flaws – here’s what they found


  • Tenable says it has discovered seven rapid deployment flaws in ChatGPT-4o, dubbed the “HackedGPT” attack chain.
  • The vulnerabilities include hidden commands, memory retention, and security bypasses through trusted shells.
  • OpenAI has fixed some issues in GPT-5; others remain, prompting calls for stronger protections

ChatGPT has many security issues that could allow attackers to insert hidden commands, steal sensitive data, and spread misinformation in Artificial Intelligence Toolssay security researchers.

Security experts at Tenable recently tested OpenAI's ChatGPT-4o and found seven vulnerabilities, which they collectively called HackedGPT. These include:

  • Indirectly injecting hints through trusted sites (hiding commands inside public sites that GPT may unknowingly follow when reading content)
  • Indirectly embedding a 0-click hint into the search context (GPT searches the Internet and finds a page with hidden malicious code. Asking questions may unknowingly trick GPT into following instructions)
  • Fast one-click implementation (phishing variant in which the user clicks on a link with hidden GPT commands)
  • Security mechanism bypass (wrapping malicious links in trusted wrappers, tricking GPT to display links to the user)
  • Conversation Injection: (Attackers can use the SearchGPT system to insert hidden instructions that ChatGPT later reads, effectively injecting itself).
  • Hiding malicious content (malicious instructions can be hidden inside markdown code or text)
  • Permanent memory injection (malicious instructions can be placed in saved chats, causing the model to repeat commands and constantly lose data).

Leave a Comment