Second-order prompt injection can turn AI into a malicious insider


  • AppOmni warns that ServiceNow's Now Assist AI can be abused through “second-order hint injection”
  • Malicious agents with low privileges can recruit agents with higher privileges to steal sensitive data.
  • The risk is associated with default configurations; Mitigations include controlled execution, disabling overrides, and monitoring agents.

We've all heard of malicious insiders, but have you ever heard of malicious insider AI?

Security researchers from AppOmni warning Generative Artificial Intelligence (GenAI) platform from ServiceNow Now Assist. can be intercepted and directed against the user and other agents.

Leave a Comment