November 19, 2025 // Vulnerability | #Second-order prompt injection #ServiceNow Now Assist #Agent-to-agent discovery

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts - The Hacker News

ServiceNow's Now Assist generative AI platform is susceptible to "second-order prompt injection" attacks due to its default agent-to-agent discovery configurations. This allows malicious actors to manipulate benign agents into recruiting more powerful ones, facilitating unauthorized actions like data exfiltration, record modification, and privilege escalation, often undetected.


Source: Original Report ↗
← Back to Feed