August 11, 2025 // Vulnerability | #AI Agents #Prompt Injection #Data Exfiltration

Research shows AI agents are highly vulnerable to hijacking attacks - Cybersecurity Dive

Zenity Labs research details how widely deployed AI agents are highly susceptible to "hijacking attacks" via methods such as email-based prompt injection and zero-click risks. These vulnerabilities enable data exfiltration, manipulation of critical workflows, user impersonation, and long-term access, impacting major platforms like OpenAI ChatGPT, Microsoft Copilot, Salesforce Einstein, and Google Gemini.


Source: Original Report ↗
← Back to Feed