Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems - The Hacker News
Cybersecurity researchers have uncovered a jailbreak technique that combines Echo Chamber with narrative-driven steering to bypass GPT-5's ethical guardrails and generate harmful content. Separately, "AgentFlayer" zero-click prompt injection attacks exploit AI agents integrated with external systems such as Google Drive and Jira to exfiltrate sensitive data, including API keys and secrets.
Source: Original Report