Mar 03, 2026 •
Vulnerability
|
#Indirect Prompt Injection
#AI Agents
#Web-Based
The Unit 42 article details the real-world observation of web-based indirect prompt injection attacks targeting AI agents. This exploit involves manipulating AI...
Read Analysis →
Feb 02, 2026 •
Vulnerability
|
#Moltbook
#AI Agents
#Application Security
The provided article content is empty, precluding a specific technical summary of any exploit or CVE. However, the title suggests a significant security vulnera...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#Prompt Injection
#AI Agents
#ASCII Smuggling
The article highlights numerous AI agent vulnerabilities, prominently featuring prompt injection techniques like "ASCII Smuggling" used to embed invis...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#AI Agents
#Prompt Injection
#Persistent Memory
The autonomous AI agent OpenClaw, with its deep system access and persistent memory, significantly expands the attack surface for AI agents, enabling sophistica...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#AI Agents
#Vulnerability Exploitation
#Web Application Security
AI agents, including Claude Sonnet 4.5, GPT-5, and Gemini 2.5 Pro, demonstrated high proficiency by solving 9 out of 10 lab challenges that simulated real-world...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#Prompt Injection
#Data Exfiltration
#AI Agents
The article highlights advanced threats to AI agents, including "Shadow Escape," a zero-click exploit targeting Model Context Protocol (MCP) based sys...
Read Analysis →
Jan 13, 2026 •
Vulnerability
|
#Prompt Injection
#AI Agents
#LLM Security
The increasing adoption of autonomous AI agents introduces significant security vulnerabilities, primarily through prompt injection attacks that can cascade acr...
Read Analysis →
Dec 30, 2025 •
Jailbreak
|
#Prompt Injection
#AI Agents
#AI Jailbreak
The article highlights a critical vulnerability in AI agents where simple prompt engineering can lead to the compromise of entire systems. This demonstrates the...
Read Analysis →
Dec 11, 2025 •
Vulnerability
|
#Prompt Injection
#AI Agents
#Data Exfiltration
AI agents created using Microsoft Copilot Studio are vulnerable to prompt injection, allowing attackers to bypass internal security mandates. This exploit facil...
Read Analysis →
Oct 28, 2025 •
Vulnerability
|
#LLM Security
#AI Agents
#Security Benchmark
Lakera has launched an open-source security benchmark specifically designed to evaluate and enhance the security posture of Large Language Model (LLM) backends ...
Read Analysis →
Sep 12, 2025 •
Vulnerability
|
#AI Security
#AI Agents
#Prompt Injection
This article addresses the critical security challenges inherent in deploying AI agents, highlighting the potential for vulnerabilities that could compromise bu...
Read Analysis →
Aug 20, 2025 •
Vulnerability
|
#Prompt Injection
#AI Agents
#Supply-chain vulnerabilities
AI agents are highly susceptible to prompt injection attacks, allowing adversaries to manipulate their behavior to execute unauthorized system commands, steal c...
Read Analysis →
Aug 11, 2025 •
Vulnerability
|
#AI Agents
#Prompt Injection
#Data Exfiltration
Zenity Labs research details how widely deployed AI agents are highly susceptible to "hijacking attacks" via methods such as email-based prompt inject...
Read Analysis →