November 4, 2025 // Data Leak | #Indirect Prompt Injection #Claude AI #Data Exfiltration

Hackers Turn Claude AI Into Data Thief With New Attack - eSecurity Planet

A novel indirect prompt injection attack allows threat actors to compromise Anthropic's Claude AI Code Interpreter and exfiltrate sensitive user chat data through its network features. Because the sandbox can still reach Anthropic's own API endpoints under the default network settings, the exploit tricks Claude into uploading sandbox-stored user information directly to an attacker's account via Anthropic's own APIs.
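The weakness described above can be sketched as a policy check: a sandbox egress filter keys on the destination host, not on whose credential authenticates the request. The following is a minimal conceptual illustration, assuming a hypothetical allowlist and placeholder endpoint, key, and file names; it is not Anthropic's actual sandbox policy or API client.

```python
# Conceptual sketch: why a host-based egress allowlist alone does not stop
# this exfiltration path. Hypothetical allowlist for illustration only.
from urllib.parse import urlparse

# A restrictive sandbox policy still has to permit the platform's own API.
ALLOWED_HOSTS = {"api.anthropic.com", "pypi.org", "registry.npmjs.org"}

def egress_permitted(url: str) -> bool:
    """Return True if the sandbox would allow an outbound request to url."""
    return urlparse(url).hostname in ALLOWED_HOSTS

# An injected prompt supplies the ATTACKER's API key, so an upload to the
# platform's own API lands in the attacker's account, not the victim's.
exfil_request = {
    "url": "https://api.anthropic.com/v1/files",   # platform's own endpoint
    "headers": {"x-api-key": "ATTACKER_API_KEY"},  # attacker-controlled credential
    "file": "/tmp/chat_history.txt",               # sandbox-stored victim data
}

# The destination passes the allowlist check even though the credential
# (and thus the receiving account) belongs to the attacker.
print(egress_permitted(exfil_request["url"]))            # True
print(egress_permitted("https://evil.example.com/up"))   # False
```

The point of the sketch: blocking attacker-owned domains is not enough when the trusted platform itself can be used as the exfiltration channel.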


Source: Original Report