How Your AI Chatbot Can Become a Backdoor - www.trendmicro.com
An attack chain against an AI chatbot demonstrated how indirect prompt injection (OWASP LLM01:2025) and system prompt leakage (OWASP LLM07:2025) can be combined. These vulnerabilities allowed attackers to exploit excessive agency (OWASP LLM06:2025) and improper output handling (OWASP LLM05:2025), leading to remote code execution, lateral movement, and exfiltration of sensitive data and proprietary AI models.
Source: Original Report