November 5, 2025 // Vulnerability | #Indirect Prompt Injection #ChatGPT #Data Exfiltration

Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data - The Hacker News

Cybersecurity researchers have disclosed seven new vulnerabilities in OpenAI's GPT-4o and GPT-5 models, enabling indirect prompt injection attacks. These exploits allow attackers to manipulate Large Language Models (LLMs) into unintended actions, specifically to steal personal information from users' memories and chat histories.


Source: Original Report ↗
← Back to Feed