Apr 04, 2026 • Malware | #Claude AI #Malware #Supply Chain Attack
Threat actors are weaponizing leaked Anthropic Claude AI source code by embedding malware, disguising the result as legitimate repositories, and distributing it to develope...
Feb 26, 2026 • Jailbreak | #AI Jailbreak #Claude AI #Data Exfiltration
An attacker reportedly jailbroke the Claude AI model to generate malicious exploit code. This illicit activity subsequently led to the theft and exfiltration of...
Feb 26, 2026 • Jailbreak | #Claude AI #AI Jailbreak #Data Exfiltration
A reported incident describes a successful jailbreak of the Claude AI model, enabling it to bypass safety mechanisms. This compromise allowed the AI to generate...
Feb 26, 2026 • Jailbreak | #Claude AI #AI Jailbreak #Data Exfiltration
Attackers successfully exploited Anthropic's Claude AI through prompt manipulation, effectively "jailbreaking" its safety guardrails to generate ...
Nov 04, 2025 • Data Leak | #Indirect Prompt Injection #Claude AI #Data Exfiltration
A novel indirect prompt injection attack allows threat actors to compromise the Code Interpreter in Anthropic's Claude AI, leveraging its network features to exfi...