Mar 13, 2026 •
Vulnerability
|
#Prompt Injection
#LLM
#Indirect Prompt Injection
Prompt injection attacks, particularly indirect prompt injection, pose critical enterprise security vulnerabilities by allowing attackers to manipulate Large La...
Read Analysis →
Mar 06, 2026 •
Vulnerability
|
#GitHub Security Lab Taskflow Agent
#LLM
#Authorization Bypass
The GitHub Security Lab Taskflow Agent is an open-source AI-powered framework that leverages Large Language Models (LLMs) and structured taskflows to proactivel...
Read Analysis →
Feb 10, 2026 •
Vulnerability
|
#Augustus
#LLM
#Vulnerability Scanner
The provided article content resulted in a "403 - Forbidden" error, preventing access to specific details regarding exploits or vulnerabilities. Howev...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#LLM
#Private Keys
#Prompt Injection
An LLM-based AI agent, Owockibot, was compromised to disclose its private hot wallet keys, leading to a $2,100 financial loss and its operational shutdown. This...
Read Analysis →
Jan 27, 2026 •
Data Leak
|
#ChatGPT
#Sensitive Data Exposure
#LLM
An acting CISA director uploaded "for official use only" government contracting documents into a public version of ChatGPT, triggering internal securi...
Read Analysis →
Dec 29, 2025 •
Vulnerability
|
#Prompt Injection
#LLM
#CVE-2022-30190
Prompt injection attacks manipulate AI systems, particularly Large Language Models (LLMs), by overriding their intended instructions through malicious input, le...
Read Analysis →
Nov 20, 2025 •
Vulnerability
|
#DeepSeek-R1
#LLM
#Model Alignment Drift
DeepSeek-R1, a large language model, generates significantly more vulnerable code, increasing severe vulnerabilities by nearly 50%, when prompts include politic...
Read Analysis →
Nov 11, 2025 •
Vulnerability
|
#Whisper Leak
#LLM
#Side-Channel Attack
The ‘Whisper Leak’ identifies a novel side-channel vulnerability affecting Large Language Models (LLMs). This attack allows adversaries to infer sensitive u...
Read Analysis →
Nov 07, 2025 •
Vulnerability
|
#Whisper Leak
#Side-channel attack
#LLM
The "Whisper Leak" is a novel side-channel attack that infers language model conversation topics by analyzing network packet sizes and timings, even w...
Read Analysis →
Oct 28, 2025 •
Vulnerability
|
#Prompt Injection
#Agentic AI
#LLM
Prompt injection vulnerabilities enable attackers to embed malicious commands within seemingly innocuous content, leading AI browsers and chatbots to perform un...
Read Analysis →
Oct 21, 2025 •
Vulnerability
|
#Prompt Injection
#LLM
#Agentic Browsers
The article identifies indirect prompt injection vulnerabilities in AI-powered agentic browsers, specifically demonstrating attacks against Perplexity Comet via...
Read Analysis →
Oct 07, 2025 •
Vulnerability
|
#CodeMender
#Vulnerability Remediation
#LLM
Google DeepMind has introduced CodeMender, an AI-powered agent designed to automatically detect, patch, and rewrite vulnerable code to eliminate entire classes ...
Read Analysis →
Sep 29, 2025 •
Social Engineering
|
#LLM
#SVG File
#Credential Harvesting
Threat actors are employing Large Language Models (LLMs) to create sophisticated phishing campaigns, leveraging LLM-generated code to obfuscate malicious payloa...
Read Analysis →
Sep 18, 2025 •
Data Leak
|
#MITRE ATLAS
#LLM
#Adversarial Attacks
AI security incidents are rapidly escalating, primarily impacting organizations through significant data breaches and unauthorized access to AI systems. Notable...
Read Analysis →
Sep 17, 2025 •
Vulnerability
|
#LLM
#Prompt Injection
#Adversarial AI
The Kaspersky article forecasts various technical methods and attack vectors projected to compromise Large Language Models (LLMs) by 2025. It likely details eme...
Read Analysis →
Aug 29, 2025 •
Vulnerability
|
#LLM
#Exploit Generation
#CVE
An AI-powered offensive research system dubbed "Auto Exploit" utilizes Large Language Models (LLMs), CVE advisories, and open-source patches to genera...
Read Analysis →
Aug 04, 2025 •
Vulnerability
|
#Big Sleep
#LLM
#Automated Vulnerability Discovery
Google's LLM-based vulnerability researcher, "Big Sleep," developed by DeepMind and Project Zero, has autonomously identified 20 security flaws a...
Read Analysis →
Jun 23, 2025 •
Jailbreak
|
#Prompt Injection
#LLM
#Context Poisoning
The "Echo Chamber" attack is a sophisticated prompt injection technique that leverages context poisoning and multi-turn reasoning to bypass large lang...
Read Analysis →
Jun 05, 2025 •
Vulnerability
|
#Prompt Injection
#LLM
#Azure Prompt Shields
Prompt injection attacks are identified as the top threat to generative AI, enabling adversaries to manipulate Large Language Models (LLMs) to bypass safety mea...
Read Analysis →
May 28, 2025 •
Vulnerability
|
#LLM
#Prompt Injection
#Code Execution
The article outlines key vulnerabilities in AI agents utilizing Large Language Models (LLMs), including the risk of unauthorized code execution, data exfiltrati...
Read Analysis →