September 24, 2025 // Vulnerability | #LoRA #Pickle Serialization #Data Poisoning

This Is How Your LLM Gets Compromised - www.trendmicro.com

The article highlights critical supply chain vulnerabilities in Large Language Models (LLMs). It details two primary attack vectors: embedding malicious executable code in model files (e.g., via Python pickle serialization, which runs arbitrary code on deserialization) and distributing malicious Low-Rank Adaptation (LoRA) adapters that introduce backdoors or data exfiltration mechanisms. Further attacks include data poisoning, which can plant hidden backdoors activated by specific triggers, and unauthorized retraining of models to perform malicious actions. All of these pose significant risks to AI system integrity and security.
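The pickle risk mentioned above is easy to demonstrate. The sketch below is generic illustrative Python, not code from the report: any object can define `__reduce__`, and `pickle.loads()` will invoke whatever callable it returns during deserialization. The class name `MaliciousPayload` and the benign `eval` payload are hypothetical stand-ins for a real attack, which would typically invoke `os.system` or similar.

```python
import pickle

class MaliciousPayload:
    """Illustrative object whose mere deserialization executes code."""

    def __reduce__(self):
        # pickle records this (callable, args) pair and calls it on load.
        # A real attacker would return something like (os.system, ("...",)).
        # Here we use a harmless eval so the effect is observable but safe.
        return (eval, ("__import__('os').getcwd()",))

# Serialize the object -- this is what a poisoned model file contains.
blob = pickle.dumps(MaliciousPayload())

# Merely loading the bytes runs the attacker-chosen code; no method of
# the object ever needs to be called explicitly.
result = pickle.loads(blob)
print(result)  # prints the current working directory, proving code ran on load
```

This is why formats that only store tensor data (rather than arbitrary serialized objects) are generally preferred for distributing model weights.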


Source: Original Report ↗