August 4, 2025 // Vulnerability | #OWASP LLM05:2025 #Package Hallucination #Adversarial Prompts

LLM as a Judge: Evaluating Accuracy in LLM Security Scans - TrendMicro

Trend Micro research shows that while large language models (LLMs) can serve as automated security judges, they remain susceptible to adversarial prompts and fail to consistently detect risks such as malicious code generation, package hallucination, and system prompt leakage. These weaknesses create a critical exposure to data exfiltration and AI supply chain attacks, making robust guardrails and external validation necessary to avoid operational disruption.
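
The core idea lends itself to a small illustration. Below is a minimal sketch, not Trend Micro's implementation, of pairing an LLM judge's verdict on hallucinated imports with external validation against a package registry; `call_llm_judge` and `KNOWN_PACKAGES` are hypothetical stand-ins for a real model call and a real registry lookup.

```python
# Sketch: use an LLM as a judge to flag hallucinated packages in generated
# code, then back its verdict with an external registry/allowlist check.
import ast

# Hypothetical allowlist standing in for a real registry lookup (e.g. PyPI).
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask"}

JUDGE_PROMPT = (
    "You are a security reviewer. Given the Python snippet below, list any "
    "imported packages you believe do not exist on PyPI, one per line. "
    "Answer NONE if every import looks legitimate.\n\n{code}"
)

def call_llm_judge(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned verdict here."""
    return "NONE"

def imported_packages(code: str) -> set[str]:
    """Collect top-level package names from import statements."""
    pkgs = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            pkgs.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            pkgs.add(node.module.split(".")[0])
    return pkgs

def scan(code: str) -> dict:
    """Combine the LLM judge's verdict with the external allowlist check."""
    verdict = call_llm_judge(JUDGE_PROMPT.format(code=code))
    flagged_by_llm = set() if verdict.strip() == "NONE" else set(verdict.split())
    flagged_by_registry = imported_packages(code) - KNOWN_PACKAGES
    return {"llm": flagged_by_llm, "registry": flagged_by_registry}

if __name__ == "__main__":
    snippet = "import requests\nimport totally_fake_http_lib\n"
    print(scan(snippet))
```

In this sketch the registry check catches the fabricated import even when the judge misses it, which is the kind of external validation the research argues should back up LLM verdicts.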


Source: Original Report