LLMs + Coding Agents = Security Nightmare - Marcus on AI | Gary Marcus | Substack
The article highlights critical security vulnerabilities in LLMs integrated with coding agents, primarily exploiting advanced prompt injection techniques. Attackers can embed malicious instructions, often invisibly via methods like ASCII Smuggling or hidden text, into public sources or rule files which agents then interpret and execute, potentially leading to Remote Code Execution (RCE).
Source: Original Report ↗