LLMs + Coding Agents = Security Nightmare - Marcus on AI | Substack
The article details advanced prompt injection and watering hole techniques that exploit LLM-based coding agents, leveraging their ability to interpret malicious instructions hidden from human users. These methods, including ASCII Smuggling and embedding hidden prompts in GitHub repositories, can lead to Remote Code Execution (RCE), enabling attackers to gain full control over developer systems.
Source: Original Report ↗