Block unsafe prompts targeting your LLM endpoints with Firewall for AI
Cloudflare's Firewall for AI now integrates Llama Guard to moderate content in real time, detecting and blocking unsafe prompts at the network edge before they reach Large Language Models (LLMs). The integration targets risks such as model poisoning, PII disclosure, and harmful content injection, aligning with the OWASP Top 10 for LLM Applications.
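To make the flow concrete, here is a minimal sketch of the pattern the feature automates: classify each incoming prompt with Llama Guard at the edge and reject unsafe ones before proxying to the model. It is written as a Cloudflare Worker purely for illustration; the `AI` binding name, the Llama Guard model ID, and the response shape below are assumptions, since the actual Firewall for AI integration ships as a managed WAF feature rather than hand-written Worker code.

```ts
// A minimal sketch of edge-side prompt screening, written as a Cloudflare
// Worker. The `AI` binding, the model ID, and the response shape are
// assumptions for illustration only.

export interface Env {
  AI: Ai; // Workers AI binding, assumed to be declared in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Clone before reading so the original body can still be forwarded.
    const { prompt } = await request.clone().json<{ prompt: string }>();

    // Classify the prompt with Llama Guard before it reaches the LLM.
    const verdict = await env.AI.run("@cf/meta/llama-guard-3-8b", {
      messages: [{ role: "user", content: prompt }],
    });

    // Llama Guard answers "safe" or "unsafe" plus the violated hazard
    // categories; the exact response field here is an assumption.
    if (String((verdict as any).response ?? "").trim().startsWith("unsafe")) {
      return new Response("Prompt blocked: unsafe content detected", {
        status: 403,
      });
    }

    // Safe prompts pass through to the origin LLM endpoint unchanged.
    return fetch(request);
  },
} satisfies ExportedHandler<Env>;
```

Running the classifier in the same edge request path adds only a single model call of latency, and unsafe prompts never transit to the protected origin at all.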
Source: Original Report