A small number of samples can poison LLMs of any size - Anthropic
Researchers demonstrated that as few as 250 poisoned documents can create a backdoor vulnerability in large language models, irrespective of model size or training data volume.