Poisoning Fast and Slow: LLM Compromise as a Vector for Influence Operations

LLM data poisoning (or the even more grisly-sounding "LLM grooming") refers to methods by which publicly available data sources are manipulated in order to bias LLM outputs. In theory, such poisoning offers an attractive and essentially untraceable means of influence. But what do these attacks look like in practice? And when might they matter?