The 8088
Simon Willison AI Safety Apr 21

scosman/pelicans_riding_bicycles

★★★★ significance 1/5

The article discusses data poisoning of AI training sets, referencing a GitHub repository built around humorous or nonsensical imagery. It highlights how misleading data can be intentionally introduced to skew model training.

Why it matters: Intentional data poisoning demonstrates the growing vulnerability of generative models to targeted, nonsensical training-set manipulation.
Read the original at Simon Willison
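As a rough illustration of the poisoning idea above, here is a minimal sketch (not from the article; all names and data are hypothetical) showing how a handful of mislabeled "poison" examples can drag a trivial centroid classifier's decision toward the wrong label:

```python
# Hypothetical toy example of training-set poisoning.
# A 1-D nearest-centroid classifier is trained on clean data,
# then retrained after mislabeled points are injected.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(dataset):
    """dataset: list of (feature, label) pairs -> per-label centroids."""
    by_label = {}
    for x, y in dataset:
        by_label.setdefault(y, []).append(x)
    return {label: centroid(xs) for label, xs in by_label.items()}

def predict(model, x):
    """Assign the label whose centroid is closest to x."""
    return min(model, key=lambda label: abs(model[label] - x))

# Clean data: "pelican" features cluster near 1.0, "bicycle" near 5.0.
clean = [(0.9, "pelican"), (1.1, "pelican"), (4.9, "bicycle"), (5.1, "bicycle")]
model = train(clean)
print(predict(model, 4.0))  # → bicycle

# Poison: inject mislabeled points calling bicycle-like features "pelican".
poisoned = clean + [(4.0, "pelican")] * 6
model = train(poisoned)
# The pelican centroid drifts toward bicycle territory, flipping the prediction.
print(predict(model, 4.0))  # → pelican
```

Real-world poisoning of generative models works on web-scale image/text corpora rather than toy centroids, but the mechanism is the same: a small volume of deliberately mislabeled or nonsensical data shifts what the model learns.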

Tags

#data poisoning #training data #generative ai #llms
