- cross-posted to:
- fuck_ai@lemmy.world
- cross-posted to:
- fuck_ai@lemmy.world
Seems like an invitation to me.
Archive link: https://web.archive.org/save/https%3A%2F%2Fwww.anthropic.com%2Fresearch%2Fsmall-samples-poison
It’s hard to please everybody with an answer when you also train off the responses from said answers.
For anyone interested, Computerphile did an episode about sleeper agents in models.
Just so we’re all clear, don’t trust stuff posted from Anthropic “research”. They may be right, they may be wrong, but they definitely have poor methodology with titles designed for media outrage.
Doesn’t that mean that the model overfits?
This really shouldn’t be that surprising.
Language is a chaotic system (in the mathematical sense) where even small changes to the initial conditions can lead to vastly different outcomes. Even subtle variations in tone, cadence, word choice and word order all have a major impact on the way a given sentence is understood, and if any of those things are even slightly off in the training data, you’re bound to get weird results.


