In a landmark study, OpenAI researchers argue that large language models will always produce plausible but false outputs, even when trained on error-free data, due to fundamental statistical and computational limits. They break this down into three cases.
The first case is when the model has only seen a handful of samples on a particular topic, so it simply doesn't have enough data to answer reliably; the paper calls this being “epistemically uncertain” (there's a quick sketch of the counting argument after the three cases).
The second is when the model can't actually represent the thing it's being asked to compute; the concept is in some sense too large for it to hold. That's a limit of representational capacity.
The last is plain computational hardness: some problems, like factoring stupidly large numbers, are intractable no matter how the model was trained.
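To make the first case concrete: the paper ties hallucination on one-off, “arbitrary” facts to how many of them show up exactly once in training (the singleton rate). Here's a minimal sketch of that counting idea; the toy corpus, names, and numbers below are invented for illustration, not taken from the paper.

```python
from collections import Counter

# Toy "training corpus" of (entity, fact) pairs. These names and dates are
# made up for illustration; they are not from the paper or any real dataset.
training_facts = [
    ("Ada Lovelace", "born 1815"),
    ("Ada Lovelace", "born 1815"),      # seen twice: learnable
    ("Alan Turing", "born 1912"),
    ("Alan Turing", "born 1912"),
    ("Obscure Person A", "born 1903"),  # singleton: seen exactly once
    ("Obscure Person B", "born 1888"),  # singleton: seen exactly once
]

# Count how many times each entity's fact appears in training.
entity_counts = Counter(entity for entity, _ in training_facts)
singletons = [e for e, count in entity_counts.items() if count == 1]
singleton_rate = len(singletons) / len(entity_counts)

# The paper's statistical argument (roughly): for one-off "arbitrary" facts
# like these, a base model's hallucination rate is lower-bounded by the
# fraction of facts it only ever saw once.
print(f"singleton rate: {singleton_rate:.2f}")  # 0.50 for this toy corpus
```

If half the facts of some type appear only once in training, the argument goes, expect the model to be wrong on roughly that share of them when asked, however clean the data was.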
We’re not going to read this one deeply, because we’re pretty sure this was demonstrated a while ago. Cool for OAI to catch up with the current state of the field, we guess?
It all means something.
Thank you.
They should have hired you to write this part of the article.