Ask ChatGPT to estimate the carbs in your lunch. Now ask it again. And again. Five hundred times. You’d expect the same answer each time. It’s the same photo, the same model, the same question. But you won’t get the same answer. Not even close — the differences are large enough to matter.
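One plausible mechanism for that spread is temperature sampling: the model produces a probability distribution over answers and the API draws from it, so repeated identical queries yield different outputs. A minimal sketch in plain Python — the "carb estimate" values and their logits are entirely made up for illustration:

```python
import math
import random

def softmax(logits, temperature):
    """Convert logits to a probability distribution, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(values, logits, temperature, rng):
    """Draw one value from the temperature-scaled distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(values, weights=probs, k=1)[0]

# Hypothetical carb estimates (grams) and logits -- purely illustrative.
estimates = [40, 55, 70, 90]
logits = [2.0, 1.5, 1.0, 0.2]

rng = random.Random(0)
# At temperature 1.0, 500 identical "queries" spread across several answers.
answers = [sample(estimates, logits, temperature=1.0, rng=rng) for _ in range(500)]
# As temperature approaches 0, the single most likely answer dominates.
cold = [sample(estimates, logits, temperature=0.05, rng=rng) for _ in range(500)]
```

Setting temperature to 0 (or near it) makes responses far more repeatable, but most chat interfaces run with a nonzero temperature, so the variance above is expected behavior, not a bug.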
Your 40% depends a lot on how you ask the questions and on what domain the questions are in.
Dude, they fail that exam with even worse error rates than I see!
When you can verify it, it’s OFTEN and REGULARLY wrong. It’s stupid to trust it for anything you can’t personally verify.
The designed purpose of LLMs is to respond to human interaction, not to be correct. They are the showoff who pretends he can answer every question. They are the confident drunkard at the bar who will tell you anything that pops into his head. Intelligent, knowledgeable people say “I don’t know” when they don’t know. LLMs don’t do that. Ever. Trouble is, they don’t “know” anything. They’re a chatbot from the bottom up. Chatbot through and through. It’s their fundamental nature.
Yes there was knowledge and deep understanding in their training data. Also, I ate chicken curry for tea. However, I am not a chicken, I do not cluck, I haven’t started eating worms, I cannot produce any chicken, and my poop is not chicken either. My poop smells faintly of curry. So it is with LLMs and the knowledge and understanding in their training data.
They beat any human on that knowledge benchmark, which is completely unrelated to your 40% “test”. Try answering any of the example questions on the main page.
I don’t need a metaphor; I know LLMs hallucinate, lie, and bullshit. That doesn’t invalidate my point.