Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • kescusay@lemmy.world · 11 hours ago

    Then why are newer versions of the major models performing so poorly? For instance, GPT 5.2 is definitely not an improvement over 4.5. What’s the root cause?

    • Zos_Kia@jlai.lu · 3 hours ago

      The switch you mention (from 4th-gen to 5th-gen GPT) is when they introduced the model router, which created a lot of friction. Basically, the router tries to answer your question with as cheap a model as possible, so most of the time you won't be using the flagship 5.2 but a 5.2-mini or 5.2-tiny, which are seriously dumber. This is done to save money, of course, and the only way to guarantee pure 5.2 usage is to go through the API, where you pay for every token.
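      The routing idea can be sketched as a toy heuristic. To be clear, the model names, markers, and thresholds below are invented for illustration; OpenAI's actual router logic is not public:

      ```python
      # Hypothetical sketch of a cost-saving model router.
      # All model names and thresholds here are made up; the real
      # routing logic inside ChatGPT is not publicly documented.

      def route(prompt: str) -> str:
          """Pick the cheapest model that looks adequate for the prompt."""
          hard_markers = ("prove", "derive", "step by step", "debug", "analyze")
          if len(prompt) > 2000 or any(m in prompt.lower() for m in hard_markers):
              return "flagship-5.2"  # expensive, reserved for hard-looking queries
          if len(prompt) > 200:
              return "5.2-mini"      # mid-tier fallback for longer prompts
          return "5.2-tiny"          # cheapest, used whenever it can get away with it

      print(route("What's the capital of France?"))      # short, easy -> "5.2-tiny"
      print(route("Please debug this stack trace: ..."))  # hard marker -> "flagship-5.2"
      ```

      The point of such a design is that most traffic is short and easy, so routing it to a tiny model cuts costs dramatically, at the price of users sometimes getting a dumber answer than the flagship would have given.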

      There’s also a ton of affect and personal bias involved. Humans are notoriously bad at evaluating others’ intelligence, and this is especially true of chatbots, which try to mimic specific personalities that may or may not mesh well with your own. For example, OpenAI’s signature “salesman & bootlicker” personality is grating to me, and i consistently think the model is stupider than it is. I’ve even done a bit of double-blind evaluation on various cognitive tasks to confirm my impression, but the data really didn’t agree with me. It’s smart, roughly as smart as other models of its generation, but it’s just fucking insufferable. It’s like i see Sam Altman’s shit-eating grin each time i read a word from ChatGPT, and that’s why i stopped using it. That’s a property of me, the human, not GPT, the machine.