A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes on the ‘reasoning’ models.

  • MangoCats@feddit.it · 2 days ago

    It’s overhyped in many areas, but it is undeniably improving. The real question is: will it “snowball” by improving itself in a positive feedback loop? If it does, how much snow-covered slope is in front of it to roll down?

    • CileTheSane@lemmy.ca · 1 day ago

      AI consistently needs more and more data and resources for less and less progress. Only 10% of models can consistently answer this basic question, and further improvements keep getting harder to achieve.

      • kescusay@lemmy.world · 2 days ago

        It’s already happening. GPT 5.2 is noticeably worse than previous versions.

        It’s called model collapse.

        • Zos_Kia@jlai.lu · 1 day ago

          To clarify: model collapse is a hypothetical phenomenon that has only been observed in toy models under extreme circumstances. This is not related in any way to what is happening at OpenAI.

          OpenAI made a bunch of choices in their product design which basically boil down to “what if we used a cheaper, dumber model to reply to you once in a while”.
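
          For anyone wondering what the term actually means, here's a toy sketch (a Gaussian fit stands in for the "model"; nothing here resembles anything OpenAI runs): fit the data, sample from the fit, refit on those samples, repeat. The estimated spread tends to decay across generations, and that decay is the collapse.

          ```python
          import numpy as np

          rng = np.random.default_rng(0)
          data = rng.normal(0.0, 1.0, size=50)  # generation 0: "real" data

          for gen in range(51):
              mu, sigma = data.mean(), data.std()  # "train" the toy model
              if gen % 10 == 0:
                  print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")
              # next generation trains only on the previous model's own samples
              data = rng.normal(mu, sigma, size=50)
          ```

          Each refit throws away a little tail information, so the distribution tends to narrow toward a point. Production models aren't trained in this closed loop, which is exactly why the phenomenon stays a lab curiosity.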

            • Zos_Kia@jlai.lu · 10 hours ago

              I’m sorry but no, models are definitely not collapsing. They still have a million issues and are subject to a variety of local optima, but they are not collapsing in any way. It is not known whether this can even happen in large models, and if it can, it would require months of active effort to generate the toxic data and fine-tune models on that data. Nobody is gonna spend that kind of money to shoot themselves in the foot.

          • XLE@piefed.social · 1 day ago

            The funny thing is, in order to hand a query to the dumber model, they first have to run it through a model that selects the appropriate model. This has resulted in new headaches for AI fans.
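
            Roughly the shape of that extra hop, sketched below. Every name, price, and threshold here is hypothetical; this is not any vendor's actual API:

            ```python
            # Hypothetical sketch of the extra routing hop; all names made up.
            from dataclasses import dataclass

            @dataclass
            class Model:
                name: str
                cost_per_query: float  # illustrative dollars

            CHEAP = Model("small-cheap", 0.001)
            EXPENSIVE = Model("large-expensive", 0.02)

            def route(query: str) -> Model:
                # A real router is itself a trained classifier, and this
                # extra inference step is the new headache described above.
                hard_signals = ("prove", "derive", "debug", "step by step")
                looks_hard = len(query) > 200 or any(
                    s in query.lower() for s in hard_signals
                )
                return EXPENSIVE if looks_hard else CHEAP

            def answer(query: str) -> str:
                model = route(query)          # hop 1: the router runs first
                return f"[{model.name}] ..."  # hop 2: the chosen model replies

            print(answer("what's the capital of France?"))     # -> small-cheap
            print(answer("prove that sqrt(2) is irrational"))  # -> large-expensive
            ```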

            • Zos_Kia@jlai.lu · 1 day ago

              Yeah, that’s also something you have to train for. I’m not super aware of the technicals, but model routing is definitely important to the AI companies. I suspect that’s part of why they can pretend that “inference is profitable”, as they are already trying to squeeze it down as much as possible.
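
              Back-of-the-envelope with completely made-up prices, just to show the incentive (p_cheap is an assumed traffic split, not a real figure):

              ```python
              # Completely made-up prices, just to show the incentive to route.
              c_cheap, c_expensive, c_router = 0.001, 0.02, 0.0002
              p_cheap = 0.7  # assumed fraction of traffic sent to the small model

              no_routing = c_expensive
              with_routing = c_router + p_cheap * c_cheap + (1 - p_cheap) * c_expensive

              print(f"per query, no routing:   ${no_routing:.4f}")
              print(f"per query, with routing: ${with_routing:.4f}")  # ~65% cheaper here
              ```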

                • Zos_Kia@jlai.lu · 1 day ago

                  Yeah, I remember that Ed article! I don’t think the technical aspects are relevant to the newer generation of models, but of course any attempt to compress inference costs can have side effects: either response quality degrades from using dumber models, or you incur re-inference costs when the dumb model shits its pants. In fact, re-inference can become super costly, as dumber models tend to get lost in reasoning loops more easily.
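
                  To put toy numbers on that (same made-up prices as above, with an assumed loop_factor for wasted tokens and p_fail for how often the cheap model needs a re-run):

                  ```python
                  # Same made-up prices, now charging for loops and re-runs.
                  c_cheap, c_expensive = 0.001, 0.02
                  loop_factor = 5.0  # assumed token waste from reasoning loops

                  def expected_cost(p_fail: float) -> float:
                      # every query pays the (possibly looping) cheap attempt;
                      # failures additionally pay a full re-run on the big model
                      return loop_factor * c_cheap + p_fail * c_expensive

                  for p_fail in (0.1, 0.5, 0.75, 0.9):
                      print(f"p_fail={p_fail:.2f}: ${expected_cost(p_fail):.4f}"
                            f"  (direct big model: ${c_expensive:.4f})")
                  ```

                  With these numbers the routed path breaks even around a 75% failure rate and costs more than just calling the big model beyond that.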