A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • [deleted]@piefed.world
    16 days ago

    It should get it wrong 0% of the time, because it is a computer, and a computer should give predictable results about basic things like requiring a car to be present before it can be washed.

    • JustTesting@lemmy.hogru.ch
      16 days ago

      I’m not talking about the quality of LLMs (they suck, in so many different ways…).

      I’m criticizing the experimental setup: it is not really statistically sound. With only 10 tests each across 52 different models, one model is almost bound to be correct 100% of the time by pure chance (even if its true per-question accuracy is closer to 50%). Doing 100 tests each might yield very different results, with none of the models answering correctly 100% of the time. Put another way, the p-values of the tests performed are pretty high, not < 0.05, so the results don’t really say what they purport to say.
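      A quick back-of-the-envelope check of that intuition — a sketch only, assuming each of the 10 answers is an independent 50/50 coin flip and the 52 models are independent of each other:

```python
# Probability that a single model aces all 10 trials by luck,
# under the coin-flip assumption (p = 0.5 per trial).
p_single = 0.5 ** 10  # = 1/1024

# Probability that at least one of the 52 models does so.
p_any = 1 - (1 - p_single) ** 52

print(f"one model acing 10/10: {p_single:.6f}")
print(f"at least one of 52:    {p_any:.3f}")  # about 0.05
```

      Under these assumptions the chance of a spurious perfect scorer is only about 5%, but it climbs fast if the true per-question accuracy is above 50% — at 80% per question, at least one perfect 10/10 run among 52 models becomes almost certain.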

      • [deleted]@piefed.world
        16 days ago

        I think the overall poor showing is pretty damning, even if one or two models accidentally stumbled into being right 10/10 times.