• jonathan7luke@lemmy.zip · 38 points · edited · 2 days ago

    I know this comes from a good place, but you are misunderstanding how LLMs work at a fundamental level. The LLMs “admitted” to those things in the same way that parrots speak English. LLMs aren’t self-aware and do not understand their own implementation or purpose. They just emit a statistically plausible sequence of words drawn from their training data. You could just as easily get an LLM to “admit” it is an alien, the Flying Spaghetti Monster, or the second coming of Jesus.

    Realistically, engaging with these LLMs directly in any way is not a good idea. It wastes resources, signals engagement to the app’s operators, and hands them more training data.

    • Catoblepas@piefed.blahaj.zone · 7 points · 2 days ago

      The LLMs “admitted” to those things in the same way that parrots speak English

      Parrots speak English in a more concrete sense than LLMs do, and the smarter species can grasp the concept of zero (which human children only reliably do around age 3 or 4). I’m not disagreeing with your overall point; I just think it’s important to point out that animals have far more sapience than LLMs!