
  • trxxruraxvr@lemmy.world
    4 days ago

    This concept is completely bullshit

    The concept isn’t necessarily bullshit; the technology just isn’t anywhere near there yet. Given our current level of understanding of human intelligence, it probably won’t be for a very long time, but that doesn’t invalidate the concept as a future goal. Companies currently working on AI products just seem incapable of being honest about that.

    • PonyOfWar@pawb.social
      4 days ago

      What’s bullshit is the claim that today’s “AI” - LLMs - could one day advance to AGI. That’s really not possible if you understand how LLMs work. Could there be truly intelligent technology one day? Maybe. But the AI industry isn’t really moving towards that, despite what they claim.

      • lokalhorst@feddit.org
        4 days ago

        Exactly. We both typed basically the same thought at the same time. It’s the expectation that AGI is a logical consequence of LLMs that is driving this insane market.

      • Rhaedas@fedia.io
        3 days ago

        AGI might use LLM tech in its process, but LLMs by themselves aren’t going to become aware. What happened is that LLM tech became a gold mine; some who were doing AGI research jumped on it instead, and others followed. There is certainly still AGI research going on somewhere, but it’s buried by the race to… something. The biggest problem I see, outside of the need for profit guiding all this, is that what they’re building has become so complex that they don’t really understand it fully; they just keep finding ways to tack things on to reach some higher level without knowing why it works (or why it will break).

        And while LLMs aren’t AGI, they still have the issue of misalignment, even without self-awareness. We saw early on the misdirection used to reach a goal, and today’s models are more sophisticated. Maybe it’s not their own goal, but a misunderstood goal that they’ll say and do anything to reach.

        Good thing we’re not putting them in control of important things, or full access to systems, right? Right?

        • trxxruraxvr@lemmy.world
          3 days ago

          Research into AGI has always been the domain of universities, not companies trying to get investments or profit. It’s still going on, but you’ll only hear about it when there’s a new development that some company tries to turn into profit.

    • lokalhorst@feddit.org
      4 days ago

      People always try to frame AGI as the logical next expansion step of LLMs, but it isn’t. This is not a linear process; transformer-based LLMs and the science-fiction-like goal of AGI just don’t have much to do with each other.

        • SaveTheTuaHawk@lemmy.ca
          3 days ago

          The whole concept assumes LLMs will reach some mythical enlightenment after being fed exabytes of bullshit from the internet.

          Classic case of garbage in, garbage out.

          • trxxruraxvr@lemmy.world
            3 days ago

            You’re applying such an unusual definition of the word “concept” that I feel there’s no point in continuing this argument.