I have some data science background, and I roughly understand how LLM parameter tuning works and how a model generates text.

To simplify and phrase my understanding: given a prompt such as "Write a program to check if the input is an odd number", the LLM converts the prompt into token embeddings, then plays a dice/probability game: given everything so far, it samples a new token, appends it, and repeats.
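
A minimal sketch of that dice game in Python; `model` here is a hypothetical function that returns next-token scores (logits), not any real API:

```python
import numpy as np

def generate(model, prompt_tokens, max_new_tokens=100, eos_id=0):
    """Autoregressive decoding: score, roll the dice, append, repeat."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = np.asarray(model(tokens))   # scores over the whole vocabulary
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                 # softmax -> probability distribution
        next_id = np.random.choice(len(probs), p=probs)  # the dice roll
        if next_id == eos_id:                # the model signals it is done
            break
        tokens.append(next_id)               # feed the choice back in, roll again
    return tokens
```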

Now my question is: how are current LLMs able to parse through a bunch of search results while still playing the above dice game? Sometimes one reads through, say, 10 URLs and generates results; how do they achieve this? What's the engineering behind generating such huge volumes of text? I always argue about the theoretical limitations of LLMs, but now that these "agents" can manage huge volumes of text, I don't seem to have a good argument. So what exactly is happening? And what is the non-theoretical limit of AI?

  • brucethemoose@lemmy.world · 13 hours ago

    Others have explained it well: splitting calls up into parallel subjobs, and programmatic prompt engineering.
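
    Roughly, the "read 10 URLs" trick looks like this; `llm()` and `fetch()` are hypothetical stand-ins for whatever model API and HTTP client an agent actually wires in:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def summarize(url):
        page = fetch(url)  # hypothetical helper: returns the page text
        # Programmatic prompt engineering: each document gets wrapped in its own
        # template, so no single call has to fit all ten pages in its context.
        return llm(f"Summarize the key facts on this page:\n\n{page[:8000]}")

    def answer_from_urls(question, urls):
        # Parallel subjobs: one independent LLM call per URL (the "map" step).
        with ThreadPoolExecutor(max_workers=10) as pool:
            notes = list(pool.map(summarize, urls))
        # One final call stitches the short notes together (the "reduce" step).
        return llm(f"Using these notes, answer: {question}\n\n" + "\n\n".join(notes))
    ```

    The model never "sees" all ten pages at once; the orchestration code feeds it bite-sized prompts and glues the dice rolls together.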

    And what is the non-theoretical limit of AI?

    Shrug.

    But practically, transformer models are kind of hitting an "innovation" wall. The big companies aren't taking risks to fix, say, the need for temperature to literally randomize outputs, or splitting instructions/context/output, or self-correction (like an undo token), or on-the-fly adaptation, anything.
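
    To be concrete about the temperature point: temperature just rescales the model's scores before the dice roll, so randomness is baked into the decoding step itself (a toy illustration, not any particular model's numbers):

    ```python
    import numpy as np

    def softmax_with_temperature(logits, t):
        z = np.asarray(logits) / t
        p = np.exp(z - z.max())
        return p / p.sum()

    logits = [2.0, 1.0, 0.5]                 # made-up next-token scores
    for t in (0.1, 1.0, 10.0):
        print(t, softmax_with_temperature(logits, t).round(3))
    # t=0.1  -> [1.0, 0.0, 0.0]        nearly deterministic (argmax)
    # t=1.0  -> [0.629, 0.231, 0.14]   the raw distribution
    # t=10.0 -> [0.362, 0.327, 0.311]  nearly uniform: mostly dice
    ```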

    All this has been explored in research papers, yet they aren’t even trying it at larger scales. They’re simply scaling up what they have, or (in the case of the Chinese labs) focusing on lowering resource usage.

    Basically, corporate LLM development is far more conservative than you've been led to believe, and that's the wall LLMs are smacking into.

    • vaderaj@lemmy.world (OP) · 9 hours ago

      That's been my issue: deep down I know all this LLM-led AI is a bubble. But then the corporations either increase the context window or release something that handles parallel subjobs better three months later, and suddenly this LLM-led AI is the "future" and can perform "agentic" tasks.

      That makes it almost impossible to get people (developer friends, colleagues) to look past the marketing gimmicks.

      • brucethemoose@lemmy.world · 8 hours ago

        I mean, even as-is, it's a very useful tool, especially as the capabilities we have get exponentially cheaper.

        What people don't get is that AI is about to become a race to the bottom, not to the top. It's a utility for sifting through millions of documents, running simple bots, powering work assistants, acting as a makeshift translator, whatever; you know, old-school language modeling. And that's really neat as the cost approaches "basically free."

        Basically, imagine running Claude Code on your iPhone, with Claude Code itself not really changing all that much. Imagine the economic implications for the big AI houses.

        As for the marketing, I want some of what those tech execs are smoking.