I have some data science background, and I roughly understand how LLM parameter tuning works and how a model generates text.

Simplifying and phrasing my understanding: given a prompt like “Write a program to check if the input is an odd number”, the LLM converts the prompt into embeddings and then plays a dice game/probability game: given the tokens so far, sample the next token, and repeat.
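
In pseudo-Python (tokenize, detokenize, and next_token_probs are just stand-ins for the real model internals, not real functions), my mental model looks something like:

    import random

    # Toy version of the dice game: the model assigns a probability to every
    # candidate next token, and we sample one in proportion, repeatedly.
    def generate(prompt, max_tokens=50):
        tokens = tokenize(prompt)
        for _ in range(max_tokens):
            probs = next_token_probs(tokens)  # dict: token -> probability
            tokens.append(random.choices(list(probs), weights=list(probs.values()))[0])
        return detokenize(tokens)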

Now my question is: how are current LLMs able to parse through a bunch of search results and still play the above dice game? At times they read through, say, 10 URLs and generate results. How do they achieve this? What’s the engineering behind handling such huge volumes of text? I’ve always argued from the theoretical limitations of LLMs, but now that these “agents” are able to manage huge volumes of text I don’t seem to have a good argument. So what exactly is happening? And what is the practical, non-theoretical limit of AI?

  • Mniot@programming.dev · 10 hours ago

    The “agents” and “agentic” stuff works by wrapping the core innovation (the LLM) in layers of simple code and other LLM calls. Let’s try to imagine building a system that can handle a request like “find where I can buy a video card today. Make a table of the sites, the available cards, their prices, and how they compare on a benchmark.” We could solve this if we had some code like:

    import json

    def answer(user_prompt):
        # Ask an LLM to turn the request into concrete search queries.
        search_prompt = llm(f"make a list of google web search terms that will help answer this user's question. present the result in a json list with one item per search. <request>{user_prompt}</request>")
        results_index = []
        for s in json.loads(search_prompt):
            results_index.extend(google_search(s))
        # Fetch every hit and summarize each page in its own small LLM call.
        results = [fetch_url(url) for url in results_index]
        summarized_results = [llm(f"summarize this webpage, fetching info on card prices and benchmark comparisons <page>{r}</page>") for r in results]
        # Answer the original question using only the compact summaries.
        return llm(f"answer the user's original prompt using the following context: <context>{summarized_results}</context> <request>{user_prompt}</request>")
    

    It’s pretty simple code, and LLMs can write that, so we can even have our LLM write the code that tells the system what to do! (I’ve omitted all the work needed to keep things sane: sandboxing, validating the output of the various internal LLMs, and so on.)

    The important thing we’ve done here is that instead of one LLM that gets too much context and stops working well, we’re making a bunch of discrete LLM calls where each one has a limited context. That’s the innovation of all the “agent” stuff. There’s an old Computer Science truism that any problem can be solved by adding another layer of indirection, and this is yet another instance of that.

    I don’t have a good grasp on how to define a “limit” for this. I guess I’d say the limit here is the same: max tokens in the context. It’s just that we can use sub-tasks to help manage context, because everything that happens inside a sub-task doesn’t impact the calling context. To trivialize things: imagine that the max context is one paragraph. We could summarize my post by summarizing each paragraph into one sentence and then summarizing the paragraph made out of those sentences. It won’t be as good as if we could stick everything into the context, but it will be much better than if we tried to stick the whole post into a window that was too small and had it truncated.
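
    For instance, here’s a minimal sketch of that recursive summarization, reusing the hypothetical llm() helper from the code above; the character cap is a crude stand-in for the model’s real token limit:

    # Summarize text that may be far larger than the context window by
    # summarizing chunks, then summarizing the summaries, recursively.
    def summarize(text, max_chars=2000):
        # Small enough to fit? Summarize directly.
        if len(text) <= max_chars:
            return llm(f"summarize this in one sentence: <text>{text}</text>")
        # Otherwise split, summarize each chunk, and recurse on the result.
        chunks = [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
        partial = " ".join(summarize(c, max_chars) for c in chunks)
        return summarize(partial, max_chars)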

    Some tasks will work impressively well with this framework: web pages tend to be a TON of tokens but maybe we’re looking for very limited info in that stack, so spawning a sub-LLM to find the needle and bring it back is extremely effective. OTOH tasks that actually need a ton of context (maybe writing a book/movie/play) will perform poorly because the sub-agent for chapter 1 may describe a loaded gun but not include it in its output summary for the next agent. (But maybe there are more ways of slicing up the task that would allow this to work.)
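
    A minimal sketch of that needle-finding sub-call, again reusing the hypothetical llm() helper: the huge page lives only in the sub-call’s context, and only the short answer flows back to the caller.

    def find_needle(page_text, question):
        # Everything this call reads is discarded afterwards; the caller
        # gets back just the extracted fact (or a miss marker).
        return llm(f"answer this question using only the page below. reply with the bare fact, or NOT_FOUND. <question>{question}</question> <page>{page_text}</page>")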