I have some data science background, and I kinda understand how LLM parameter tuning works and how a model generates text.
To phrase my simplified understanding: given a prompt like "Write a program to check if input is an odd number", the LLM converts the prompt into embeddings, then plays a dice/probability game: given the tokens so far, sample the next token from a probability distribution, append it, and repeat.
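A minimal sketch of the dice game I mean, assuming a HuggingFace-style causal LM (gpt2 here is just a stand-in model, and real systems add lots of sampling tricks on top):

```python
# Autoregressive next-token sampling: the "dice game" in ~15 lines.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Write a program to check if input is an odd number"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(40):  # generate up to 40 new tokens
    with torch.no_grad():
        logits = model(input_ids).logits            # [1, seq_len, vocab_size]
    probs = torch.softmax(logits[0, -1], dim=-1)    # distribution over next token
    next_token = torch.multinomial(probs, num_samples=1)  # roll the dice
    input_ids = torch.cat([input_ids, next_token.unsqueeze(0)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```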
Now my question is, how are current LLMs able to parse through a bunch of search results and still play the above dice game? At times one reads through, say, 10 URLs and generates results. How are they able to achieve this? What's the engineering behind consuming and generating such huge volumes of text? I always argue about the theoretical limitations of LLMs, but now that these "agents" are able to manage huge volumes of text, I don't seem to have a good argument. So what exactly is happening? And what is the practical, non-theoretical limit of AI?
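For concreteness, this is roughly the loop I picture when an agent "reads 10 URLs": fetch each page, compress it so everything fits in the context window, then run the dice game over the condensed notes. The `llm()` helper below is hypothetical, just a stand-in for a chat-completion call, not any real framework's API:

```python
# Hypothetical retrieve-summarize-answer loop; llm() is a stub, not a real API.
import requests

def llm(prompt: str) -> str:
    """Stand-in for a call to some chat-completion endpoint."""
    raise NotImplementedError

def answer_from_urls(question: str, urls: list[str]) -> str:
    notes = []
    for url in urls:
        page_text = requests.get(url, timeout=10).text
        # Compress each page so ten of them still fit in one context window.
        summary = llm(f"Summarize what's relevant to {question!r}:\n{page_text[:8000]}")
        notes.append(f"Source {url}:\n{summary}")
    # One final generation pass over a context built from the condensed notes.
    return llm(f"Question: {question}\n\nNotes:\n" + "\n\n".join(notes))
```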


I have mixed feelings about it. I wouldn't trust it to code a full production application, but I think it's sometimes helpful if the LLM generates a prototype or scaffold to give you a head start. It removes some of the friction of starting a project.
The fully vibe-coded projects I've seen so far were usually unmaintainable dumpster fires.