I have some data science background, and I kinda understand how LLM parameter tuning works and how model generates text.
Simplifying and phrasing my understanding, an LLM works like - Given a prompt: Write a program to check if input is an odd number (converts the prompt to embedding), then the LLM plays a dice game/probability game of: given prompt, then generate a set of new tokens.
Now my question is, how are the current LLM’s are able to parse through a bunch of search results and play the above dice game? Like at times it reads through say 10 URLs and generate results, how are they able to achieve this? What’s the engineering behind generating such huge verbose of texts? Cause I always argue about the theoretical limitations of LLM, but now that these “agents” are able to manage huge verbose of text I dont seem to have a good argument. So what exactly is happening? And what is the limit of AI non theortical limit of AI?
Edit


The LLMs will just predict probabilities for the single next token based on all previous tokens in the context window (it’s own and the ones entered by the user, system prompt or tool calls). The inference engine / runtime decides which token will be selected, usually one with high probably but that’s configurable.
The LLM can also generate (predict) special tokens like “end of imaginary dialogue” to end it’s turn (the runtime will give the user a chance to reply) or to call tools (the runtime will call the tool and add the result to the context window).
The LLM does not really care about if the stuff in the context was put there by a user, the system prompt, a tool or whatnot. It just predicts the next token probabilities. If you configure the runtime accordingly it will happily “play” the role of the user or of a tool (you usually don’t want that).
Some of the tool calls are e.g. web searches etc. and the search results will be added to the context window. The LLM can decide to do more calls for further research, save data in “memory” that can be accessed by later “sessions” or call other tools (new tools pop up daily).
Models tend to get larger context windows with every update (right now it’s usually between 250K - 2M tokens but the model performance usually gets worse with more filled context windows (needle in a hay stack).
To keep the window small agentic tools often “compact” the context window by summarizing it and then starting a new session with the compacted context.
Sometimes a task is split into multiple sessions (agents) that each have their own context window. E.g. one extra session for a long context subtask like analysis of a long document with a specific task and the result is then sent to an orchestrator agent in charge of the big picture.
The fact that everything in the context window regardless of the origin is used to predict the next token is also the reason why it’s so difficult to avoid prompt injection. It all “looks” the same for the LLM and there is no “hard coded” way from excluding anything.
It’s non-deterministic nature is honestly the scariest thing about vibe coding. In it’s early days when I was experimenting with several llms it quickly became apparent that I would spend 10 times as much time cleaning up its code as I would writing it myself because it would just put in completely nonsense code that did nothing.
I have mixed feelings about it. I wouldn’t give code a full production application but I think it’s sometimes helpful if the LLM is able to generate a prototype or scaffold to get a head start. Removes some of the friction of starting a project.
The fully vibe coded stuff I’ve seen so far were usually unmaintainable dumpster fires.
This is the best explanation of prompt injection I’ve seen