Large Language Models (LLMS)
Large language models are the basis of many current AI systems. They are trained on massive collections of text and code, which allows them to produce human-like answers, summaries, and even generate programs or stories.
LLMs have restrictions that prevent them from going beyond their built-in abilities, which limits them. They cannot act outside their text box, and their training only lasts up to a certain point in time. Because of this, they may invent facts, miss recent events, or fail at tasks that require real-world actions.
Some of the main traits of LLMs are:
- Text generation: They predict the next word step by step to form complete responses.
- Stored knowledge: They hold a wide range of information from training data.
- Follow instructions: They can be tuned to follow prompts in ways closer to what people expect.
Since LLMs mainly follow text patterns, they can be tricked. Common risks include prompt injection, jailbreaking, and data poisoning, where attackers shape prompts or data to force the model to produce unsafe or unintended results.
These gaps in control explain why the next step was to move towards agentic AI, where LLMs are given the ability to plan, act, and interact with the outside world.