🤖 Agents

What Are AI Agents?

How autonomous AI systems perceive, plan, use tools, and act.

Intermediate · 10 min read · February 10, 2026 · ProBotica Editorial

Agents vs Chatbots: A Critical Distinction

A chatbot responds to a message. An AI agent pursues a goal.

This distinction is not semantic. A chatbot is a single-turn or multi-turn language model — it receives text, generates text, and stops. Its effect on the world is limited to the content of its response. An AI agent, by contrast, can search the web, read and write files, execute code, call APIs, send emails, browse websites, and interact with databases. It operates in a loop: observe the world, reason about what to do next, take an action, observe the result of that action, reason again, and continue until the goal is achieved or the task is impossible.

This fundamentally changes the risk profile, the capability ceiling, and the design considerations. A chatbot giving wrong information is annoying. An agent deleting the wrong files, sending the wrong email, or making the wrong API call has immediate real-world consequences. Effective agent design requires explicit consideration of permissions, reversibility, and failure modes.

The ReAct Loop: Reason + Act

The most influential agent architecture is **ReAct** (Reason + Act), introduced by Yao et al. in 2022. The idea is simple: interleave reasoning steps (in natural language) with action steps (tool calls). The LLM thinks through the problem verbosely, decides what tool to invoke, observes the result, thinks again based on that result, and continues.

A concrete example: asked to "summarise the top 5 AI news stories from the last 48 hours," a ReAct agent might:

1. **Think**: I need to find recent AI news. I'll use the web search tool.
2. **Act**: `search("AI news past 48 hours")`
3. **Observe**: [List of search results with titles and URLs]
4. **Think**: I have 8 results. I should read the top 5 articles to get full content.
5. **Act**: `fetch_url("https://example.com/article1")`
6. (Repeat for each article)
7. **Think**: I now have full content for 5 articles. I can synthesise the summary.
8. **Output**: [Synthesised summary]

This pattern allows a single LLM, called repeatedly in a loop, to orchestrate complex multi-step information gathering and synthesis tasks that would be impossible in a single prompt.
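The ReAct loop above can be sketched in a few lines of Python. This is an illustrative skeleton, not a real implementation: `llm()` and `search()` are hypothetical stubs standing in for an actual model call and an actual search tool.

```python
# Minimal ReAct-style loop. llm() and search() are stubs: a real agent
# would call a language model and a web-search API here.

def search(query: str) -> str:
    # Stub tool; a real implementation would hit a search API.
    return f"[results for: {query}]"

TOOLS = {"search": search}

def llm(history: list[str]) -> dict:
    # Stub "model": searches once, then produces a final answer.
    if not any(line.startswith("Observe:") for line in history):
        return {"thought": "I need recent info; I'll search the web.",
                "action": "search", "action_input": "AI news past 48 hours"}
    return {"thought": "I have enough to answer.",
            "final": "Synthesised summary of results."}

def react_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        step = llm(history)                       # Think
        history.append(f"Think: {step['thought']}")
        if "final" in step:                       # Goal achieved
            return step["final"]
        observation = TOOLS[step["action"]](step["action_input"])  # Act
        history.append(f"Observe: {observation}")                  # Observe
    return "Stopped: step budget exhausted."

print(react_agent("summarise the top 5 AI news stories"))
```

Note the `max_steps` cap: bounding the loop is a common safeguard against an agent that never converges on a final answer.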

Note

Chain-of-Thought (CoT) prompting, which encourages models to "think step by step," is the precursor to ReAct. ReAct extends CoT by grounding reasoning in real observations from tool results rather than pure internal reasoning.

Tool Use: The Source of Agent Power

The transformative aspect of AI agents is tool use. When a language model can call external tools, it overcomes its most significant limitations: knowledge cut-off, inability to compute precisely, inability to interact with live systems, and inability to take actions.

Common agent tools include:

**Web search**: Retrieves current information, overcoming training data cut-off. Critical for any task requiring recent events, live prices, or current documentation.

**Code execution (sandboxed Python/JavaScript)**: Allows precise computation, data analysis, chart generation, and algorithmic problem solving. Agents with code execution can write a program to solve a mathematical problem rather than attempting to compute it in natural language.

**File system access**: Read, write, and organise documents, spreadsheets, and data files. Enables automation of document-heavy business workflows.

**API calls**: Integrate with CRM systems, email providers, calendar systems, databases, and third-party services. An agent with Salesforce API access can update deal stages, create contacts, and schedule follow-up tasks.

**Memory systems**: Vector databases allow agents to retrieve relevant context from past interactions or large document corpora — overcoming the context window limitation.
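One common way to wire tools like these into an agent is a registry mapping tool names to callables plus descriptions the model can read. The sketch below is a hedged illustration, assuming a JSON tool-call format of our own invention; the tool names, `dispatch` helper, and stub implementations are not from any particular framework.

```python
# Illustrative tool registry and dispatcher. All names here are our own
# assumptions, not a real framework's API.

import json

def web_search(query: str) -> str:
    return f"[search results for {query!r}]"   # stub

def run_python(code: str) -> str:
    # Stand-in for sandboxed execution. A real deployment must use a proper
    # sandbox; restricting builtins via eval(), as here, is NOT safe on its own.
    return str(eval(code, {"__builtins__": {}}))

TOOL_REGISTRY = {
    "web_search": {"fn": web_search, "description": "Look up current information."},
    "run_python": {"fn": run_python, "description": "Evaluate a Python expression."},
}

def dispatch(tool_call_json: str) -> str:
    """Execute a model-emitted tool call of the form {"tool": ..., "input": ...}."""
    call = json.loads(tool_call_json)
    return TOOL_REGISTRY[call["tool"]]["fn"](call["input"])

print(dispatch('{"tool": "run_python", "input": "2**10"}'))  # → 1024
```

The descriptions matter as much as the functions: they are what the model sees when deciding which tool fits the current step.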

Multi-Agent Systems and Orchestration

Individual agents handle one task. Multi-agent systems handle entire workflows.

In a multi-agent architecture, an **orchestrator agent** receives a high-level goal and decomposes it into sub-tasks, delegating to **specialist agents** with appropriate tools and instructions. A business automation system for processing incoming sales inquiries might include: a routing agent that classifies the inquiry; a CRM agent that looks up the prospect's history; a pricing agent that calculates a quote; an email agent that drafts and sends a personalised proposal; and a scheduling agent that books a follow-up call.

Each specialist can be optimised independently — different models, different tools, different prompts. The orchestrator maintains the overall goal and coordinates handoffs. This architecture enables automation of entire business processes that would require teams of human specialists.
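The orchestrator-plus-specialists pattern can be sketched as plain function dispatch. In this toy version each "agent" is an ordinary function with hardcoded behaviour; in a real system each would be its own LLM with its own tools and prompt. All names here are illustrative.

```python
# Orchestrator sketch for the sales-inquiry example. Specialist "agents"
# are stub functions standing in for independently configured LLM agents.

def routing_agent(inquiry: str) -> str:
    # Stub classifier; a real router would be an LLM call.
    return "sales" if "price" in inquiry.lower() else "support"

def pricing_agent(inquiry: str) -> str:
    return "Quoted: $499/seat/year"   # stub quote calculation

def support_agent(inquiry: str) -> str:
    return "Routed to support queue"

SPECIALISTS = {"sales": pricing_agent, "support": support_agent}

def orchestrator(inquiry: str) -> str:
    # Decompose the goal: classify first, then hand off to the specialist.
    category = routing_agent(inquiry)
    return SPECIALISTS[category](inquiry)

print(orchestrator("What's the price for 20 seats?"))
```

Because the dispatch table is the only coupling point, each specialist can be swapped out (a cheaper model for routing, a tool-heavy one for pricing) without touching the others.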

Current multi-agent frameworks include LangGraph, AutoGen, CrewAI, and Anthropic's own multi-agent patterns via the Claude API. Each makes different trade-offs between control flow flexibility, observability, and ease of implementation.

Warning

Agent safety consideration: agents with real-world tool access require explicit permission boundaries. Before deploying an agent, define clearly: what actions is it allowed to take? What actions require human approval? What happens if it encounters an error? Irreversible actions (sending emails, deleting records, making purchases) should have confirmation gates.
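A confirmation gate for irreversible actions can be as simple as a wrapper that refuses to run a tool without explicit approval. The sketch below is one minimal way to do it; `send_email`, `confirmation_gate`, and the auto-denying approver are all hypothetical.

```python
# Sketch of a confirmation gate: wrap an irreversible tool so it runs only
# after an approval callback says yes. All names here are illustrative.

from typing import Callable

def confirmation_gate(tool: Callable[[str], str],
                      approve: Callable[[str], bool]) -> Callable[[str], str]:
    def gated(arg: str) -> str:
        prompt = f"About to run {tool.__name__}({arg!r}). Proceed?"
        if not approve(prompt):
            return "BLOCKED: human approval denied"
        return tool(arg)
    return gated

def send_email(to: str) -> str:
    return f"email sent to {to}"   # stub irreversible action

# Auto-denying approver for demonstration; a real one would prompt a human.
safe_send = confirmation_gate(send_email, approve=lambda prompt: False)
print(safe_send("prospect@example.com"))   # → BLOCKED: human approval denied
```

The key design point is that the agent never holds a direct reference to `send_email`: it only ever sees the gated version, so the approval check cannot be skipped.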

Key Takeaways

  • An AI agent is a system that perceives input, reasons about goals, plans actions, and executes them — often in a loop.
  • Modern LLM-based agents use a "Reason + Act" (ReAct) loop: think, decide what tool to call, observe result, repeat.
  • Agents are fundamentally different from chatbots: they take actions with real-world consequences.
  • Tool use — web search, code execution, API calls, file systems — is what makes agents transformatively powerful.
  • Multi-agent systems, where specialised agents collaborate, enable automation of entire workflows.