What are AI Agents?

In previous courses, you learned how to use AI tools, write prompts, and understand generative AI. Now we are entering the next level: AI agents, systems that don't just respond but autonomously act, plan, and complete tasks. Welcome to the paradigm shift of 2025/2026.

But what exactly distinguishes an AI agent from a regular chatbot? And why is everyone suddenly talking about "Agentic AI"? In this lesson, you'll get the foundations you need to fully understand this entire topic.

Did you know? The term "agent" comes from computer science and means "an entity that acts." It has been used in AI research since the 1990s. But it wasn't until 2025 that the concept became practical thanks to better language models, tool-use capabilities, and longer context windows. In 2026, AI agents are the dominant topic in the tech industry. Gartner has declared "Agentic AI" a top trend.

Chatbot vs. AI Agent

To understand AI agents, a clear comparison helps. A traditional chatbot, whether rule-based or LLM-powered, works on the principle of prompt in, response out. You ask a question, the model answers. Done. Each interaction stands on its own.

An AI agent goes fundamentally further: it receives a goal, breaks it down into sub-steps, uses tools (APIs, databases, web search, code execution), verifies its results, and self-corrects. It acts autonomously, within defined boundaries.

Traditional Chatbot:

You ask: "What will the weather be like tomorrow in Zurich?" – The chatbot answers based on its training data (which may be outdated) or says: "I don't have access to current weather data."

AI Agent:

You say: "Plan my outdoor day tomorrow in Zurich." – The agent calls the weather API, checks your calendar availability, searches for suitable outdoor activities, considers the weather forecast, and creates a concrete day plan with time slots and alternative suggestions in case of rain.
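The structural difference can be sketched in a few lines of Python. This is a toy illustration, not a real implementation: the weather and calendar functions are stubs standing in for actual APIs, and the "agent" hard-codes a plan an LLM would normally produce.

```python
def chatbot(prompt: str) -> str:
    # One shot: prompt in, response out. No tools, no follow-up steps.
    return f"Answer based only on training data for: {prompt}"

def get_weather(city: str) -> str:
    # Stub standing in for a real weather API call.
    return "sunny, 22 degrees"

def get_free_slots() -> list[str]:
    # Stub standing in for a real calendar API call.
    return ["09:00-12:00", "14:00-18:00"]

def agent(goal: str) -> str:
    # The agent decomposes the goal and calls tools before answering.
    weather = get_weather("Zurich")
    slots = get_free_slots()
    return (f"Goal: {goal}\n"
            f"Forecast: {weather}\n"
            f"Free slots: {', '.join(slots)}\n"
            f"Plan: outdoor activity in the morning slot")
```

The chatbot path is a single function call; the agent path fans out into tool calls whose results shape the final answer.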

The Three Core Characteristics

Every true AI agent possesses three fundamental properties that distinguish it from a simple LLM chat:

1. Autonomy: The agent makes independent decisions about its next steps. You define the goal, not the path to get there. It chooses which tools to use, in what order to proceed, and when it's finished.

2. Goal Orientation: Instead of reacting to individual prompts, the agent pursues an overarching goal. It maintains context across multiple steps and systematically works toward the result.

3. Tool Use: The agent can employ external tools: call APIs, search the internet, execute code, read and write files, query databases. This extends its capabilities far beyond pure text knowledge.
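Tool use in particular has a very simple underlying shape: tools are ordinary functions registered under a name, and the model decides which one to call. In the minimal sketch below, a keyword matcher stands in for the LLM's decision; all tool names and behaviors are invented for illustration.

```python
def web_search(query: str) -> str:
    # Stub: a real implementation would call a search API.
    return f"3 results for '{query}'"

def run_sql(query: str) -> str:
    # Stub: a real implementation would query an actual database.
    return "42 rows returned"

# The tool registry: names the model can choose from, mapped to functions.
TOOLS = {"web_search": web_search, "run_sql": run_sql}

def choose_tool(task: str) -> str:
    # Stand-in for the LLM deciding which tool fits the task.
    return "run_sql" if "database" in task.lower() else "web_search"

def use_tool(task: str) -> str:
    name = choose_tool(task)
    return f"[{name}] {TOOLS[name](task)}"
```

Real frameworks add schemas, argument validation, and error handling around this pattern, but the core idea (a registry of callable tools plus a model that selects among them) stays the same.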

Practical Tip: If you want to evaluate whether a system is a true AI agent, ask these three questions: Can it independently plan multiple steps? Can it use external tools? Can it review and correct its own results? If all three are answered with yes, you're dealing with an agent.

The Perception-Reasoning-Action Loop

AI agents operate in a continuous cycle called the Perception-Reasoning-Action Loop:

Perception: The agent takes in information: your task, the results of its previous actions, error messages, new data from APIs. It "sees" where it stands.

Reasoning: Based on what it perceived, the agent considers: What is my goal? What have I achieved so far? What is the best next step? This is where the LLM's strength comes into play: logical thinking, planning, prioritization.

Action: The agent performs a concrete action: an API call, code execution, a database query, a web search. The result feeds back into perception, and the loop begins again.

This cycle repeats until the goal is achieved or the agent recognizes it cannot proceed and needs human help.
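The loop above can be written down almost literally. In this stripped-down sketch, the "reasoning" step is a placeholder that simply checks whether the goal string appears in the accumulated observations; in a real agent, an LLM call would sit there.

```python
def pra_loop(goal: str, act, max_steps: int = 10) -> list[str]:
    observations: list[str] = []          # Perception: everything seen so far
    for step in range(max_steps):
        # Reasoning (placeholder): is the goal visible in what we perceived?
        if goal in " ".join(observations):
            return observations            # goal achieved, stop the loop
        result = act(step)                 # Action: execute the next step
        observations.append(result)        # result feeds back into perception
    return observations                    # step budget exhausted: hand back to a human
```

Note the `max_steps` cap: without it, a confused agent could loop forever, which is exactly the failure mode the warning at the end of this lesson addresses.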

Types of AI Agents

Not all agents are the same. In research and practice, three main types are distinguished:

Reactive Agents

React directly to inputs without long-term planning. They follow predefined rules: If X, then do Y. Fast and predictable, but not very flexible.

Example: A customer service bot that categorizes incoming emails by keywords and sends pre-written responses.

Strengths: Fast, predictable, easy to debug.

Weaknesses: Cannot handle unexpected situations.
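A reactive agent like the email-routing bot above fits in a handful of lines, which is exactly why it is fast and easy to debug. The keywords and categories here are invented examples.

```python
# Fixed if-X-then-Y rules, checked in order. No planning, no state.
RULES = [
    ("refund", "billing"),
    ("password", "account"),
    ("crash", "technical"),
]

def route_email(text: str) -> str:
    lowered = text.lower()
    for keyword, category in RULES:
        if keyword in lowered:
            return category
    return "general"   # fallback when no rule fires: the inflexibility in action
```

The fallback branch also shows the weakness: anything the rule author did not anticipate lands in "general".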

Deliberative Agents

Plan ahead, create strategies, and adapt their plans based on new information. They use chain-of-thought reasoning to break down complex tasks.

Example: A research agent creating a market analysis: it plans its research strategy, collects data from various sources, analyzes them, and creates a structured report.

Strengths: Can solve complex, multi-step tasks.

Weaknesses: Slower, higher resource consumption, harder to control.

Hybrid Agents

Combine reactive and deliberative elements. For routine tasks they react quickly by rules, for complex situations they switch to planning mode.

Example: Claude Code: for simple code changes it acts quickly and directly; for larger refactorings it plans a multi-step strategy with tests and validation.

Strengths: Flexible, efficient, practical.

Weaknesses: More complex to develop and test.
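The hybrid pattern is essentially a dispatcher in front of the two modes. Here is a toy sketch: the complexity check is a deliberately crude word-count heuristic standing in for a real classifier or an LLM-based triage step.

```python
def reactive(task: str) -> list[str]:
    # Fast path: one direct action, no planning.
    return [f"do: {task}"]

def deliberative(task: str) -> list[str]:
    # Slow path: plan sub-steps first, then validate the outcome.
    steps = [f"plan step {i} for: {task}" for i in range(3)]
    return steps + ["validate results"]

def hybrid(task: str) -> list[str]:
    # Toy heuristic: long task descriptions go to planning mode.
    is_complex = len(task.split()) > 5
    return deliberative(task) if is_complex else reactive(task)
```

The design choice worth noting: the dispatcher keeps the cheap path cheap. Routine tasks never pay the planning overhead, which is what makes hybrid agents practical.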

Real-World Examples (2025/2026)

AI agents are not science fiction; they are in use today:

  • Devin (Cognition): An AI software developer that independently writes code, debugs, creates tests, and opens pull requests. It works in its own development environment and can work autonomously on features for hours.
  • Claude Code (Anthropic): A coding agent that runs directly in the terminal, reads and writes files, executes Git commands, and can implement complex software projects.
  • AutoGPT / BabyAGI: Early open-source agents (2023) that showed what was possible, even though they were still unreliable. They laid the groundwork for today's agent movement.
  • OpenAI Operator: An agent that can control web browsers and complete online tasks, from orders to research.

Example: Imagine asking an AI agent: "Analyze our last 100 customer reviews and create an improvement plan." The agent would: (1) Load the reviews from your database, (2) Perform sentiment analysis, (3) Identify recurring themes, (4) Prioritize themes by frequency and severity, (5) Propose concrete improvement measures, (6) Format everything as a structured report. A chatbot would have said: "Please paste the reviews here."
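The numbered steps of that review-analysis example can be sketched as a toy pipeline. Theme detection here is a keyword lookup, not real sentiment analysis, and the themes themselves are invented for illustration.

```python
from collections import Counter

# Invented keyword-to-theme mapping, standing in for real NLP analysis.
THEMES = {"slow": "performance", "expensive": "pricing", "rude": "support"}

def analyze_reviews(reviews: list[str]) -> str:
    theme_counts: Counter[str] = Counter()
    for review in reviews:                          # steps (2)+(3): scan reviews for themes
        for keyword, theme in THEMES.items():
            if keyword in review.lower():
                theme_counts[theme] += 1
    ranked = theme_counts.most_common()             # step (4): prioritize by frequency
    lines = [f"- {theme}: {count} mentions" for theme, count in ranked]
    return "Improvement plan:\n" + "\n".join(lines) # step (6): structured report
```

Each stage of the pipeline is a tool call in a real agent; the point of the sketch is the end-to-end shape, from raw data to a prioritized report, with no human copy-pasting in between.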

Why the Breakthrough Now?

The concept of AI agents has existed for decades. Why do they only work now? Three factors came together in 2025/2026:

Better Foundation Models: Current LLMs such as GPT-5, Claude 4.5/4.6, and Gemini 3 are intelligent enough to reliably handle multi-step reasoning tasks. Earlier models made too many planning errors.

Tool Use as Standard: All major providers (OpenAI, Anthropic, Google) have integrated native tool-use interfaces into their models. Agents can now reliably use external APIs, databases, and code execution environments. Additionally, the Model Context Protocol (MCP) has established itself as an open standard for connecting AI agents to external tools and data sources, comparable to a "USB-C for AI applications."

Longer Context Windows: From 4,000 tokens (2023) to 200,000+ tokens (2026). Agents can retain more context, process larger files, and keep longer task chains in "memory."

Warning: AI agents are powerful but not infallible. They can get stuck in loops, make wrong decisions, or misuse tools. Never give an agent uncontrolled access to critical systems (production databases, financial systems, customer email). Always set boundaries and implement safety mechanisms; more on this in later lessons.

Check Your Understanding: What fundamentally distinguishes an AI agent from a traditional chatbot?

Answer: The three core characteristics of an AI agent are autonomy (acting independently), goal orientation (multi-step planning), and tool use (APIs, code, databases). Model size or target audience are not the decisive differences: a chatbot only reacts to individual prompts, while an agent pursues a goal across multiple steps.
Key Takeaways:
  • AI agents act autonomously, pursue goals, and use external tools, unlike chatbots that only react to individual prompts.
  • The Perception-Reasoning-Action Loop is the core mechanism: perceive, reason, and act, repeated until the goal is reached.
  • There are three agent types: Reactive (rule-based), Deliberative (planning), and Hybrid (combined).
  • The 2025/2026 breakthrough was enabled by better models, native tool-use interfaces, and larger context windows.
  • AI agents are powerful but need safety boundaries; uncontrolled autonomy is risky.