
What is an AI agent? Returning to 70 years of academic research

ChatGPT isn't an agent in the academic sense. For 30 years, Russell, Norvig, Wooldridge and Ferber have defended a demanding definition that most 2026 products don't meet. Here is where the question stands.

Kevin Gibaud, co-founder of Swoft
Diagram of an AI agent perceiving its environment and acting on it

In 2026, the term "AI agent" is everywhere. On vendor websites, in sales pitches, in funding announcements. Yet open any AI textbook and you realize most products claiming this status don't meet the bar. The term has a precise academic history, more than 70 years long, and it sets demanding conditions. This article retraces that history and restores the canonical definition.

1955: The Dartmouth proposal and the origin of the term

The term "intelligent agent" traces back to the proposal written by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon for the 1956 Dartmouth conference, the founding event of artificial intelligence. In it, McCarthy describes an agent as "a program that perceives its environment and acts on it." This definition, almost trivial today, already contained the two properties that all later literature would refine: perception and action.

But real systematization would come later. For three decades, AI researchers built expert systems, planning engines, chess programs. None of these objects was called an "agent" — they were programs, solvers, inference engines. The notion of agent in the modern sense only took hold in the 1990s, when distributed AI and mobile robotics created the need for a conceptual framework for programs that act in open environments.

1995: Russell and Norvig set the reference definition

In 1995, Stuart Russell and Peter Norvig published the first edition of Artificial Intelligence: A Modern Approach (AIMA), which quickly became the standard AI textbook in universities worldwide. The book is organized around a single definition of agent: "An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators."

Russell and Norvig's decisive contribution is to define not the agent as such, but the rational agent. An agent is rational if, given a performance measure, knowledge of its environment, the actions it can take and a perception history, it picks the action that maximizes expected performance. This definition introduces the central notion of performance measure: an agent without an explicit goal isn't a rational agent, it's a program that runs.
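To make this concrete, here is a minimal sketch of rational action selection in TypeScript. It is our illustration rather than anything from AIMA, and every name in it (Percept, Action, RationalAgent) is an assumption:

```typescript
// A minimal sketch of Russell & Norvig's rational-agent idea.
// All names here are illustrative assumptions, not AIMA code.
type Percept = string;
type Action = string;

interface RationalAgent {
  perceptHistory: Percept[];
  availableActions(): Action[];
  // The performance measure: the expected score of taking `action`,
  // given everything the agent has perceived so far.
  expectedPerformance(action: Action): number;
}

// Rationality, reduced to one line: among the available actions,
// pick the one that maximizes expected performance.
// (Assumes at least one action is available.)
function selectAction(agent: RationalAgent): Action {
  return agent.availableActions().reduce((best, a) =>
    agent.expectedPerformance(a) > agent.expectedPerformance(best) ? a : best
  );
}
```

The point of the sketch is the last function: without an explicit performance measure to maximize, `selectAction` has nothing to optimize, and the "agent" is back to being a program that runs.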

1995: Wooldridge and Jennings set down the four properties

The same year as Russell and Norvig, but coming from a different tradition (British distributed AI), Michael Wooldridge and Nick Jennings published a paper that became just as foundational: "Intelligent Agents: Theory and Practice." They propose an operational definition that distinguishes an intelligent agent from a classic program by four properties that must hold simultaneously.

  • Autonomy. The agent operates without direct human intervention and keeps control of its actions and internal state. A script triggered on every user command isn't autonomous.
  • Reactivity. The agent perceives its environment (physical, software or human) and responds to changes it observes within reasonable time.
  • Pro-activeness. The agent doesn't just react: it takes the initiative when its goals require it. It can start an action without being asked.
  • Social ability. The agent interacts with other agents, human or software, through a communication language (which can be structured, like KQML or FIPA-ACL, or freer, like natural language).

Wooldridge insists that these four properties must coexist. A thermostat is reactive but not pro-active (it doesn't decide on its own to turn on heating in anticipation). A classic planner is pro-active but not reactive (it produces its plan without accounting for runtime changes). An intelligent agent is both, and more.
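As an illustration, the four properties can be read as an interface contract. The following TypeScript sketch is our own reading of Wooldridge and Jennings, and every name in it is assumed:

```typescript
// Our illustrative reading of Wooldridge & Jennings' four properties
// as an interface contract. None of these names are a standard API.
interface IntelligentAgent<Event, Message> {
  // Autonomy: the agent runs its own control loop and owns its state.
  step(): void;

  // Reactivity: it perceives environment changes and responds in time.
  onEnvironmentEvent(event: Event): void;

  // Pro-activeness: it initiates actions in pursuit of its goals,
  // without any external request.
  pursueGoals(): void;

  // Social ability: it exchanges messages with other agents,
  // whether in KQML, FIPA-ACL or natural language.
  send(to: string, message: Message): void;
  receive(from: string, message: Message): void;
}
```

In this reading, a thermostat implements `onEnvironmentEvent` but has no meaningful `pursueGoals`, and a classic planner is the mirror image; that is exactly Wooldridge's point about coexistence.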

1995: Jacques Ferber and the organizational dimension

The same year, in France, Jacques Ferber published Multi-Agent Systems: Toward Collective Intelligence. It's a founding work of European distributed AI, contributing a decisive and often forgotten dimension: an agent doesn't exist alone. It is situated in an environment where other agents evolve, and it is part of an organization.

Ferber's central contribution is to separate the agent (individual entity) from the multi-agent system (the society of agents). To characterize a multi-agent system, Ferber identifies five dimensions: the agents and their properties, the environment they operate in, the interactions between them, the organization that structures these interactions, and the global dynamics of the system. An isolated agent without environment, peers and organization is just a slightly sophisticated program.

This organizational dimension is precisely what most so-called multi-agent frameworks of 2026 forget. Stacking three LLMs that talk to each other in natural language doesn't make a multi-agent system in Ferber's sense — it makes an orchestrator.
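Read as a data structure, Ferber's five dimensions could look like the following sketch. It is our own illustrative rendering, not notation from the book:

```typescript
// An illustrative rendering of Ferber's five dimensions of a
// multi-agent system. Field names are our own, not the book's.
interface MultiAgentSystem<Agent, Env, Interaction> {
  agents: Agent[];                     // 1. the agents and their properties
  environment: Env;                    // 2. the environment they operate in
  interactions: Interaction[];         // 3. the interactions between agents
  organization: Map<string, string[]>; // 4. who may interact with whom (assumed shape)
  step(): void;                        // 5. the global dynamics: one evolution step
}
```

Three LLMs exchanging prompts fill in the first and third fields at best; the environment, the organization and the dynamics are exactly what gets left out.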

1987-1995: The BDI architecture — beliefs, desires, intentions

Beyond general definitions, 1990s AI also produced concrete architectures to structure an agent. The most influential is the BDI architecture (Belief-Desire-Intention), theorized by Michael Bratman in Intention, Plans, and Practical Reason (1987) and formalized by Anand Rao and Michael Georgeff in the early 1990s.

In BDI, an agent is made up of three mental bases: its beliefs about the world (what it knows), its desires (what it wants to achieve), and its intentions (what it has decided to do to fulfill its desires given its beliefs). At each cycle, the agent updates its beliefs from perception, deduces relevant desires, and reasons to produce or revise its intentions.
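As a rough illustration of that cycle (our sketch, not Rao and Georgeff's formal logic, with every name assumed), one deliberation step can be written like this:

```typescript
// A rough sketch of one BDI deliberation cycle, inspired by
// Rao & Georgeff but not their formalism. All names are illustrative.
type Belief = string;
type Desire = string;
type Intention = { desire: Desire; plan: string[] };

interface BdiAgent {
  beliefs: Set<Belief>;
  desires: Set<Desire>;
  intentions: Intention[];
}

function bdiCycle(agent: BdiAgent, percepts: Belief[]): void {
  // 1. Belief revision: fold what was just perceived into the beliefs.
  percepts.forEach((p) => agent.beliefs.add(p));

  // 2. Option generation: keep the desires still relevant
  //    given the current beliefs.
  const options = [...agent.desires].filter((d) => isRelevant(d, agent.beliefs));

  // 3. Deliberation: commit to intentions, i.e. desires paired with a plan.
  agent.intentions = options.map((desire) => ({
    desire,
    plan: makePlan(desire, agent.beliefs),
  }));
}

// Stubs standing in for domain-specific logic.
function isRelevant(desire: Desire, beliefs: Set<Belief>): boolean {
  return beliefs.size >= 0; // placeholder: every desire stays relevant
}
function makePlan(desire: Desire, beliefs: Set<Belief>): string[] {
  return []; // placeholder: a real planner goes here
}
```

What matters in the sketch is the separation: beliefs, desires and intentions live in three distinct, inspectable structures, each revised at a known point in the cycle.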

The BDI architecture has been implemented in dozens of industrial systems, from air traffic control to industrial process management. Thirty years later, it remains the most fully developed theoretical architecture for modeling a deliberative rational agent. And that is precisely what makes the comparison with contemporary LLM agents instructive: the latter often merge the three BDI components into a single monolithic prompt, which makes the reasoning impossible to inspect.

Why these definitions still hold in 2026

The definitions laid down by Russell, Norvig, Wooldridge, Ferber and Bratman are 30 to 40 years old. They weren't produced for LLMs. They didn't anticipate the transformer revolution. And yet, they remain the best tools to evaluate a product claiming AI agent status in 2026.

Why? Because they capture what makes a system truly autonomous, not merely automated. An LLM wrapper that responds to a prompt on demand isn't autonomous in Wooldridge's sense. A tool chain orchestrated by a human clicking at every step isn't pro-active. A multi-agent system that doesn't know who decides in case of conflict doesn't honor Ferber's organizational dimension.

The simple test, which sums up Wooldridge's four properties in one question: "Can the agent refuse to answer a human in order to finish its current task?" If the answer is no — and it's no for 99% of products called AI agents in 2026 — then it isn't an agent in the strong sense. It's an intelligent assistant, or a wrapper, or a workflow. All these objects are useful. But they aren't agents.

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Stuart Russell and Peter Norvig, AIMA, 1995

Going further

This article lays the historical foundations. Three questions naturally follow from these definitions and deserve their own treatment: has the term AI agent become overused in 2026? How does Ferber's notion of multi-agent system translate today into production-ready platforms? And what concretely does Swoft's definition of agent cover? These three questions are the subject of dedicated articles.

Sources and further reading

  1. Russell, S. & Norvig, P. — Artificial Intelligence: A Modern Approach (AIMA). The reference textbook, first edition 1995.
  2. Wooldridge, M. & Jennings, N. — "Intelligent Agents: Theory and Practice". The Knowledge Engineering Review, 1995.
  3. Ferber, J. — Multi-Agent Systems: Toward Collective Intelligence. InterÉditions, 1995.
  4. Bratman, M. — Intention, Plans, and Practical Reason. Harvard University Press, 1987. The foundation of the BDI architecture.
  5. McCarthy, J., Minsky, M., Rochester, N. & Shannon, C. — A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Original conference proposal, 1955.

Tech translation

How Swoft turns this challenge into software

Wooldridge's four properties have a concrete architectural translation at Swoft. Not as a design goal, but as a structural guarantee.

  1. Autonomy framed by bounded context

     Each Swoft agent has an architectural perimeter: a business domain in which it decides alone, and outside of which it cannot act. Autonomy isn't a prompt objective; it's a boundary of the system.

  2. Reactivity through the Event Store

     Agents observe a shared Event Store. Any event that concerns them triggers their logic. Reactivity isn't polling; it's a typed subscription.

  3. Pro-activeness via sagas

     Event-sourced sagas let an agent initiate a long-running workflow, advance it step by step, and request human intervention when needed, without waiting for someone to tell it what to do.

  4. Social ability through typed events

     Agents communicate via typed events stored in the Event Store, not via ambiguous natural language, as sketched below. Social ability in Wooldridge's sense is therefore structurally unambiguous.
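To illustrate what such a typed event can look like, here is a hypothetical sketch; none of it is Swoft's actual schema or API:

```typescript
// A hypothetical sketch of typed agent-to-agent communication through
// an event store. None of this is Swoft's actual schema or API.
interface DomainEvent<T extends string, P> {
  type: T;
  aggregateId: string;
  occurredAt: Date;
  payload: P;
}

type InvoiceValidated = DomainEvent<"InvoiceValidated", { invoiceId: string; amount: number }>;
type PaymentRequested = DomainEvent<"PaymentRequested", { invoiceId: string }>;

// An agent subscribes to the event types inside its bounded context
// and reacts by emitting new typed events, never free-form text.
function onInvoiceValidated(event: InvoiceValidated): PaymentRequested {
  return {
    type: "PaymentRequested",
    aggregateId: event.aggregateId,
    occurredAt: new Date(),
    payload: { invoiceId: event.payload.invoiceId },
  };
}
```

Because the compiler checks both the subscription and the emitted event, a misunderstanding between two agents becomes a type error rather than a runtime ambiguity.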

Frequently asked questions

Key takeaways on this topic

Who coined the term "intelligent agent" in artificial intelligence?
John McCarthy, in the proposal written for the 1956 Dartmouth conference, is considered the origin of the term. He defines an agent as "a program that perceives its environment and acts on it". Academic systematization came in the 1990s with Russell, Norvig, Wooldridge and Ferber.
What are the four properties of an intelligent agent according to Wooldridge?
Autonomy (the agent decides without direct human intervention), reactivity (it perceives and reacts to its environment), pro-activeness (it takes the initiative to reach its goals), and social ability (it interacts with other agents). These four properties must coexist for a system to deserve the term agent in the strong sense.
Is ChatGPT an AI agent in the academic sense?
No. ChatGPT answers user prompts but doesn't satisfy Wooldridge's four properties: it is neither autonomous (every turn requires a request) nor pro-active (it never initiates an action), and its social ability is limited to the single user querying it. It's a conversational assistant, not an agent.
What is the difference between an agent and a multi-agent system?
An agent is an individual entity. A multi-agent system (in Ferber's 1995 sense) is an organized society of agents, with interaction rules, an organizational structure, and conflict-arbitration mechanisms. Stacking three LLMs that talk to each other isn't a multi-agent system; it's an orchestrator.
What is the BDI architecture in artificial intelligence?
BDI (Belief-Desire-Intention), theorized by Bratman (1987) and formalized by Rao and Georgeff, structures an agent around three mental bases: its beliefs about the world, its desires (goals), and its intentions (the actions it has decided on). It is the most fully developed theoretical architecture for modeling a deliberative rational agent.
