
What Swoft means by "AI agent": Wooldridge plus architectural constraints

The Swoft definition strictly applies Wooldridge's four properties and adds three architectural constraints that make the agent reliable in production: bounded scope, native traceability, and organizational identity.

Kevin Gibaud, co-founder of Swoft
[Image: architecture of an AI agent with bounded scope and traceability]

The four properties laid down by Michael Wooldridge in 1995 (autonomy, reactivity, pro-activeness, sociability) remain the best framework for evaluating an AI agent. But they are not enough for production. An agent that satisfies Wooldridge can still hallucinate, act outside its perimeter, take untraceable decisions, or create conflicts with other agents that no one can arbitrate. The Swoft definition adds three structural constraints that turn the Wooldridge agent into a production-ready component.

The Swoft definition

A Swoft AI agent is a program that satisfies Wooldridge's four properties within a perimeter bounded by an explicit business domain, whose every decision is traced as an immutable event, and which has an organizational identity recognized by the system. This definition combines academic rigour with three requirements that production in critical environments imposes.

The four Wooldridge properties applied

Autonomy

The Swoft agent decides within its perimeter without asking a human at every turn. When a business event reaches it, it evaluates, reasons and acts. The human steps in only if the agent escalates, and escalating is itself a decision by the agent, not an obligation. This autonomy is measurable: you can trace how many decisions an agent takes alone versus how many it escalates.
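That measurability can be sketched in a few lines. The function and labels below are hypothetical, not Swoft's API: the point is only that an autonomy rate falls out of a decision log for free.

```python
from collections import Counter

def autonomy_rate(decisions):
    """Share of decisions an agent took alone versus escalated to a human.

    `decisions` is a list of outcome labels, e.g. "decided" or "escalated".
    """
    counts = Counter(decisions)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return counts["decided"] / total

log = ["decided", "decided", "escalated", "decided"]
print(autonomy_rate(log))  # 0.75
```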

Reactivity

The Swoft agent observes the system's shared Event Store. Any event emitted by another component, another agent, a human or an external system that matches its subscription triggers its evaluation. No polling, no periodic querying. Reactivity is carried by the architecture, not by a polling loop.
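A minimal in-memory sketch of this push model, with illustrative event names (not Swoft's actual store): subscribers are notified at write time, so no component ever polls.

```python
class EventStore:
    """Minimal in-memory event store with push-based subscriptions (sketch)."""

    def __init__(self):
        self._subscribers = []  # (event_type, callback) pairs
        self._events = []

    def subscribe(self, event_type, callback):
        self._subscribers.append((event_type, callback))

    def append(self, event_type, payload):
        self._events.append((event_type, payload))
        # Push on write: matching subscribers are triggered immediately,
        # so reactivity comes from the architecture, not a polling loop.
        for etype, callback in self._subscribers:
            if etype == event_type:
                callback(payload)

triggered = []
store = EventStore()
store.subscribe("LoanApplicationReceived", triggered.append)
store.append("LoanApplicationReceived", {"amount": 10_000})
store.append("UnrelatedEvent", {})
print(triggered)  # [{'amount': 10000}]
```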

Pro-activeness

The agent can initiate a saga, a long-running workflow that spans several steps, several agents, sometimes several days. It decides to start, picks the moment, and advances the saga event by event. If along the way a step requires human validation, it explicitly escalates. That is pro-activeness in Wooldridge's sense: the agent takes initiative when its goal demands.
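A saga of that shape can be sketched as an ordered walk through steps that pauses on explicit escalation. Step names and event labels here are illustrative assumptions, not Swoft's schema.

```python
def run_saga(steps, needs_human):
    """Advance a saga step by step, escalating explicitly when required.

    `steps` is the ordered list of step names; `needs_human` is the set of
    steps that require human validation. Returns the emitted event log.
    """
    events = [("SagaStarted", None)]  # the agent takes the initiative
    for step in steps:
        if step in needs_human:
            events.append(("EscalatedToHuman", step))
            return events  # the saga pauses until a human decides
        events.append(("StepCompleted", step))
    events.append(("SagaCompleted", None))
    return events

log = run_saga(["check_kyc", "approve_credit"], needs_human={"approve_credit"})
print(log[-1])  # ('EscalatedToHuman', 'approve_credit')
```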

Sociability

Swoft agents communicate with each other exclusively via typed events stored in the Event Store. No natural-language messages passed from one LLM to another — that is precisely what makes contemporary frameworks fragile. Each inter-agent message is a structured object with a defined schema, validated on write. Social ability is therefore structurally unambiguous.
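The shape of such a typed message can be sketched with a frozen dataclass whose fields are validated at construction, i.e. on write. The event name and fields are hypothetical examples, not Swoft's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CreditDecisionRequested:
    """A typed inter-agent message: structured fields, validated on write."""
    application_id: str
    amount_eur: int

    def __post_init__(self):
        # Schema validation at write time: a malformed event is rejected,
        # never stored, never interpreted downstream.
        if not self.application_id:
            raise ValueError("application_id is required")
        if self.amount_eur <= 0:
            raise ValueError("amount_eur must be positive")

event = CreditDecisionRequested(application_id="APP-42", amount_eur=10_000)
```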

The three additional Swoft constraints

Constraint 1: scope bounded by bounded context

Each Swoft agent is attached to a bounded context, a precise business domain. The agent in charge of compliance cannot, structurally, modify the credit domain. Not because it has been forbidden in its prompt, but because its architectural perimeter does not contain the credit-domain commands. This boundary is a compile-time constraint, not a runtime rule.
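The idea can be illustrated in miniature: the compliance agent's type simply does not expose credit-domain commands, so calling one is a missing method, not a broken rule. This class and its methods are a hypothetical sketch, not Swoft code.

```python
class ComplianceAgent:
    """An agent whose surface contains only compliance-domain commands.

    The credit domain's commands are simply absent from this type: stepping
    outside the bounded context is a call that does not exist, not a rule
    the agent could be talked out of in a prompt.
    """

    def flag_transaction(self, tx_id: str) -> str:
        return f"flagged:{tx_id}"

agent = ComplianceAgent()
print(hasattr(agent, "approve_credit"))  # False: the command does not exist
```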

This is what radically distinguishes Swoft from CrewAI- or AutoGen-style multi-agent orchestration frameworks, where the agent's perimeter is defined by the prompt — therefore fragile, bypassable, and hard to audit.

Constraint 2: native traceability via Event Store

Every decision by a Swoft agent is stored as an immutable event in the Event Store. Each event records the author (the agent), the timestamp, the context, the reasoning (the prompt and the LLM response, when an LLM is involved), and the action triggered. Five years later, you can replay this event and reconstruct exactly why the agent took that decision at that moment.
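The shape of such a record can be sketched as a frozen dataclass: once written, its fields cannot be reassigned. Field names and values here are illustrative assumptions, not Swoft's actual event schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionEvent:
    """One immutable agent decision, as it would sit in an event store."""
    author: str     # the agent's identity
    timestamp: str  # ISO-8601, set at write time
    context: dict   # the business state the agent saw
    reasoning: dict # prompt and LLM response, when an LLM was involved
    action: str     # the command the decision triggered

event = DecisionEvent(
    author="agent:farnsworth",
    timestamp=datetime.now(timezone.utc).isoformat(),
    context={"application_id": "APP-42"},
    reasoning={"prompt": "(elided)", "response": "(elided)"},
    action="EscalateToHuman",
)
```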

This traceability is not an audit layer added on top. It is the very infrastructure that stores the actions. And it natively satisfies regulatory requirements — GDPR for automated decisions, EU AI Act for high-risk AI agents, DORA for financial services.

Constraint 3: organizational identity

Each Swoft agent is instantiated in the system with a Party identity — the same data structure that models human users. It has an ID, a name (our personas are called Farnsworth, Lisa, Burns…), a team, a role, permissions. An agent decision therefore carries two actors: the human who authorized the delegation, and the agent that executed.
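A minimal sketch of that shared structure and of dual attribution, with hypothetical field names: humans and agents use the same Party type, and every mutation carries both actors.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Party:
    """Humans and AI agents share this one identity structure (sketch)."""
    party_id: str
    name: str
    team: str
    role: str

@dataclass(frozen=True)
class Mutation:
    """Every state change names two actors: who authorized, who executed."""
    authorized_by: str  # human Party ID
    executed_by: str    # agent Party ID
    command: str

human = Party("party:kevin", "Kevin", "ops", "manager")
agent = Party("party:lisa", "Lisa", "ops", "compliance-agent")
change = Mutation(authorized_by=human.party_id,
                  executed_by=agent.party_id,
                  command="FlagTransaction")
```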

It is this organizational identity that aligns Swoft with Ferber's organizational dimension of multi-agent systems. When two agents conflict over a resource, the system knows who drives them, who arbitrates, who can dismiss them. It is not a debate, it is an explicit power structure.

Why our definition is more demanding than Wooldridge's

Wooldridge wrote in a 1990s academic context, where agents were research prototypes. The question of an agent's legal liability did not arise — it only acted in simulations. Thirty years later, AI agents take financial, medical, legal decisions. The Wooldridge frame remains valid, but you must add the constraints that production in critical environments imposes.

These constraints — bounded scope, native traceability, organizational identity — are not options. They are the condition for an AI agent to be deployable in a bank, an insurance company, a hospital, a law firm, or any other environment where audit, compliance and accountability are absolute requirements.

What this means for you

When evaluating an AI-agent platform in 2026, you can apply the seven Swoft criteria: the four Wooldridge ones plus the three architectural constraints. If the platform satisfies only four — autonomy, reactivity, pro-activeness, sociability — it is an agent in the academic sense but not a production-ready agent. If it satisfies all seven, then you have a system you can deploy in critical environments without having to rebuild the governance layer on top.

An agent whose decisions cannot be audited is a black box. A black box acting autonomously is an operational risk. Swoft's job is to turn that black box into an inspectable component.

Topics covered

  • AI agent
  • Swoft definition
  • Wooldridge
  • Bounded Context
  • Event Sourcing
  • Organizational identity
  • Production
  • Compliance
  • Dual attribution


Tech translation

How Swoft turns this challenge into software

How Swoft concretely implements the three architectural constraints added to the four Wooldridge properties.

  1. Bounded context as the boundary of autonomy

     Each Swoft agent is compiled with its bounded context. Stepping outside that perimeter would be a call to a function that does not exist: not a rule violation, an impossibility.

  2. Immutable, chain-signed Event Store

     Every agent decision is an event with a chained, signed hash. Any after-the-fact alteration breaks the chain, producing an audit trail that holds up to regulatory scrutiny.

  3. Party identity shared with humans

     AI agents and human users are stored in the same Party table, with the same schema: Ferber-style organizational compliance by construction.

  4. Dual attribution on every action

     Every mutation carries two Party IDs: the human who authorized and the agent that executed. Accountability remains assignable even when 13 agents cooperate.
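The chain-signed store described in point 02 can be sketched with plain hash chaining. This is an illustration only: a production system would also cryptographically sign each link, and the function names here are hypothetical.

```python
import hashlib
import json

def chain_append(chain, event):
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": digest})
    return chain

def chain_is_intact(chain):
    """Recompute every link; any after-the-fact edit breaks the chain."""
    prev_hash = "genesis"
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
chain_append(chain, {"action": "FlagTransaction"})
chain_append(chain, {"action": "EscalateToHuman"})
print(chain_is_intact(chain))  # True
chain[0]["event"]["action"] = "Tampered"
print(chain_is_intact(chain))  # False
```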
