Why real AI agents in production are so rare in 2026
Technical diagnosis of the four structural obstacles that block the transition from demo to real production. And what it takes to clear them.
Everyone is doing AI agent demos. Very few organizations actually have agents running in production on high-stakes decisions. That gap is not trivial, and it is not due to a lack of talent or budget. It is structural. Four technical obstacles block the transition from demo to production.
Obstacle 1: non-reproducibility of reasoning
An LLM is non-deterministic by nature. Ask exactly the same question, in the same context, of the same version of the model, and you may get two different answers. That behaviour is acceptable for an assistant; it is disqualifying for an agent that makes high-stakes decisions.
For a regulator, audit means being able to replay. If you turned down a credit application in March, the customer disputes it in September, and the regulator asks you for explanations the following March, you must be able to reproduce the decision exactly. With an LLM called live, that is impossible. With a system that stores LLM decisions as immutable events, it comes for free.
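A minimal sketch of what "storing LLM decisions as immutable events" can look like, assuming a TypeScript codebase and an append-only store; the names (LlmDecisionEvent, DecisionStore, decideAndRecord, replayDecision) are illustrative, not part of any existing framework. The live path calls the model once and persists everything needed to explain the decision; the audit path reads the stored event back and never re-calls the model.

```typescript
// Sketch of "decisions as data" (illustrative names, not an existing API).
// Live path: the model is called once and the full decision is persisted
// as an immutable event. Audit path: replay reads the stored event back
// and never re-calls the model, so the answer is identical years later.

interface LlmDecisionEvent {
  readonly decisionId: string;
  readonly occurredAt: string;            // ISO-8601 timestamp
  readonly model: string;                 // exact model version used
  readonly systemPrompt: string;          // prompt in force at decision time
  readonly input: unknown;                // the case being decided
  readonly reasoning: string;             // full reasoning returned by the model
  readonly outcome: "approved" | "rejected";
  readonly confidence: number;
}

interface DecisionStore {
  append(event: LlmDecisionEvent): Promise<void>;
  getById(decisionId: string): Promise<LlmDecisionEvent | undefined>;
}

async function decideAndRecord(
  store: DecisionStore,
  callLlm: (input: unknown) => Promise<Omit<LlmDecisionEvent, "decisionId" | "occurredAt">>,
  input: unknown,
): Promise<LlmDecisionEvent> {
  const decision = await callLlm(input);
  const event: LlmDecisionEvent = {
    ...decision,
    decisionId: crypto.randomUUID(),
    occurredAt: new Date().toISOString(),
  };
  await store.append(event);              // appended once, never updated
  return event;
}

async function replayDecision(store: DecisionStore, decisionId: string): Promise<LlmDecisionEvent> {
  const event = await store.getById(decisionId);
  if (!event) throw new Error(`No recorded decision ${decisionId}`);
  return event;                           // same answer in March, September, or a year later
}
```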
Obstacle 2: the absence of structured memory
An agent must know what it has done, what it knows, and what it observes at time T. Its memory cannot be limited to the LLM's context window; it must be structured, persistent, and queryable.
Popular agent frameworks (LangChain, CrewAI, AutoGen) handle memory ad hoc: usually a vector store for similarity search plus a relational store for facts. That is not enough. For a professional agent, memory must be a structured Event Store, designed for persistence and audit, not a cache.
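To make the contrast with a cache concrete, here is a minimal sketch of agent memory as a typed, append-only event log with structured queries; the names (AgentEvent, AgentMemory) are illustrative, and the in-memory array stands in for a durable store.

```typescript
// Sketch: agent memory as a typed, append-only event log (illustrative names).
// The in-memory array stands in for a persistent, durable Event Store.

type AgentEvent =
  | { kind: "Observed"; at: string; source: string; payload: unknown }
  | { kind: "Acted"; at: string; action: string; payload: unknown };

class AgentMemory {
  private readonly log: AgentEvent[] = [];   // append-only, never overwritten

  record(event: AgentEvent): void {
    this.log.push(event);
  }

  // Structured queries: "what did the agent do between these dates?",
  // not "what looks vaguely similar to this embedding?".
  query(filter: { kind?: AgentEvent["kind"]; from?: string; to?: string }): AgentEvent[] {
    return this.log.filter(e =>
      (filter.kind === undefined || e.kind === filter.kind) &&
      (filter.from === undefined || e.at >= filter.from) &&
      (filter.to === undefined || e.at <= filter.to),
    );
  }
}
```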
Obstacle 3: scope drift
An agent that runs solely on the basis of a system prompt is exposed to prompt injection and behavioural drift. The system prompt is not a security boundary; it is a suggestion. A reasonably creative attacker can convince the agent to step outside its role.
The remedy is architectural: the agent's scope must be enforced by infrastructure (bounded context, access control, compile-time validation), not by the prompt. No generic framework does this by default.
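As an illustration of scope enforced by the type system and a runtime guard rather than by the prompt, here is a minimal sketch; the bounded context, command names and bus interface (CreditCommand, CommandBus, CreditContextAgent) are hypothetical.

```typescript
// Sketch: agent scope enforced by infrastructure, not by the prompt
// (hypothetical bounded context and command names).

// The only commands this agent can issue form an exhaustive union tied to
// its bounded context. Anything else is a compile-time error.
type CreditCommand =
  | { type: "RequestCreditCheck"; applicationId: string }
  | { type: "FlagForHumanReview"; applicationId: string; reason: string };

interface CommandBus {
  dispatch(command: CreditCommand): Promise<void>;
}

const ALLOWED_COMMANDS = new Set<CreditCommand["type"]>([
  "RequestCreditCheck",
  "FlagForHumanReview",
]);

class CreditContextAgent {
  constructor(private readonly bus: CommandBus) {}

  // The runtime guard mirrors the compile-time constraint: even if LLM output
  // is coerced into an unexpected shape, out-of-scope actions are rejected
  // here, no matter what the prompt was talked into.
  async act(command: CreditCommand): Promise<void> {
    if (!ALLOWED_COMMANDS.has(command.type)) {
      throw new Error(`Out-of-scope command rejected: ${command.type}`);
    }
    await this.bus.dispatch(command);
  }
}

// Does not compile: "TransferFunds" is not part of the credit context.
// agent.act({ type: "TransferFunds", amount: 1_000_000 });
```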
Obstacle 4: fragile audit
Agent logs, as 2026 frameworks stand, are free-form text written by developers, kept for a while and then purged, and barely queryable. For a legal audit, they are not enough. You need typed domain events: immutable, retained indefinitely, and queryable.
DORA, the EU AI Act and MiFID II don't merely require a trace; they require a traceable trace. The nuance is technical: it is not enough for the data to exist; it must be queryable along the regulators' criteria, and its consistency must be guaranteed over time.
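To make the gap concrete, here is a minimal sketch contrasting a free-form log line with a typed domain event whose fields map to the criteria a regulator will query on; the event and field names (CreditApplicationRejected, etc.) are illustrative.

```typescript
// Sketch: free-form log line vs typed domain event (illustrative names).

// What most frameworks give you today: a string, queryable only by grep.
const logLine = "2026-03-12 rejected application 4812 (low score)";

// What a legal audit needs: an immutable, typed event whose fields map
// directly to the criteria a regulator will ask about.
interface CreditApplicationRejected {
  readonly eventType: "CreditApplicationRejected";
  readonly applicationId: string;
  readonly customerId: string;
  readonly occurredAt: string;     // ISO-8601
  readonly decidedBy: string;      // which agent, which model version
  readonly reason: string;
  readonly confidence: number;
}

// "All automated rejections for this customer over that period" becomes a
// structured query rather than log archaeology.
function rejectionsFor(
  events: CreditApplicationRejected[],
  customerId: string,
  from: string,
  to: string,
): CreditApplicationRejected[] {
  return events.filter(e =>
    e.customerId === customerId && e.occurredAt >= from && e.occurredAt <= to,
  );
}
```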
Topics covered
- AI agents
- Production
- Audit
- Reproducibility
- Architecture
How Swoft turns this challenge into software
The Swoft architecture is designed to clear the four obstacles by construction, not by best practice. Here is how.
- 01
Reproducibility through AI Decisions as Data
Every LLM decision is stored as an immutable event containing the full reasoning, the model used, the confidence score and the system prompt. Replay yields exactly the same result.
- 02
Structured memory through an Event Store
Every observation and every action is a typed event persisted in System_EventStore. The agent's memory is not a cache; it is a queryable source of truth, retained indefinitely.
- 03
Architectural scope through Bounded Contexts
The agent is attached to a Bounded Context of the DDD metamodel. Any action outside that scope is blocked at compile time and at runtime, not by the prompt. Prompt injection has no effect.
- 04
Event-based audit through dual attribution
Every event carries authorizedBy (the human who authorized the action) and executedBy (the agent that executed it). The audit trail is queryable by any criterion, retained indefinitely, and compliant with DORA and EU AI Act requirements; see the sketch after this list.
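A minimal sketch of what dual attribution can look like on an event; only the authorizedBy and executedBy field names come from the description above, the surrounding type and helpers are illustrative.

```typescript
// Sketch: dual attribution on every event (illustrative type and helpers;
// only the authorizedBy / executedBy field names come from the text above).

interface AttributedEvent {
  readonly eventType: string;
  readonly occurredAt: string;
  readonly authorizedBy: string;   // human who authorized the action
  readonly executedBy: string;     // agent that executed it
  readonly payload: unknown;
}

// An auditor can slice the trail from either angle:
const eventsAuthorizedBy = (events: AttributedEvent[], userId: string) =>
  events.filter(e => e.authorizedBy === userId);

const eventsExecutedBy = (events: AttributedEvent[], agentId: string) =>
  events.filter(e => e.executedBy === agentId);
```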
Continue reading — SaaS
NIS2 for SaaS vendors: six months to pass the audit
Applicable since October 2024, the NIS2 directive starts to bite in 2026. SaaS vendors classified as "important entities" face new technical obligations.
EU AI Act articles 8-15: AI SaaS vendors must organize before August 2026
On 2 August 2026, transparency and governance obligations for high-risk AI become applicable. For SaaS vendors, it's an underestimated workload.