How Swoft AI agents represent your business (and why it changes everything)
Six layers of explicit, structured, verifiable representation. Where an LLM alone has a representation hidden in its weights, Swoft agents know exactly what they know, and can prove it.
Since the origins of artificial intelligence, the fundamental question has never been "how to learn". It is "how to represent what we know". The way a system encodes its knowledge determines what it can do with it — reason, explain, guarantee a result, refuse a forbidden action. This question, long set aside by the deep-learning wave, becomes central again as soon as you entrust consequential decisions to an AI.
The silent problem of LLM agents: implicit representation
Large language models — Claude, GPT, Gemini — have an entirely implicit representation of the world. It is buried in billions of parameters, not inspectable, not verifiable. An LLM "knows" Paris is in France, but that knowledge isn't written down anywhere you can consult. It emerges statistically with each generation.
This property became a problem as soon as LLMs began orchestrating real actions in enterprise systems. Four gaps stand out:
- Hallucinations are structural, not accidental. The model doesn't distinguish what it knows from what it invents.
- No consistency guarantee. Two identical queries can produce contradictory answers.
- No audit possible. Impossible to trace back to the reasoning that produced a decision.
- No perimeter. An agent can take a decision outside its area of expertise with nothing to stop it.
The solution, theorized by Henry Kautz in his Engelmore Memorial lecture at AAAI 2022, is called neurosymbolism — combining the flexibility of neural networks with the rigour of symbolic representations. The major systematic review by Colelough & Regli (arXiv 2025, 167 papers, 2020-2024) notes that production neurosymbolic systems remain rare, and most are confined to narrow domains.
The complementary grid by Sheth, Roy & Gaur (IEEE Intelligent Systems, 2023) classifies approaches by integration direction. Family 1, called Lowering: symbolic knowledge is compressed into the network's weights, via knowledge graph embeddings or compressed formal logic. Family 2, called Lifting: the neural component produces representations that the symbolic component exploits. Family 2 splits into two variants: 2a, the federated pipeline, where the LLM orchestrates and delegates to symbolic solvers (LangChain + Wolfram Alpha is the archetype), and 2b, the intertwined, end-to-end differentiable pipeline, where neural and symbolic learn jointly via backpropagation. 2b is the most ambitious, and the rarest in production.
Swoft surpasses 2a pipelines on dimensions critical for the enterprise — explainability, domain constraints on input AND output (not just input), continual learning via Event Store, replay determinism (unique in the literature) — while staying short of 2b since the coupling is not end-to-end differentiable. It is a production-ready compromise: the rigour of formal reasoning without the instability of joint learning.
Six layers of representation, shared by all agents
Where classical multi-agent frameworks (CrewAI, LangGraph, AutoGen) give each agent its own context and force it to dialogue with the others in natural language — with all the ambiguity that implies — Swoft stacks six distinct representation systems that reinforce each other. All agents share the same lens on the world.
01 · Structural representation: the business metamodel
The system knows its own domains, its entities, its commands, its events and their relations. It is the equivalent of a knowledge graph specialized in software architecture, but operational: the system generates code by consulting this representation, validates its outputs against it, and automatically detects divergences between what it believes and what is. When an agent has to create a new use case, it doesn't guess the structure: it reads it.
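To make the idea concrete, here is a minimal sketch of a metamodel as an in-memory, queryable structure that generated code is validated against. All names (`Domain`, `validate_generated_command`, the billing example) are illustrative assumptions; Swoft's actual metamodel is not public.

```python
# Hypothetical sketch: a business metamodel as a queryable structure.
# An agent reads it instead of guessing, and its output is checked against it.
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    entities: set[str]
    commands: set[str]
    events: set[str]

METAMODEL = {
    "billing": Domain(
        name="billing",
        entities={"Invoice", "Payment"},
        commands={"IssueInvoice", "RecordPayment"},
        events={"InvoiceIssued", "PaymentRecorded"},
    ),
}

def validate_generated_command(domain: str, command: str, emits: str) -> list[str]:
    """Return the list of divergences between a generated use case and the metamodel."""
    d = METAMODEL.get(domain)
    if d is None:
        return [f"unknown domain: {domain}"]
    errors = []
    if command not in d.commands:
        errors.append(f"command {command} not declared in domain {domain}")
    if emits not in d.events:
        errors.append(f"event {emits} not declared in domain {domain}")
    return errors
```

A conforming generation passes with no errors; anything outside the declared structure is reported rather than silently accepted.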
02 · Temporal representation: event-driven memory
Every action, human or AI, is recorded as a timestamped event with its causal context. This total memory enables time-travel debugging — reconstructing the system state at any moment T — and provides the complete audit trail required by GDPR, DORA and NIS2. A decisive detail: agent decisions are stored as data, not recomputed on replay. Determinism is guaranteed even when the underlying LLM is non-deterministic.
03 · Procedural representation: declarative business logic
Business rules — conditions, operators, data sources, event policies — are declared in configuration and evaluated at runtime. Modifying a business rule does not require recompilation. A rule change is a data change. That sounds trivial; it is in fact fundamental: agents can see and reason about the business logic, not just the code that implements it.
04 · Vector representation: knowledge by similarity
Each persona has a knowledge base in vector embeddings. Cosine-similarity search lets an agent retrieve analogous cases, past decisions, relevant analyses — useful where similarity genuinely helps (knowledge retrieval), and strictly constrained by agent perimeters. It is the only distributed layer of the system: it complements the other five without replacing them.
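The retrieval mechanics are standard cosine similarity; this toy sketch uses invented 3-dimensional vectors where a real system would use a trained embedding model.

```python
# Sketch: cosine-similarity retrieval over a persona's knowledge base.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings; entries and vectors are illustrative.
knowledge = {
    "past incident: duplicate invoice": [0.9, 0.1, 0.0],
    "analysis: churn drivers Q3":       [0.1, 0.8, 0.2],
    "decision: refund policy change":   [0.7, 0.2, 0.1],
}

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k entries most similar to the query vector."""
    ranked = sorted(knowledge.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]
```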
05 · Organizational representation: executable Conway's Law
A routing table associates each business domain with an owner agent and a backup, with confidence thresholds determining whether the agent can decide alone or must escalate. The system knows who is responsible for what. Agents cannot act outside their perimeter — not by convention, but because the system structure makes it impossible. This is the representation that competing multi-agent frameworks lack, and it is the one that makes all the difference on critical workflows.
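A routing table of this kind can be sketched as a small lookup with a hard perimeter check; the table layout, agent names, and thresholds below are assumptions for illustration.

```python
# Sketch: executable Conway's Law — owner, backup, confidence threshold per domain.
ROUTING = {
    "billing":  {"owner": "billing-agent",  "backup": "finance-agent", "threshold": 0.85},
    "shipping": {"owner": "shipping-agent", "backup": "ops-agent",     "threshold": 0.75},
}

def route(domain: str, agent: str, confidence: float) -> str:
    """Decide alone above the threshold, escalate below it, refuse outside the perimeter."""
    entry = ROUTING.get(domain)
    if entry is None or agent not in (entry["owner"], entry["backup"]):
        raise PermissionError(f"{agent} has no perimeter over {domain}")
    return "decide" if confidence >= entry["threshold"] else "escalate"
```

The point of the sketch: an out-of-perimeter action is not discouraged by a prompt, it is structurally unreachable.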
06 · Visual representation: constrained design system
UI components form a bounded visual vocabulary. Static validation rules reject any UI that strays from the vocabulary — not after the fact, at compile time. Agents that generate screens pick from this vocabulary; they cannot invent free pixels. This constraint, which seems like a limitation, is in fact what makes generating usable interfaces at scale possible without coherence drift.
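The validation step reduces to a membership check against the vocabulary; component names here are invented for illustration.

```python
# Sketch: a bounded visual vocabulary with static validation.
VOCABULARY = {"Button", "Card", "DataTable", "FormField", "Modal"}

def validate_screen(components: list[str]) -> list[str]:
    """Return every component that strays from the vocabulary; empty means valid."""
    return [c for c in components if c not in VOCABULARY]
```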
The same foundation for your systems
The production neurosymbolic systems cited in the literature — AlphaGeometry, Plato-3, FAOS — share a common limitation: they are single-domain. AlphaGeometry solves geometry. Plato-3 handles prostate cancer in nuclear medicine. FAOS, the most generalist, remains confined to orchestrating 21 verticals via a frozen ontology. Building a new NeSy system for another domain means starting from a blank page.
The Swoft platform, by contrast, is generic. The metamodel describes the form of a business domain, not a particular domain. That is what allows Swoft to use it to generate its customers' software: the same neurosymbolic infrastructure underpins delivered applications. Your AI agents, your auditable decisions, your architectural constraints inherit by construction the six layers of representation described above. You don't deploy a custom LLM wrapper; you obtain a system of the same class as the one Kautz spoke about, in your business.
Why this architecture solves the four neurosymbolic gaps
The 2025 systematic literature review of neurosymbolism identifies four unsolved bottlenecks in production systems. Swoft addresses all four.
- Replay determinism. LLMs are non-deterministic, so replaying the same pipeline produces different results. Swoft's temporal layer stores each AI decision as data — on replay, you re-read the decision, you don't re-run it.
- Generic vs specific. Existing ontologies are hand-built for narrow domains. Swoft's metamodel is generic: it describes the form of a business domain, not a particular domain.
- Symbolic scalability. Classical symbolic reasoners (Prolog, Answer Set Programming) hit combinatorial bottlenecks. Swoft sidesteps the problem by localizing reasoning by bounded context, not globally.
- Multi-agent accountability. When several agents cooperate, responsibility becomes fuzzy. Swoft's dual-attribution pattern requires every action to be traced by two actors: the human who authorizes, the agent who executes.
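The dual-attribution pattern amounts to a structural invariant on every mutation; field names below are illustrative assumptions.

```python
# Sketch: dual attribution — every action carries both actors, or it is rejected.
def record_action(action: str, authorized_by: str, executed_by: str) -> dict:
    """Refuse any mutation that lacks either the authorizing human or the executing agent."""
    if not authorized_by or not executed_by:
        raise ValueError("both a human authorizer and an executing agent are required")
    return {"action": action, "authorized_by": authorized_by, "executed_by": executed_by}
```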
What this changes for you
For an executive watching what AI produces in their company, the question isn't "did it write nice code". The question is: is the system that decides on my behalf inspectable, contestable, and bounded? If the answer is no, the risk is invisible — until the day it becomes an incident.
Giving AI agents an explicit, verifiable representation of the business is not an academic refinement. It is the condition for entrusting them with consequential decisions without giving up compliance, audit, or control over what they do.
Swoft does not build an AI that knows everything. It builds a system that knows exactly what it knows, and can prove it.
Topics covered
- Neurosymbolism
- AI agents
- Knowledge representation
- Knowledge graph
- Conway's Law
- Event Sourcing
- DDD
- Kautz
- Sheth Roy Gaur
- Generic platform
How Swoft turns this challenge into software
How these six layers become operational reality in the Swoft platform.
- 01
DDD metamodel as an operational knowledge graph
Domains, bounded contexts, aggregates, events, commands and their relations are stored and queryable. Agents generate code by reading the metamodel; any generation that violates it is rejected at build time.
- 02
Event Store and replay determinism
Every event is immutable, timestamped and causally chained. AI decisions are stored as data, not recomputed. GDPR / DORA / NIS2 audit trail by construction.
- 03
Executable Conway's Law
Agent ↔ domain routing with confidence thresholds. An agent that tries to act outside its perimeter is blocked structurally, not by a "don't do that" prompt.
- 04
Dual attribution on every action
Every mutation carries two actors: the human who authorizes and the agent who executes. Responsibility stays traceable even when 13 agents cooperate on a complex saga.
Key takeaways on this topic
- What is an explicit representation for an AI agent?
- An explicit representation is an inspectable data structure that describes an agent's knowledge: business domains, entities, events, roles. It stands in contrast to the implicit representation of LLMs, buried in billions of parameters and not verifiable.
- Why is an LLM's representation, on its own, insufficient for production?
- An LLM does not distinguish what it knows from what it invents, does not guarantee consistency between two answers, does not allow its reasoning to be audited, and has no bounded action perimeter. These four gaps make it fragile for consequential decisions in regulated environments.
- How many layers of representation does Swoft give its agents?
- Six layers: structural (DDD metamodel), temporal (Event Store), procedural (declarative business logic), vector (persona embeddings), organizational (executable Conway's Law), visual (constrained design system).
- Is Swoft a neurosymbolic system?
- Yes. In Kautz's taxonomy (AAAI 2022), Swoft is close to the Neuro[Symbolic] type: the neural component (LLM) is constrained by the symbolic one (metamodel), with an additional property: the symbolic layer freezes the neural component's decisions via the Event Store, guaranteeing replay determinism.