
EU AI Act articles 8-15: AI SaaS vendors must get organized before August 2026

On 2 August 2026, the transparency and governance obligations for high-risk AI systems become applicable. For SaaS vendors, the workload is widely underestimated.

Swoft Team, Sector Watch
Illustration: SaaS dashboard with AI governance and decision traceability

Regulation (EU) 2024/1689 on artificial intelligence (the EU AI Act), adopted in June 2024, is the world's first horizontal AI regulatory framework. Its application is staggered: prohibitions on certain AI practices in February 2025, obligations for general-purpose AI (GPAI) models in August 2025, and, the pivot for most SaaS vendors, obligations for high-risk AI systems on 2 August 2026. For a SaaS product embedding an AI feature in a use case classified as high-risk (HR, credit scoring, access to essential services, border control), 2026 is the compliance year.

The high-risk scope, in practice

Annex III of the regulation lists the high-risk use cases: biometrics, critical infrastructure, education and vocational training, employment (sourcing, candidate selection, evaluation, promotion, dismissal), access to essential services (credit, insurance, social benefits), law enforcement, migration and border control, justice and democratic processes. For a French B2B SaaS vendor, the domains actually concerned are mainly HR (recruitment, evaluation), credit scoring (banking, fintech, BNPL), insurance (pricing, claims management), and certain education uses.

An important and often misunderstood point: an AI system is not high-risk because of its technology (LLM, computer vision, classical ML); it is high-risk because of its use case. The same classification algorithm can be high-risk in one context (credit scoring) and not in another (e-commerce product recommendation).

The seven key obligations of articles 8-15

Risk management system (art. 9)

Identification, analysis and continuous mitigation of the risks associated with each high-risk AI system. The system must be documented, updated at every major evolution, and tested. For a fast-iterating SaaS (frequent releases), risk-management governance is an organizational topic, not just a technical one.

Data and data governance (art. 10)

Training, validation and test datasets must be relevant, representative, and, to the extent possible, free of errors and complete. Biases must be identified and mitigated. For a SaaS vendor training its models on customer data, this requires governance that distinguishes production data from training data, with explicit procedures.
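
To make "identified and mitigated" concrete, here is a minimal sketch of a per-segment disparity check on a labelled evaluation set. The toy records, field names, and the four-fifths (0.8) threshold are illustrative assumptions, not requirements of art. 10.

```python
from collections import defaultdict

# Toy evaluation records: (segment, model_prediction, ground_truth).
records = [
    ("segment_a", 1, 1), ("segment_a", 0, 0), ("segment_a", 1, 0),
    ("segment_b", 0, 1), ("segment_b", 0, 0), ("segment_b", 1, 1),
]

by_segment = defaultdict(list)
for segment, pred, truth in records:
    by_segment[segment].append((pred, truth))

rates = {}
for segment, rows in by_segment.items():
    selection_rate = sum(pred for pred, _ in rows) / len(rows)
    accuracy = sum(pred == truth for pred, truth in rows) / len(rows)
    rates[segment] = {"selection_rate": selection_rate, "accuracy": accuracy}

# Disparate-impact style check: flag any segment whose selection rate
# falls below 80% of the best-served segment's rate.
best = max(r["selection_rate"] for r in rates.values())
for segment, r in rates.items():
    if best > 0 and r["selection_rate"] / best < 0.8:
        print(f"WARN: {segment} selection rate is below 0.8x the best segment")
```

The same loop, run per release on the holdout set, produces exactly the kind of measured-bias evidence the data passport (described below) needs to carry.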

Technical documentation (art. 11)

A complete technical file per high-risk AI system, covering: system description, purpose, training method, performance, known limitations, mitigation measures, and evaluation procedures. The format follows Annex IV of the regulation; expect roughly 30-50 pages of documentation per AI system.

Log retention (art. 12)

High-risk AI systems must automatically record logs of the operations they perform, and providers must retain them for at least 6 months (unless Union or national law provides otherwise). This enables decision traceability and post-incident investigation.
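
As an illustration of what such traceability can look like, here is a minimal sketch assuming an append-only JSON-lines file as the log backend. The schema and field names are assumptions: art. 12 specifies what must be traceable, not a format.

```python
import datetime
import hashlib
import json

def log_decision(path, model_id, model_version, payload, output, confidence):
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash the raw input: the decision stays traceable without
        # duplicating potentially personal data into the log itself.
        "input_sha256": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "confidence": confidence,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # one line per decision, append-only

log_decision("decisions.jsonl", "credit-score", "2026.02.1",
             {"income": 42000, "tenure_months": 18}, "approve", 0.87)
```

The 6-month retention itself would be enforced by a separate purge job, and in production the log would live in append-only storage rather than a local file.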

Transparency and user information (art. 13)

The AI system must come with clear instructions for use for deployers (the SaaS vendor's customers): purpose, performance, operating conditions, human oversight measures, expected lifetime. This calls for structured user documentation, not a cursory README.

Human oversight (art. 14)

Organizational and technical measures to allow a human to effectively supervise the system, understand outputs, detect anomalies, intervene when needed. For a SaaS delivering an automated decision (e.g., credit score), human oversight must be designed into the product, not bolted on at the end.
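
One way to design that in is sketched below, under assumed rules (a confidence threshold and a high-impact flag, both illustrative): the model output and the human decision are stored as distinct fields, so the final decision is always attributable.

```python
import datetime
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    model_output: str
    model_confidence: float
    high_impact: bool
    human_decision: Optional[str] = None  # kept distinct from the model output
    human_reviewer: Optional[str] = None
    decided_at: Optional[str] = None

def finalize(decision: Decision, reviewer: Optional[str] = None,
             override: Optional[str] = None) -> str:
    # A human must review when the stakes or the uncertainty are high.
    needs_review = decision.high_impact or decision.model_confidence < 0.7
    if needs_review and reviewer is None:
        raise RuntimeError("high-impact decision requires human review")
    if needs_review:
        decision.human_reviewer = reviewer
        decision.human_decision = override or decision.model_output
    decision.decided_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return decision.human_decision or decision.model_output

d = Decision(model_output="reject", model_confidence=0.91, high_impact=True)
print(finalize(d, reviewer="analyst@acme.example", override="approve"))  # approve
# The record keeps both the model's "reject" and the human's "approve".
```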

Accuracy, robustness, cybersecurity (art. 15)

The system must achieve an appropriate level of accuracy, robustness and cybersecurity, and these levels must be declared in the instructions for use. Robustness notably implies resistance to adversarial attacks (data poisoning, model evasion, prompt injection for LLMs).
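
A robustness claim can be backed by a simple pre-release smoke test. The sketch below measures how often decisions flip under small Gaussian perturbations of the input; the stand-in model, noise level, and 5% flip budget are all illustrative assumptions.

```python
import random

def predict(features):
    # Stand-in model: a fixed threshold on a weighted sum of two features.
    score = 0.6 * features[0] + 0.4 * features[1]
    return 1 if score >= 0.5 else 0

def flip_rate(predict_fn, inputs, noise=0.02, trials=100, seed=0):
    rng = random.Random(seed)  # seeded, so the test is reproducible
    flips, total = 0, 0
    for x in inputs:
        base = predict_fn(x)
        for _ in range(trials):
            perturbed = [v + rng.gauss(0, noise) for v in x]
            flips += predict_fn(perturbed) != base
            total += 1
    return flips / total

rate = flip_rate(predict, inputs=[[0.42, 0.7], [0.55, 0.3], [0.9, 0.1]])
print(f"decision flip rate under noise: {rate:.1%}")
assert rate < 0.05, "robustness budget exceeded: investigate before release"
```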

Three practices that set forward-looking vendors apart

The internal AI register

A living inventory of the AI systems deployed in the product: model, purpose, training dataset, test dataset, performance, and high-risk classification. Every new AI feature goes through a review ("AI Council") before deployment. This register is the foundation of the compliance file.
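
A minimal sketch of what a register entry and its deployment gate could look like; the field list and the truncated use-case trigger set are illustrative, not an exhaustive Annex III mapping.

```python
from dataclasses import dataclass

# Illustrative subset of Annex III trigger use cases.
HIGH_RISK_USE_CASES = {"recruitment", "credit_scoring",
                       "insurance_pricing", "education_scoring"}

@dataclass
class RegisterEntry:
    model_name: str
    version: str
    purpose: str
    use_case: str
    training_dataset: str
    test_dataset: str
    performance_summary: str
    reviewed_by_ai_council: bool = False

    @property
    def high_risk(self) -> bool:
        # Classification is triggered by the use case, not the technology.
        return self.use_case in HIGH_RISK_USE_CASES

def can_deploy(entry: RegisterEntry) -> bool:
    # High-risk features must pass the AI Council review before release.
    return not entry.high_risk or entry.reviewed_by_ai_council

entry = RegisterEntry("cv-ranker", "1.4.0", "rank applications", "recruitment",
                      "hr_corpus_2025q4", "hr_holdout_2025q4", "AUC 0.81")
assert entry.high_risk and not can_deploy(entry)  # blocked until reviewed
```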

The data passport per model

For each trained model, a "data passport" documents: data sources, time range, demographics represented, biases identified and measured, training method, hyperparameters, and performance by segment. This passport is shared with deployers (customers), who need it for their own compliance.
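
Concretely, the passport can be a machine-readable document exported alongside the model, so deployers can ingest it into their own compliance tooling. A minimal sketch, with illustrative field names and values:

```python
import json

passport = {
    "model": {"name": "credit-scorer", "version": "2026.02.1"},
    "data_sources": ["loan_applications_2022_2025", "bureau_feed_v3"],
    "time_range": {"from": "2022-01-01", "to": "2025-06-30"},
    "demographics_represented": ["18-25", "26-40", "41-65", "65+"],
    "known_biases": [
        {"segment": "18-25", "finding": "under-representation",
         "mitigation": "reweighting"}
    ],
    "training_method": "gradient-boosted trees",
    "hyperparameters": {"n_estimators": 400, "max_depth": 6},
    "performance_by_segment": {"18-25": {"auc": 0.74}, "26-40": {"auc": 0.81}},
}

# One passport file per (model, version), shipped with the model artifact.
with open("data_passport_credit-scorer_2026.02.1.json", "w") as f:
    json.dump(passport, f, indent=2)
```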

Shadow mode for high-risk deployments

Before any production deployment on a high-risk use case, the model runs in "shadow mode": it computes its outputs in parallel with the current human decision, without exposing them to users. This validates performance, robustness, and bias before actual deployment. A typical shadow period is 3 to 6 months.
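
A minimal sketch of the mechanics, with illustrative names: the candidate model scores every case and its output is logged for offline comparison, but only the current human decision is returned to the user.

```python
def shadow_run(case, human_decision, candidate_model, log):
    shadow_output = candidate_model(case)  # computed in parallel...
    log.append({"case": case, "human": human_decision, "shadow": shadow_output})
    return human_decision                  # ...but never exposed to users

log = []
candidate = lambda case: "approve" if case["income"] > 30000 else "reject"
shadow_run({"income": 42000}, "approve", candidate, log)
shadow_run({"income": 12000}, "approve", candidate, log)  # disagreement to review

agreement = sum(e["human"] == e["shadow"] for e in log) / len(log)
print(f"human/model agreement over the shadow period: {agreement:.0%}")  # 50%
```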

The 2027-2028 horizon

The full EU AI Act calendar provides for the gradual application of further obligations: obligations for high-risk AI embedded in products covered by existing EU product legislation (August 2027), harmonized European standards (as adopted by CEN/CENELEC), and supervision by national authorities (in France, the CNIL among them, coordinated at EU level by the AI Office). Targeted inspections will probably start in 2027, focused on actors that have notified incidents or that were identified through their registrations in the EU database when placing systems on the market.

For a SaaS vendor embedding AI, the 2026 program is: (1) map AI usages and identify those tipping into high-risk, (2) build the AI register and the data passports, (3) structure the Annex IV technical file for each system, (4) design human oversight into the product, (5) test in shadow mode before any high-risk deployment. It is a structural investment, and a commercial argument with large enterprise customers that have their own AI obligations.

Topics covered

  • EU AI Act
  • High-risk AI
  • AI compliance
  • Articles 8-15
  • AI governance

How Swoft turns this challenge into software

Industrializing EU AI Act compliance means connecting the AI register, the data passports, the decision audit log, and human oversight into a system that makes compliance natural. Here is how Swoft equips SaaS vendors that embed AI.

  1. AI register with automatic high-risk classification

     A living inventory of the AI models deployed: model, version, purpose, high-risk classification (triggered by the use case). A mandatory review workflow runs before any new high-risk model is deployed. The register automatically feeds the Annex IV technical file.

  2. Data passport and dataset traceability

     For each model, a documented passport: data sources, time range, measured biases (per demographic segment), performance per segment, hyperparameters. Dataset versioning plus model versioning, with an immutable link between them (see the sketch after this list). When an authority requests it, the chain can be reconstructed in minutes.

  3. AI decision audit log and integrated human oversight

     Every AI output in production is recorded with its context (input, model, version, confidence score) for at least 6 months. Human oversight is built into the product flow: for high-impact decisions, a human validates or overrides, and the final decision is traced separately from the model output.
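
The immutable dataset-model link mentioned in point 2 can be as simple as pinning a content hash of the exact dataset version into the model's metadata at training time. A minimal sketch, with illustrative names:

```python
import hashlib
import json

def content_hash(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Pinned at training time: the exact bytes of the dataset version used.
dataset_blob = b"serialized training dataset, version 2025q4"
model_meta = {
    "model": "credit-scorer",
    "version": "2026.02.1",
    "dataset_sha256": content_hash(dataset_blob),
}

# During an authority request: recompute the hash from the archived
# dataset and compare, proving which data trained which model.
assert model_meta["dataset_sha256"] == content_hash(dataset_blob)
print(json.dumps(model_meta, indent=2))
```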
