Case Study

EU AI Act Governance & Target Operating Model Implementation

To design and implement a pragmatic, organization-wide AI governance framework that enables compliance with the EU AI Act by August 2026 while safeguarding ethical, transparent, and responsible AI use.

Client

Flemish Public Employment Service

Situation

The client had already established an AI Governance Framework through its AI Center of Excellence (CoE). However:

  • Legal validation of compliance with the EU AI Act had not yet been performed
  • AI systems outside the CoE scope were not fully governed
  • No organization-wide risk classification existed
  • Business ownership responsibilities under the AI Act were not embedded
  • AI use across departments lacked centralized visibility
  • Procurement processes did not systematically assess AI compliance

The EU AI Act (entered into force August 2024) introduced mandatory obligations with a compliance deadline of August 2026. Non-compliance could lead to:

  • Regulatory sanctions
  • Reputational damage
  • Operational risk exposure
  • Fundamental rights violations

The organization needed a structured, cross-departmental approach to avoid fragmented compliance and governance blind spots.

Assignment

Design and operationalize an AI Act governance program covering:

  • Legal interpretation and translation of the EU AI Act into operational requirements
  • Risk classification of all AI systems (developed and procured)
  • Development of an AI Target Operating Model (TOM)
  • Governance structure across business, IT and support functions
  • Implementation roadmap toward phased compliance
  • Change management, communication and awareness

The assignment explicitly excluded operational AI development activities.

EU AI ACT

#AIAct #AIGovernance #PublicSectorInnovation #RiskClassification #TargetOperatingModel #EthicalAI #ComplianceTransformation #DigitalGovernance

Details

Challenge

Key Objectives

1- Legal & Risk Foundation

  • Perform formal AI Act interpretation and legal validation
  • Classify all AI systems (high-risk / non-high-risk)
  • Establish structured AI inventory and risk register
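The structured AI inventory and risk register described above can be sketched as a simple data model. Everything here is illustrative: the field names, the risk tiers (loosely following the EU AI Act's classification), and the example system are assumptions, not the client's actual schema or systems.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk tiers loosely mirroring the EU AI Act's categories;
# the project's actual classification methodology may differ.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AIRegisterEntry:
    """One row in a hypothetical organization-wide AI inventory."""
    system_name: str
    business_owner: str      # accountable department or role
    procured: bool           # procured vs. developed in-house
    intended_purpose: str
    risk_tier: RiskTier
    last_assessed: str       # ISO date of the most recent review

    def needs_governance_review(self) -> bool:
        # High-risk (and prohibited) systems trigger the full governance process.
        return self.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH_RISK)

# Hypothetical inventory entry for illustration only.
entry = AIRegisterEntry(
    system_name="CV matching engine",
    business_owner="Employment Mediation",
    procured=False,
    intended_purpose="Match job seekers to vacancies",
    risk_tier=RiskTier.HIGH_RISK,
    last_assessed="2025-06-30",
)
print(entry.needs_governance_review())  # -> True
```

A register of this shape makes the later periodic re-assessment framework straightforward: filter on `last_assessed` and `risk_tier` to schedule reviews.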

2- Target Operating Model (TOM)

Design a future-state AI governance model including:

  • Roles & responsibilities (AI CoE, Business Owners, DPO, Legal, etc.)
  • Governance bodies (AI Steering Committee, Ethics Council, Architecture Board)
  • End-to-end AI lifecycle processes
  • Procurement integration
  • Risk management mechanisms

The TOM served as the transformation blueprint from current (“as-is”) to compliant future state (“to-be”).

3- Organization-Wide Governance

  • Embed AI governance beyond the AI CoE
  • Formalize accountability of business owners
  • Integrate DPO, security, legal and procurement
  • Create escalation and reporting mechanisms

4- Implementation Roadmap

Three-phase approach:

  • Phase 1: AI Act analysis & AI register (Apr-Jul 2025)
  • Phase 2: Target Operating Model (Jun-Dec 2025)
  • Phase 3: Implementation & Compliance (Jan-Aug 2026)

Pragmatic ambition level selected (705 estimated man-days).

Results delivered:

AI Governance Program Design

  • Full AI Act compliance roadmap
  • Structured risk classification methodology
  • AI inventory registration framework
  • Risk management lifecycle for high-risk AI

Target Operating Model

  • Clear ownership model (AI CoE vs Business vs Legal vs DPO)
  • RACI definitions for AI lifecycle processes
  • Defined escalation levels
  • Governance board integration
  • Procurement compliance checkpoints
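RACI definitions for lifecycle processes can be made machine-checkable with a small matrix keyed by process step. The roles, steps, and assignments below are illustrative assumptions, not the project's actual RACI model.

```python
# Hypothetical RACI matrix for a few AI lifecycle steps.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
RACI = {
    "risk_classification": {"AI CoE": "R", "Business Owner": "A", "Legal": "C", "DPO": "C"},
    "procurement_check":   {"Procurement": "R", "Business Owner": "A", "AI CoE": "C", "Legal": "C"},
    "go_live_approval":    {"Business Owner": "A", "AI CoE": "R", "DPO": "C", "IT": "I"},
}

def validate_raci(matrix: dict) -> list[str]:
    """Flag steps that break the basic RACI rule: exactly one Accountable role per step."""
    issues = []
    for step, roles in matrix.items():
        accountable = [r for r, code in roles.items() if code == "A"]
        if len(accountable) != 1:
            issues.append(f"{step}: expected exactly one 'A', found {len(accountable)}")
    return issues

print(validate_raci(RACI))  # -> [] (every step has exactly one Accountable role)
```

Encoding the matrix this way lets a governance office lint ownership gaps automatically whenever processes or roles change.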

Risk & Compliance Architecture

  • High-risk AI system governance process
  • Periodic re-assessment framework
  • Go-live checklists for high-risk systems
  • Risk acceptance and reporting mechanism

Change & Adoption

  • Organization-wide awareness strategy
  • Training integration (HR involvement)
  • Communication plan (monthly updates, internal channels)
  • Ambassador model

Strategic Alignment

  • Alignment with digital transformation strategy
  • Strengthened ethical AI positioning
  • Increased transparency toward stakeholders

Our involvement:

  • Co-designed and validated the AI Target Operating Model
  • Translated EU AI Act legal requirements into operational processes
  • Structured the AI risk classification approach
  • Facilitated cross-functional alignment between IT, Legal, Business, Security and Data teams
  • Defined governance mechanisms and reporting flows
  • Contributed to phased compliance strategy (pragmatic ambition level)
  • Supported change management and organizational embedding

Our role focused on bridging law, governance, operations and transformation, ensuring that compliance became structurally embedded rather than a theoretical exercise.

Total Man-Days

(Project & Change Management)

160

Get In Touch

kris@factter.be