EU AI Act compliance made clear

As the EU AI Act moves from legislation to implementation, organizations across Europe face a pressing challenge: how do we translate high‑level legal obligations into concrete and verifiable technical activities? 

Written by: Nishat I Mowla, Kabir Fahira

While the EU AI Act defines requirements for high‑risk AI systems, it has so far left considerable ambiguity about how those requirements can be met in practice. This uncertainty has led to inconsistent readiness across Member States and frustration among developers, compliance officers, and policymakers alike.

To answer this question, the Swedish AI Factory Mimer, the Luxembourg AI Factory MELUXINA-AI, and Citcom.AI joined forces on “Assessing High-Risk AI Systems under the EU AI Act: From Legal Requirements to Technical Verification” (https://arxiv.org/abs/2512.13907), which proposes a practical and structured solution to this problem. It extends our previous work, “From AI Act to Structured Testing of AI Systems”.

A two-dimensional verification framework 

This work offers a comprehensive framework that organizes AI Act compliance verification along two essential dimensions: 

  1. Verification Method Type: Distinguishing between control-based approaches and testing-based approaches. 
  2. Assessment Target: Covering data, model, processes, and the final AI product (see the sketch after this list). 
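To make these two dimensions concrete, here is a minimal sketch that encodes them as a small data model. The enum names, the VerificationActivity structure, and the example entry are illustrative assumptions for this post, not the schema used in the paper:

```python
from enum import Enum
from dataclasses import dataclass


class MethodType(Enum):
    """Dimension 1: how a requirement is verified."""
    CONTROL_BASED = "control-based"   # e.g. documentation and process audits
    TESTING_BASED = "testing-based"   # e.g. empirical tests of system behaviour


class AssessmentTarget(Enum):
    """Dimension 2: what the verification activity examines."""
    DATA = "data"
    MODEL = "model"
    PROCESS = "process"
    PRODUCT = "product"  # the final AI product


@dataclass(frozen=True)
class VerificationActivity:
    """One cell in the two-dimensional verification matrix."""
    requirement: str          # the AI Act obligation being addressed
    method: MethodType
    target: AssessmentTarget
    description: str


# Illustrative example: a testing-based check targeting training data
example = VerificationActivity(
    requirement="Data and data governance",
    method=MethodType.TESTING_BASED,
    target=AssessmentTarget.DATA,
    description="Statistical checks for representativeness and bias in training data",
)
```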

This structured approach provides clarity not only for AI developers and auditors, but also for policymakers seeking consistent enforcement strategies across Europe. 

From the EU AI Act’s regulatory text to operational reality 

A key innovation in this work is a systematic decomposition of the AI Act’s high-level requirements into operational sub‑requirements, each matched with specific verification activities. This mapping is grounded in authoritative standards and established best practices, ensuring applicability across the AI lifecycle. 
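As a rough illustration of what such a decomposition looks like, the sketch below breaks one high‑level obligation into operational sub‑requirements, each paired with a verification activity. The specific sub‑requirements and activities are hypothetical examples, not quotations from the paper or the underlying standards:

```python
# Hypothetical decomposition of one high-level AI Act requirement into
# operational sub-requirements, each mapped to a verification activity.
# The entries are illustrative only; the paper grounds its mapping in
# authoritative standards and established best practices.
reference_model = {
    "Transparency": [
        {
            "sub_requirement": "Instructions for use are provided",
            "method": "control-based",   # documentation review
            "target": "process",
            "activity": "Audit that user documentation exists and is complete",
        },
        {
            "sub_requirement": "Model outputs are interpretable to the deployer",
            "method": "testing-based",   # empirical check
            "target": "model",
            "activity": "Run explainability tests on a sample of inputs",
        },
    ],
}

# Walk the reference model: each sub-requirement names its check and target.
for requirement, subs in reference_model.items():
    print(requirement)
    for s in subs:
        print(f"  - {s['sub_requirement']}: {s['method']} check on {s['target']}")
```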

The result is a reusable reference model that reduces guesswork and supports: 

  • Consistent interpretation of obligations 
  • Technology‑agnostic verification 
  • Alignment between technical, regulatory, and organizational perspectives 

A path forward for trustworthy AI

As AI continues to advance, ensuring trustworthy development is no longer optional—it is a regulatory, ethical, and societal imperative. This framework helps bridge the long‑standing divide between policy aspirations and technical implementation, strengthening both compliance readiness and AI governance maturity across the EU. 

For researchers, industry practitioners, and regulators alike, this work provides a foundation for more consistent and transparent assessment of high‑risk AI systems—an important step toward operationalizing the AI Act in a practical and harmonized way. 


Get access to AI development support and infrastructure!

Mimer offers both expert support for your AI projects and GPU infrastructure. Browse our offerings or contact us!