On 1 August 2024, Regulation (EU) 2024/1689 entered into force: the EU AI Act, the world's first comprehensive, EU-wide legal framework for Artificial Intelligence. Since then, law firms have been writing guides, consultancies have been selling compliance packages, and the press and LinkedIn are full of checklists.
This page attempts something different. It explains the AI Act in a way that makes the legal text understandable — and then takes a further step: what does this regulation actually want? What ethical stance underlies it? And what does compliance alone not achieve?
We are the Center for AI and Ethics (Europe), a non-profit association based in Vienna. Our work lies where regulation meets practice — in training, guidance, assessment. This page is not an advisory offering but public education.
What the AI Act is
The EU AI Act is not a ban on AI. It is a framework for how AI may be deployed in the EU, depending on the level of risk a system poses to people, fundamental rights and safety.
The logic: a spam filter needs no deep regulation. A system that decides who gets a loan or who is flagged for law enforcement does. The AI Act does not distinguish by technology, but by effect.
The regulation binds not only AI developers but every role along the chain: providers (whoever places a system on the market), deployers (whoever uses it), importers and distributors (whoever brings it into the EU or passes it on) and authorised representatives. Organisations that use AI without developing it themselves also have obligations.
Important: the AI Act does not replace existing laws. The GDPR, product safety, employment, anti-discrimination and sector-specific rules continue to apply in parallel. The AI Act adds its own layer on top.
The timeline
The AI Act was not switched on all at once. Obligations apply in stages:
- 1 August 2024: Regulation enters into force.
- 2 February 2025: Prohibitions of unacceptable AI practices apply (e.g. social scoring by public authorities, certain biometric categorisation).
- 2 August 2025: Obligations for providers of General-Purpose AI models (GPAI) apply; this affects large language models such as GPT, Claude, Gemini.
- 2 August 2026: the key date. Most obligations for high-risk AI systems become applicable; this is when the AI Act becomes concrete for most organisations.
- 2 August 2027: Final transitional deadlines for certain legacy systems and high-risk AI in regulated products.
Anyone only now starting to engage with the AI Act still has time — but not unlimited. Building robust governance structures, documentation and human oversight takes months, not weeks.
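For readers who prefer code to calendars, the staged application can be expressed as a simple date lookup. A minimal sketch in Python; the dates come from the list above, and the stage labels are our own shorthand, not legal terms:

```python
from datetime import date

# Application dates as listed above; purely illustrative.
STAGES = [
    (date(2024, 8, 1), "Regulation in force"),
    (date(2025, 2, 2), "Prohibitions of unacceptable practices apply"),
    (date(2025, 8, 2), "GPAI model obligations apply"),
    (date(2026, 8, 2), "Most high-risk obligations apply"),
    (date(2027, 8, 2), "Final transitional deadlines expire"),
]

def applicable_stages(today: date) -> list[str]:
    """Return every stage whose application date has already passed."""
    return [label for start, label in STAGES if today >= start]

# Example: everything except the 2027 deadlines applies in September 2026.
print(applicable_stages(date(2026, 9, 1)))
```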
The four risk classes
The AI Act assigns every AI system to one of four categories. This classification determines the entire scope of the obligations.
- Unacceptable practices (prohibited). These include social scoring by public authorities, certain forms of real-time biometric identification in public spaces, manipulation by subliminal techniques, and emotion recognition in the workplace and educational settings (with narrow exceptions).
- High-risk systems. AI in safety-critical products or in sensitive areas; see the section on high-risk AI below.
- Systems with transparency duties. Chatbots must identify themselves as AI, deepfakes must be labelled, synthetic content must be recognisable.
- Minimal risk. The vast majority of AI applications — e.g. spam filters, recommendation systems in online shops. Only general provisions apply here, no specific obligations.
In addition, General-Purpose AI models (foundation models) are subject to their own tiered obligations — depending on whether a model is classified as posing a systemic risk.
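The four-tier logic lends itself to a small enumeration. The sketch below uses the examples from this page; it is illustrative only, since real classification is a legal analysis against Article 5, Article 6 and the Annexes, not a lookup:

```python
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "unacceptable practice (Art. 5)"
    HIGH_RISK = "high-risk system (Art. 6, Annexes I and III)"
    TRANSPARENCY = "specific transparency duties (Art. 50)"
    MINIMAL = "minimal risk (no specific obligations)"

# Illustrative assignments based on the examples in the text; a real
# classification requires legal analysis, not a lookup table.
EXAMPLES = {
    "social scoring by public authorities": RiskClass.PROHIBITED,
    "AI-based creditworthiness scoring": RiskClass.HIGH_RISK,
    "customer-service chatbot": RiskClass.TRANSPARENCY,
    "spam filter": RiskClass.MINIMAL,
}
```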
High-risk AI
High-risk AI systems are those built into regulated safety-critical products (such as medical devices, vehicles, lifts) — and AI systems in eight sensitive application areas:
- Biometric identification and categorisation
- Critical infrastructure (electricity, water, transport)
- Education and vocational training
- Employment, worker management, access to self-employment
- Access to essential private and public services (credit, social benefits)
- Law enforcement
- Migration, asylum, border control
- Administration of justice and democratic processes
Providers of such systems carry the bulk of the obligations: risk management across the entire lifecycle, data quality assurance, technical documentation, transparency towards deployers, logging, human oversight and quality management. Before a system is placed on the market, it must pass a conformity assessment.
Not every AI in these areas is automatically high-risk: Article 6(3) provides narrow exceptions — e.g. for systems performing only a narrow procedural task, merely improving the result of human activity, detecting decision-making patterns without replacing human assessment, or performing only preparatory tasks.
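The structure of that test can be sketched as a boolean filter. The parameter names below are our own labels for the four exception criteria; whether a criterion is met in a given case is a legal judgement, not a flag you set yourself:

```python
def is_high_risk(in_annex_iii_area: bool,
                 narrow_procedural_task: bool,
                 improves_human_result_only: bool,
                 detects_patterns_without_replacing_assessment: bool,
                 preparatory_task_only: bool) -> bool:
    """Sketch of the Art. 6(3) derogation logic described above.

    Simplified: it omits further conditions in the Article itself
    (e.g. systems that profile natural persons remain high-risk),
    and embedded safety components follow a separate route via Annex I.
    """
    if not in_annex_iii_area:
        return False
    exception_applies = (narrow_procedural_task
                         or improves_human_result_only
                         or detects_patterns_without_replacing_assessment
                         or preparatory_task_only)
    return not exception_applies
```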
FRIA — the ethical core
If the AI Act had to be reduced to a single concept, it would be FRIA.
FRIA (Fundamental Rights Impact Assessment) is a mandatory assessment under Article 27 of the EU AI Act. It must be carried out before certain high-risk AI systems are put into use — and not by the provider, but by the deployer: the organisation that actually uses the AI.
The FRIA documents:
- In which process and for what purpose the AI is used
- For how long, how frequently, affecting which groups of people
- Which concrete harm risks exist for fundamental rights (discrimination, privacy, freedom of expression, human dignity)
- Which measures mitigate the risk
- Which human oversight is in place, including means of objection and correction
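As a data structure, the required content is a handful of fields. A minimal sketch; the field names are our own, since Article 27 prescribes content, not a schema:

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Minimal record of the FRIA content listed above (Art. 27)."""
    process_and_purpose: str             # where and why the AI is used
    duration_and_frequency: str          # for how long, how often
    affected_groups: list[str]           # which people are affected
    fundamental_rights_risks: list[str]  # concrete risks of harm
    mitigation_measures: list[str]       # what reduces those risks
    human_oversight: str                 # who supervises the system
    objection_and_correction: list[str] = field(default_factory=list)
```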
As the law stands today, four groups are obliged to carry one out:
- Bodies governed by public law (authorities, ministries, schools, universities, public hospitals).
- Private deployers providing public services (e.g. private educational institutions with a public mandate, private social services).
- Banks and credit institutions in creditworthiness assessments and scoring.
- Insurers in risk and pricing assessments for life and health insurance.
Why FRIA is the ethical core: the Data Protection Impact Assessment under the GDPR (DPIA, Art. 35) asks whether data processing is permissible. FRIA goes further — it asks what happens if the system is wrong. Who bears the consequences? Which groups may be systematically disadvantaged? How is this noticed — and how corrected?
This is precisely where compliance ends and ethics begins. You can tick the FRIA off as a checklist. Or you can take it for what it actually is: an opportunity to think before the decision, not only afterwards.
Providers and deployers
The AI Act divides responsibility between two main roles:
Providers
Responsible for the system as it reaches the market:
- Conformity assessment before market entry
- Technical documentation
- Risk and quality management over the lifecycle
- Transparency towards deployers (instructions, limits)
- Post-market monitoring
- Reporting of serious incidents
Deployers
Responsible for the system in actual use:
- Use as intended
- Ensure human oversight
- Choose input data appropriately (quality, relevance)
- Report incidents
- In certain cases: carry out FRIA (Art. 27)
- Inform affected individuals where required
The clean separation matters: anyone who only uses an AI (deployer) does not automatically take on provider obligations. But deployers are not off the hook either, and in certain situations (e.g. substantial modification of a system) they can become providers themselves.
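That role shift can be captured in a few lines. A sketch of the rule as described above (cf. Art. 25 AI Act on responsibilities along the value chain); the function name and string labels are our own:

```python
def effective_role(original_role: str, substantially_modified: bool) -> str:
    """A deployer who substantially modifies a high-risk system
    takes on provider obligations. Purely illustrative; the legal
    test for 'substantial modification' is not a simple boolean."""
    if original_role == "deployer" and substantially_modified:
        return "provider"
    return original_role
```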
Penalties
The penalty framework is modelled on the GDPR and is substantial:
- Prohibited AI practices: up to EUR 35 million or 7 % of global annual turnover — whichever is higher.
- Other violations (high-risk obligations, transparency, etc.): up to EUR 15 million or 3 %.
- False information to authorities: up to EUR 7.5 million or 1 %.
Lower amounts may apply to SMEs and start-ups. National supervision lies with different authorities depending on the Member State — in Austria it is coordinated by the AI Service Desk at the RTR.
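The "whichever is higher" rule is plain arithmetic. A short sketch with the statutory maxima from the list above; actual fines are set by the supervisory authorities within these caps:

```python
# Maximum fine tiers: (fixed cap in EUR, share of global annual turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "false_information": (7_500_000, 0.01),
}

def maximum_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the statutory maximum: whichever of the two caps is higher."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_annual_turnover_eur)

# Example: a company with EUR 2 billion turnover committing a prohibited
# practice faces up to max(35 M, 7 % of 2 B) = EUR 140 million.
print(maximum_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```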
Beyond compliance
The AI Act requires compliance. But it actually asks for more — and this is precisely where our work as an ethics association begins.
You can read the AI Act as a mandatory exercise: tick boxes, file documentation, satisfy the supervisory authority. That is legitimate and, for many organisations, the first step. But it is not what the legislator is actually aiming at.
The regulation states its own purpose in its first recital: trustworthy AI — with the protection of fundamental rights, health, safety and democracy as the guiding principle. That is not a legal formula but a stance. And a stance cannot be produced through compliance alone.
Three questions that a good FRIA asks — and that no checklist answers by itself:
- Who does this AI affect when it gets things wrong — and do those people have a voice in its development?
- What is the purpose of this system — and is it worth accepting the possible side effects?
- Who can object when the system makes a decision that has consequences for someone?
These are questions to the organisation, not to the technology. They cannot be answered by any audit tool. But anyone who takes them seriously fulfils the AI Act not only formally — but in substance.
That is where CAIE works. Not as a replacement for legal advice or technical implementation, but as a space where these questions can be asked before the AI goes live. Guidance, not sign-off. Clarity, not paragraph jargon.
Frequently asked questions
01 What is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is the first EU-wide uniform regulation for Artificial Intelligence. It follows a risk-based approach: the obligations depend on the potential impact of an AI system on fundamental rights, safety and health — ranging from outright prohibition of certain practices to pure transparency duties. The regulation applies directly in all Member States and binds providers, deployers, importers, distributors and authorised representatives of AI systems.
02 When does the EU AI Act apply?
The regulation entered into force on 1 August 2024 and applies in stages. The prohibitions of unacceptable practices have applied since 2 February 2025, the obligations for General-Purpose AI models since 2 August 2025. The main requirements for high-risk AI systems become applicable on 2 August 2026. Some transitional periods extend until 2027.
03 What risk classes does the EU AI Act distinguish?
The regulation defines four categories: unacceptable practices (prohibited), high-risk systems (extensive obligations), systems with specific transparency duties (e.g. chatbots, deepfakes) and AI applications with minimal risk. General-Purpose AI models are subject to their own tiered obligations.
04 What is a high-risk AI system?
High-risk AI systems are those built into safety-critical products or deployed in sensitive areas — employment, education, access to essential services, law enforcement, migration or justice. Providers must establish risk management, safeguard data quality, maintain technical documentation, ensure human oversight and operate a quality management system.
05 What is FRIA (Fundamental Rights Impact Assessment)?
FRIA is a mandatory assessment under Article 27 of the EU AI Act. It must be carried out before certain high-risk AI systems are put into use and documents: in which process the AI is deployed, which groups of people are affected, what fundamental rights risks exist, which measures mitigate the risk and how human oversight is organised. Obliged entities are: bodies governed by public law, private deployers providing public services, banks (creditworthiness assessment) and insurers (risk and pricing in life and health insurance).
06 What obligations do providers and deployers have?
Providers are responsible for conformity, technical documentation, risk and quality management, transparency towards deployers and ongoing market monitoring. Deployers must use systems as intended, ensure human oversight, choose input data appropriately and report incidents. In specific cases, the FRIA obligation applies (Art. 27).
07 What are the penalties for violating the EU AI Act?
For prohibited AI practices: up to EUR 35 million or 7 % of global annual turnover — whichever is higher. Other violations: up to EUR 15 million or 3 %. False information to authorities: up to EUR 7.5 million or 1 %. Lower amounts may apply to SMEs and start-ups.