High-risk AI Fundamental Rights Assessment 2026
Table of Contents
- Why High-risk AI Demands Fundamental Rights Scrutiny
- Understanding the EU AI Act Risk-Based Framework
- Defining AI Systems Under the AI Act
- High-risk AI Classification and the Annex III Categories
- The Article 6(3) Filter and Its Fundamental Rights Implications
- Fundamental Rights Impact Assessments Under Article 27
- Key Fundamental Rights Risks Identified by the EU FRA
- Risk Management and Mitigation Strategies for AI Compliance
- Oversight, Enforcement, and the Role of Rights-Protection Bodies
- FRA Opinions and Recommendations for Effective Implementation
📌 Key Takeaways
- First comprehensive AI regulation: The EU AI Act (entered into force 1 August 2024) uses a risk-based approach requiring the strictest safeguards for high-risk AI systems that affect fundamental rights.
- Broad AI definition needed: The EU FRA recommends interpreting the definition of “AI system” broadly to prevent simpler systems from escaping regulation despite causing real harm to individuals.
- Classification filter risks: The Article 6(3) filter allowing providers to self-exclude from high-risk rules could create dangerous loopholes if not narrowly applied and actively monitored.
- FRIA is mandatory: Deployers of most Annex III high-risk systems must conduct Fundamental Rights Impact Assessments before deployment, covering privacy, non-discrimination, and effective remedies.
- Oversight needs resources: Effective implementation requires well-resourced supervisory bodies with fundamental rights expertise, not just technical or market-surveillance capabilities.
Why High-risk AI Demands Fundamental Rights Scrutiny
Artificial intelligence continues to reshape how governments, businesses, and institutions make decisions that profoundly affect people’s lives. From determining asylum claims to screening job applicants, AI systems are increasingly embedded in high-stakes processes where errors or biases can violate fundamental rights. The European Union Agency for Fundamental Rights (EU FRA) has published a landmark 2025 report — Assessing High-risk Artificial Intelligence: Fundamental Rights Risks — that provides the most detailed analysis to date of how the EU AI Act can protect citizens from AI-driven harm.
The stakes are immense. According to a recent Eurobarometer survey, 83% of EU citizens believe public authorities must shape AI and digital technologies to respect fundamental rights and values. Yet the FRA’s research, drawing on 38 semi-structured interviews with AI providers, deployers, and experts across Germany, Ireland, the Netherlands, Spain, and Sweden, reveals that most organisations developing or using high-risk AI systems do not yet perform structured assessments that comprehensively address fundamental rights.
This guide breaks down the FRA report’s findings, examines the critical provisions of the AI Act, and explains what organisations must do to achieve compliance while genuinely protecting the rights of individuals affected by AI-driven decisions. For those navigating the evolving landscape of AI governance frameworks, this analysis provides an essential foundation.
Understanding the EU AI Act Risk-Based Framework
The EU AI Act, adopted in 2024 and in force since 1 August 2024, is the world’s first comprehensive region-wide regulation of artificial intelligence. Its central architecture is a risk-based approach that categorises AI systems into four tiers, each subject to a different level of regulatory scrutiny (summarised in the sketch after the list below):
- Prohibited AI practices: Systems that pose unacceptable risks, including social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions), and AI that exploits vulnerabilities of specific groups.
- High-risk AI systems: Systems used in sensitive areas listed in Annex I (product safety) and Annex III (specific use-cases) that must comply with stringent requirements including risk management, data governance, transparency, human oversight, and conformity assessment.
- Limited-risk systems: AI applications such as chatbots and deepfake generators, which are subject to transparency obligations requiring that people be informed they are interacting with AI or viewing AI-generated content.
- Minimal-risk systems: All other AI applications not covered by specific provisions, which remain largely unregulated under the Act.
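To make the tiered structure concrete, the following minimal Python sketch maps each tier to the headline obligations described above. The tier names, enum values, and obligation summaries are illustrative labels for this article, not terms defined by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the AI Act's four risk tiers."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

# Headline obligations per tier, paraphrased from the summary above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "Banned outright (e.g. social scoring by governments), "
                         "subject to narrow exceptions defined in the Act.",
    RiskTier.HIGH_RISK: "Risk management, data governance, transparency, "
                        "human oversight, and conformity assessment.",
    RiskTier.LIMITED_RISK: "Transparency obligations, e.g. disclosing that "
                           "a user is interacting with AI.",
    RiskTier.MINIMAL_RISK: "No specific obligations under the Act.",
}

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {OBLIGATIONS[tier]}")
```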
The FRA report focuses squarely on high-risk AI systems — the category where regulatory requirements are most demanding and where fundamental rights protections are most critical. As the European Commission’s AI policy framework makes clear, the high-risk category is designed to ensure that AI systems operating in sensitive domains undergo rigorous assessment before and during deployment.
Understanding how EU AI Act compliance works in practice is essential for any organisation developing or deploying AI in regulated sectors.
Defining AI Systems Under the AI Act
A foundational question underpinning the entire AI Act is deceptively simple: what counts as an “AI system”? Article 3(1) defines it as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The FRA’s first and arguably most important opinion is that this definition must be interpreted broadly. The agency warns that excluding simpler systems — such as logistic regression models or traditional rule-based software — from the AI system definition would create dangerous gaps in protection. The reasoning is compelling: a simple statistical model used to determine whether someone qualifies for social benefits can cause just as much harm to fundamental rights as a complex neural network performing the same task.
After the FRA’s fieldwork, the European Commission published guidelines (C(2025) 924 final) on the definition of an AI system. The FRA notes that these guidelines exclude some simpler methods historically used in decision-making, and criticises this approach because such systems “may perform equally well and still cause harms” to individuals. This tension between technical sophistication and real-world impact lies at the heart of effective AI regulation.
High-risk AI Classification and the Annex III Categories
The AI Act establishes two main routes through which an AI system can be classified as high-risk. The first involves AI systems used as safety components of products covered by existing EU product safety legislation listed in Annex I, which require third-party conformity assessment. The second — and more relevant to the FRA’s analysis — involves AI systems deployed in specific use-case areas listed in Annex III.
Annex III covers eight critical domains where AI systems interact most directly with fundamental rights:
- Biometrics: Remote biometric identification and categorisation of natural persons.
- Critical infrastructure: Management and operation of critical digital infrastructure, road traffic, and energy supply.
- Education and vocational training: Systems determining access to educational institutions, evaluating learning outcomes, or monitoring student behaviour.
- Employment and workers’ management: Recruitment tools, promotion decisions, task allocation, performance monitoring, and termination decisions.
- Access to essential services: Creditworthiness assessments, risk assessment and pricing for life and health insurance, evaluation of eligibility for public benefits and services.
- Law enforcement: Individual risk assessments, polygraph tools, evidence reliability evaluation, crime analytics, and profiling for detection and investigation.
- Migration, asylum, and border management: Polygraph and assessment tools, risk indicators, identity verification, and applications processing.
- Administration of justice and democratic processes: AI systems used to assist judicial authorities in researching, interpreting, and applying the law.
The FRA’s research specifically examined AI use-cases in five of these domains — asylum, education, employment, law enforcement, and public benefits — across five EU Member States. The findings reveal a landscape where AI deployment is advancing rapidly, but structured fundamental rights assessment remains the exception rather than the rule.
The Article 6(3) Filter and Its Fundamental Rights Implications
Perhaps the most consequential provision the FRA examines is the Article 6(3) “filter” — a mechanism allowing AI providers to exclude their Annex III systems from high-risk classification if they meet any of four conditions. A system can be excluded if it performs only a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing human assessment, or performs a preparatory task for an assessment relevant to the Annex III use-cases.
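To illustrate the structure of the filter (not its legal interpretation), here is a minimal Python sketch of the self-assessment logic a provider might apply. The field names and the `is_excluded_from_high_risk` helper are hypothetical; the four boolean conditions mirror the ones listed above, and the profiling override in Article 6(3), under which a system that profiles natural persons always remains high-risk, is included for completeness.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Hypothetical record of a provider's Article 6(3) self-assessment."""
    narrow_procedural_task: bool          # (a) performs only a narrow procedural task
    improves_prior_human_activity: bool   # (b) improves the result of a completed human activity
    detects_patterns_only: bool           # (c) detects decision patterns without replacing
                                          #     or influencing the human assessment
    preparatory_task_only: bool           # (d) performs a preparatory task for an Annex III assessment
    performs_profiling: bool              # profiling of natural persons keeps the system high-risk

def is_excluded_from_high_risk(system: AnnexIIISystem) -> bool:
    """Return True if the provider could claim the Article 6(3) exclusion.

    The FRA's concern is precisely that these booleans are set by the
    provider itself, so a broad reading of any one condition removes
    the system from the high-risk regime.
    """
    if system.performs_profiling:
        return False  # profiling of natural persons is always high-risk
    return any([
        system.narrow_procedural_task,
        system.improves_prior_human_activity,
        system.detects_patterns_only,
        system.preparatory_task_only,
    ])

# Example: a language-assessment tool framed as "preparatory" in asylum procedures.
language_tool = AnnexIIISystem(
    narrow_procedural_task=False,
    improves_prior_human_activity=False,
    detects_patterns_only=False,
    preparatory_task_only=True,
    performs_profiling=False,
)
print(is_excluded_from_high_risk(language_tool))  # True, despite its real-world impact
```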
The FRA raises serious concerns about this filter. Because its application relies primarily on provider self-assessment, there is a significant risk that providers will interpret the conditions broadly to avoid the cost and burden of high-risk compliance. The report provides a compelling example: a language assessment AI used in asylum procedures to inform country-of-origin determinations. Even if characterised as a “preparatory” task under the filter, an erroneous or biased language classification could materially affect an asylum decision if the human decision-maker does not effectively correct the AI’s output.
The FRA’s second and third opinions directly address this risk. The agency calls on the European Commission and the AI Board to provide a clear, narrow understanding of the filter conditions. Furthermore, national competent authorities should proactively monitor how providers apply the filter, paying special attention to systems in law enforcement and migration — areas where the EU database has limited registration transparency due to exceptions under Article 49(4).
If evidence shows that providers are interpreting the filter too broadly, the Commission has the power under Article 6(7) to delete filter conditions entirely via delegated acts. The FRA explicitly recommends that this power be exercised if fundamental rights protection is being undermined.
Fundamental Rights Impact Assessments Under Article 27
Article 27 of the AI Act introduces the Fundamental Rights Impact Assessment (FRIA) — a mandatory evaluation that certain deployers of high-risk AI systems must conduct before putting those systems into use. The FRIA requirement applies to deployers of most Annex III high-risk systems, with the notable exception of critical infrastructure operators.
The FRIA is designed to be a structured, forward-looking assessment of how an AI system may affect the fundamental rights of individuals and groups. According to the FRA, which assisted the European AI Office in developing the FRIA template (still under development at the time of the report’s publication), an effective assessment must cover three core cross-cutting rights:
- Privacy and data protection: How the system collects, processes, and stores personal data, and whether it may reveal private information about individuals.
- Equality and non-discrimination: Whether the system may produce biased outputs that disadvantage certain groups based on protected characteristics such as race, gender, age, disability, or ethnicity.
- Access to effective remedies: Whether individuals affected by AI-driven decisions have meaningful pathways to challenge those decisions and obtain redress.
Beyond these three pillars, the FRA emphasises that FRIAs must also address rights specific to each high-risk domain. In education, this includes the rights of the child. In employment, workers’ rights to fair conditions. In asylum procedures, the right to international protection. The FRA’s fourth opinion calls for guidance that provides practical examples tailored to each Annex III area.
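As an illustration of how a deployer might structure such an assessment internally, here is a minimal Python sketch of a FRIA record covering the three cross-cutting rights plus domain-specific ones. The official Article 27 template was still in development at the time of the report, so every field name and example value below is an assumption, not the official format.

```python
from dataclasses import dataclass, field

@dataclass
class RightsAssessment:
    """One assessed right: identified risks and the planned mitigations."""
    right: str
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

@dataclass
class FRIARecord:
    """Hypothetical internal structure for an Article 27 assessment."""
    system_name: str
    annex_iii_area: str
    affected_groups: list[str]
    # Three cross-cutting rights the FRA says every FRIA should cover.
    privacy_and_data_protection: RightsAssessment
    equality_and_non_discrimination: RightsAssessment
    access_to_effective_remedies: RightsAssessment
    # Domain-specific rights, e.g. rights of the child in education.
    domain_specific: list[RightsAssessment] = field(default_factory=list)

fria = FRIARecord(
    system_name="CV screening tool",
    annex_iii_area="Employment and workers' management",
    affected_groups=["job applicants"],
    privacy_and_data_protection=RightsAssessment(
        "Privacy and data protection",
        identified_risks=["profiles applicants from scraped public data"],
        mitigations=["data minimisation", "retention limits"],
    ),
    equality_and_non_discrimination=RightsAssessment(
        "Equality and non-discrimination",
        identified_risks=["historical hiring data may encode gender bias"],
        mitigations=["bias testing before and after deployment"],
    ),
    access_to_effective_remedies=RightsAssessment(
        "Access to effective remedies",
        identified_risks=["rejected applicants receive no explanation"],
        mitigations=["notify applicants and offer a human review channel"],
    ),
)
```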
The FRA’s research findings are sobering: current practice among AI providers and deployers focuses predominantly on data protection compliance (largely driven by GDPR obligations) and technical risk management. Some organisations conduct bias testing, but this is inconsistent and rarely embedded in comprehensive fundamental rights frameworks. The gap between what the AI Act requires and what organisations currently do is substantial.
Key Fundamental Rights Risks Identified by the EU FRA
Drawing on its interviews with providers, deployers, and experts, as well as focus groups with rights holders (18 members of the public), the FRA identifies several categories of fundamental rights risks that high-risk AI systems pose.
Privacy and Data Protection Risks
AI systems operating in high-risk areas inevitably process large volumes of personal data, often including sensitive categories such as health information, biometric data, or data revealing ethnic origin. The risk extends beyond data breaches — AI systems can infer private information about individuals from seemingly innocuous data points, creating new privacy violations that traditional data protection frameworks were not designed to address.
Discrimination and Bias Risks
The FRA highlights that AI systems can perpetuate and amplify existing societal biases. In employment screening, systems trained on historical hiring data may systematically disadvantage women or ethnic minorities. In credit scoring, algorithms may produce outcomes that disproportionately deny access to financial services for vulnerable communities. The insidious nature of algorithmic bias is that it can operate at scale, affecting thousands or millions of decisions simultaneously.
Vulnerability and Power Asymmetry
Vulnerable groups — including asylum seekers, children, people with disabilities, and those relying on public benefits — face disproportionate risks from AI-driven decision-making. These individuals often have the least capacity to understand, challenge, or seek remedies for AI decisions that affect them. The FRA’s research in asylum contexts demonstrates how language assessment AI, if flawed, can have devastating consequences for individuals seeking international protection.
Opacity and Explainability Challenges
Many AI systems operate as opaque decision-making tools. When individuals cannot understand why a decision was made, their ability to exercise the right to an effective remedy is fundamentally compromised. The FRA emphasises that transparency and explainability are not merely technical requirements — they are prerequisites for the meaningful exercise of fundamental rights.
Risk Management and Mitigation Strategies for AI Compliance
Article 9 of the AI Act requires providers of high-risk AI systems to implement comprehensive risk management systems. The FRA report examines current mitigation practices and identifies both strengths and critical gaps.
Human Oversight: Necessary but Insufficient
Human oversight (required under Articles 14 and 26(2)) is frequently cited as a primary mitigation measure. However, the FRA raises a crucial warning: the effectiveness of human oversight depends entirely on its design and implementation. Research on automation bias demonstrates that human decision-makers tend to over-rely on AI outputs, especially under time pressure or when they lack the expertise to evaluate AI recommendations critically.
The FRA argues that organisations cannot simply claim “a human reviews all decisions” as an adequate mitigation. Effective human oversight requires trained personnel who understand both the AI system’s limitations and the relevant fundamental rights context, clear protocols for when and how to override AI recommendations, and adequate time and resources for meaningful human review.
Bias Testing and Technical Safeguards
The FRA found that some organisations conduct bias testing of their AI systems, but this practice is inconsistent and varies significantly across sectors and Member States. Standardised bias testing methodologies are still lacking, making it difficult to compare or benchmark performance. The agency’s sixth opinion calls for significant investment in research and testing facilities — including dedicated bias testing capabilities — as part of public AI investment programmes.
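Standardised methodologies are still lacking, but a minimal sketch of one commonly used fairness check, the demographic parity difference, shows what basic bias testing can look like in practice. The metric choice, the threshold, and the toy data are illustrative assumptions, not a methodology endorsed by the FRA or required by the Act.

```python
def selection_rate(decisions: list[int]) -> float:
    """Share of positive outcomes (e.g. shortlisted applicants) in a group."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in selection rates between two groups (0 = parity)."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Illustrative toy data: 1 = shortlisted, 0 = rejected.
outcomes_group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
outcomes_group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

gap = demographic_parity_difference(outcomes_group_a, outcomes_group_b)
THRESHOLD = 0.2  # illustrative internal review threshold, not a legal standard
print(f"Demographic parity difference: {gap:.3f}")
if gap > THRESHOLD:
    print("Flag for review: outcome rates differ substantially between groups.")
```

A single metric like this cannot establish compliance on its own; the point is that even simple, repeatable checks make disparities visible and auditable, which is a precondition for the fuller fundamental rights framing the FRA calls for.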
Data Governance and Quality
Ensuring high-quality, representative training data is a fundamental prerequisite for mitigating discrimination risks. The AI Act imposes specific data governance requirements on high-risk system providers, but the FRA notes that compliance remains challenging, particularly when systems are trained on historical data that may embed past discrimination patterns.
Organisations seeking to build robust AI risk management strategies must go beyond technical compliance and embed fundamental rights considerations into every stage of the AI lifecycle.
Oversight, Enforcement, and the Role of Rights-Protection Bodies
The FRA dedicates significant attention to the institutional architecture needed for effective AI Act implementation. The report identifies a critical tension: the AI Act’s conformity assessment regime for many Annex III systems relies on internal control — essentially self-assessment by providers — under Article 43.
This self-assessment approach raises fundamental questions about accountability. Without robust external oversight, there is a risk that providers will interpret their obligations narrowly, particularly given the commercial pressures to minimise compliance costs. The FRA argues that self-assessment must be complemented by effective, independent oversight by bodies with both technical capabilities and fundamental rights expertise.
The EU Database for High-risk AI Systems
Article 71 establishes an EU-wide database for the registration of high-risk AI systems, designed to increase public transparency. However, the FRA notes important limitations: not all documentation must be made public, and Article 49(4) provides exceptions for systems used in law enforcement, migration, asylum, and border management — precisely the areas where fundamental rights risks may be greatest and external scrutiny most needed.
Leveraging Existing Rights-Protection Infrastructure
The FRA recommends leveraging existing institutional infrastructure for AI oversight. Data protection authorities, already experienced in technology regulation through GDPR enforcement, are given market surveillance responsibilities in certain areas under Article 74(8). Equality bodies, national human rights institutions, ombudspersons, and consumer protection agencies each bring relevant expertise.
Article 77 enables public bodies responsible for protecting fundamental rights to access AI Act documentation and request that market surveillance authorities organise technical testing of high-risk systems. This provision is potentially powerful but requires that these bodies have adequate resources to exercise it effectively.
The FRA’s seventh and final opinion emphasises that implementation will significantly increase the workload of oversight bodies, many of which are already resource-constrained (as documented in FRA’s 2024 GDPR in Practice report). Without additional financial, human, and technical resources, the AI Act’s oversight provisions risk remaining aspirational.
FRA Opinions and Recommendations for Effective Implementation
The FRA report culminates in seven formal opinions that together form a comprehensive roadmap for effective AI Act implementation from a fundamental rights perspective:
- Broad interpretation of AI system definition: Include lower-complexity systems that can still cause fundamental rights harm. The Commission should encourage market surveillance authorities to apply an inclusive interpretation.
- Narrow filter application: The Commission and AI Board must provide clear, restrictive guidance on the Article 6(3) filter to prevent providers from self-excluding systems that genuinely affect fundamental rights.
- Proactive filter monitoring: Authorities should monitor filter application using multiple evidence sources, with special attention to areas with limited transparency. If broad interpretation persists, the Commission should exercise its delegated act power to delete filter conditions.
- Comprehensive FRIA guidance: Standards under Article 9 and the FRIA template under Article 27 must cover privacy, equality, effective remedies, plus domain-specific rights with practical examples.
- Practical implementation guidance: Any proposals to simplify rules must be evidence-based and must not reduce fundamental rights protection.
- Investment in evidence base: The Commission and Member States should fund research on how AI affects fundamental rights and effective mitigation practices, including bias testing facilities as part of public AI investments.
- Resourced independent oversight: Self-assessment must be complemented by bodies with fundamental rights expertise and adequate resources. Existing rights-protection institutions must be strengthened.
These recommendations carry significant weight. As an EU agency with a mandate to provide evidence-based advice on fundamental rights, the FRA’s opinions inform both EU institutions and national governments. Organisations preparing for AI Act compliance would be well-advised to treat these opinions as an authoritative interpretation of where the regulatory emphasis will fall in the years ahead.
Frequently Asked Questions
What is a high-risk AI system under the EU AI Act?
Under the EU AI Act, a high-risk AI system is one that falls into specific use-case categories listed in Annex III (such as biometrics, law enforcement, education, employment, and access to essential services) or is a safety component of products covered by Annex I requiring third-party conformity assessment. These systems are subject to the most stringent regulatory requirements because they pose the greatest potential risks to fundamental rights.
What is a Fundamental Rights Impact Assessment (FRIA) and who must conduct one?
A Fundamental Rights Impact Assessment (FRIA) is a structured evaluation required under Article 27 of the AI Act. Certain deployers of high-risk AI systems listed in Annex III must conduct FRIAs before putting systems into use. The assessment evaluates how the AI system may affect fundamental rights including privacy, non-discrimination, and access to effective remedies, and identifies mitigation measures to address identified risks.
How does the Article 6(3) filter affect high-risk AI classification?
The Article 6(3) filter allows providers of AI systems in Annex III categories to exclude their systems from high-risk classification if they meet certain conditions, such as performing only narrow procedural tasks or not materially influencing decision outcomes. The EU FRA warns this filter could create loopholes if interpreted too broadly, and recommends narrow application with active monitoring by the European Commission and national authorities.
What fundamental rights are most at risk from AI systems?
According to the EU FRA report, the fundamental rights most at risk include the right to privacy and data protection, the right to non-discrimination and equality, the right to an effective remedy, and rights specific to certain contexts such as the right to asylum, the rights of the child in education, workers’ rights in employment, and access to essential public services. Vulnerable groups face disproportionate risks from AI-driven decision-making.
When does the EU AI Act take full effect for high-risk systems?
The EU AI Act entered into force on 1 August 2024 as a directly applicable EU regulation. Its provisions are being phased in, with prohibitions on certain AI practices taking effect first, followed by requirements for high-risk AI systems. Full compliance obligations for high-risk systems, including risk management, conformity assessments, and fundamental rights impact assessments, are being progressively implemented through 2025-2027 as guidance, standards, and templates are developed.