Section E · Entity Type
What is your organisation's primary role in relation to the AI system?
Choose the role that best describes how your organisation relates to this AI system. Note: a deployer that rebrands a high-risk system under its own name or trademark, substantially modifies it, or changes an AI system's intended purpose so that it becomes high-risk is reclassified as a Provider under Article 25; see the sketch after the options below.
Provider
We develop the AI system (or have it developed) and place it on the market or put it into service under our own name or trademark
Deployer
We use an AI system built by someone else within our own operations or for our customers
Distributor or Importer
We make a third-party AI system available on the EU market without modifying it
Product Manufacturer
We embed AI into a physical product (e.g. medical device, vehicle, machinery)
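For teams wiring this questionnaire into a screening tool, the role question plus the Article 25 note reduces to a small reclassification check. A minimal sketch, assuming invented names (`EntityRole`, `effectiveRole`); it is illustrative only, not wording from the Act:

```ts
// Illustrative sketch: type and helper names are assumptions, not statutory terms.
type EntityRole =
  | "provider"
  | "deployer"
  | "distributor_or_importer"
  | "product_manufacturer";

interface DeployerModifications {
  rebrandedUnderOwnName: boolean;            // Art. 25(1)(a): own name or trademark
  substantiallyModifiedHighRisk: boolean;    // Art. 25(1)(b): substantial modification
  changedIntendedPurposeToHighRisk: boolean; // Art. 25(1)(c): new high-risk purpose
}

// A deployer that triggers any Article 25 condition takes on provider obligations.
function effectiveRole(role: EntityRole, mods: DeployerModifications): EntityRole {
  const triggersArticle25 =
    mods.rebrandedUnderOwnName ||
    mods.substantiallyModifiedHighRisk ||
    mods.changedIntendedPurposeToHighRisk;
  return role === "deployer" && triggersArticle25 ? "provider" : role;
}
```

Article 25 also reaches distributors and importers; the sketch keeps only the deployer case flagged in the note above.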
Section S · Geographic Scope
Does your AI system reach users or produce effects within the European Union?
The EU AI Act applies regardless of where your company is based. It covers any system whose output is used in the EU or that affects people located in the EU — including SaaS products, APIs, and decision tools used by EU clients. The test is sketched in code after the options below.
Yes — we operate in or serve users in the EU
→ EU AI Act applies in full
Partially — some EU exposure or planned EU expansion
→ Obligations apply to the EU-facing portion
No — exclusively outside the EU with no EU user impact
→ Currently out of scope — monitor if you expand
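The extraterritorial test above is essentially a disjunction: establishment, users, or output in the EU each create a nexus. A rough sketch with invented field names; real screening would distinguish degrees of exposure more finely than this three-way bucket:

```ts
// Field names are illustrative assumptions, not terms from the Act.
interface Footprint {
  establishedInEU: boolean;    // company or deployment based in the EU
  usersInEU: boolean;          // serves users located in the EU
  outputUsedInEU: boolean;     // e.g. EU clients consuming API results
  plannedEUExpansion: boolean; // no exposure today, but expansion planned
}

type EuExposure = "full" | "partial" | "none";

function euExposure(f: Footprint): EuExposure {
  // Any current EU nexus pulls the system into scope.
  if (f.establishedInEU || f.usersInEU || f.outputUsedInEU) return "full";
  return f.plannedEUExpansion ? "partial" : "none";
}
```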
Section GPAI · General Purpose AI
Is your system a general-purpose AI model — one trained on large amounts of data that displays significant generality and can competently perform a wide range of distinct tasks?
This covers large language models (LLMs), foundation models, multimodal models, and image/code/audio generation models. Examples: building your own LLM, fine-tuning an open-source foundation model for release, or offering a model API to other developers. GPAI obligations under Articles 51–55 have applied since 2 August 2025.
Yes — we develop or release a general-purpose AI model
→ GPAI obligations under Art. 51–55 apply now
No — we use AI models built by others, or build task-specific AI
→ Continue to use-case classification
Section S · Scope Exclusions
Does your AI system fall into any of the following excluded categories?
The AI Act excludes certain uses from its scope, some only partially. Select all that apply, or choose "None of the above" to continue to the full classification.
✦ Select all that apply
Scientific research & development only
System is exclusively used for R&D purposes and not placed on the market
Open-source model (weights freely available)
Weights are publicly released under a free and open-source licence — note: the exemption is partial; prohibited practices (Art. 5), high-risk uses, Art. 50 transparency duties, and GPAI systemic-risk rules still apply (see the sketch at the end of this section)
Personal or purely private non-professional use
Used solely by an individual for their own personal, non-commercial purposes
Military, national defence, or national security
Exclusively for military, defence, or national security purposes, regardless of the type of entity carrying out those activities
None of the above
Our system doesn't fall into these exclusions
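The subtlety worth encoding here is the asymmetry flagged in the open-source option: the other exclusions remove a system from scope entirely, while the open-source exemption falls away for prohibited, high-risk, transparency-covered, or systemic-risk cases (Art. 2(12)). A sketch with invented names:

```ts
// Illustrative sketch only; names are assumptions, not statutory terms.
type Exclusion =
  | "research_only"       // exclusively R&D, not placed on the market
  | "open_source"         // weights under a free and open-source licence
  | "personal_use"        // purely private, non-professional use
  | "military_security";  // military, defence, or national security

interface RiskFlags {
  prohibitedPractice: boolean; // Art. 5
  highRisk: boolean;           // Annex III or Annex I
  transparencyDuty: boolean;   // Art. 50
  gpaiSystemicRisk: boolean;
}

// Returns true when the exclusion fully removes the system from scope.
function exclusionIsComplete(e: Exclusion, flags: RiskFlags): boolean {
  if (e !== "open_source") return true;
  // The open-source exemption is partial: it does not survive any of these.
  return !(
    flags.prohibitedPractice ||
    flags.highRisk ||
    flags.transparencyDuty ||
    flags.gpaiSystemicRisk
  );
}
```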
Section R · Article 5 · Prohibited Practices
Do any of the following describe your AI system's functions?
Most AI systems don't fall into these categories. Article 5 prohibited practices have been enforceable since 2 February 2025. Select any that apply — or choose "None of the above" to continue.
✦ Select all that apply
Subliminal manipulation causing psychological harm
Techniques that operate below conscious awareness to distort a person's behaviour in a harmful way
⚠ Article 5(1)(a) — Prohibited
Exploiting vulnerabilities of specific groups
Exploiting people's age, disability, or a specific social or economic situation to distort their behaviour harmfully
⚠ Article 5(1)(b) — Prohibited
Social scoring of individuals by public or private entities
Evaluating persons over time across unrelated contexts leading to detrimental treatment
⚠ Article 5(1)(c) — Prohibited
Predicting criminal risk based solely on profiling or personality
AI-based individual risk assessment for criminal behaviour without objective, verifiable facts
⚠ Article 5(1)(d) — Prohibited
Untargeted scraping of facial images to build recognition databases
Mass collection from internet or CCTV to expand facial recognition datasets
⚠ Article 5(1)(e) — Prohibited
Emotion recognition in workplaces or educational institutions
Inferring employees' or students' emotional states in professional or academic settings (medical and safety uses are exempt)
⚠ Article 5(1)(f) — Prohibited
Biometric categorisation to infer protected characteristics
Deducing race, political opinions, religion, sexual orientation, or trade union membership from biometric data
⚠ Article 5(1)(g) — Prohibited
Real-time remote biometric identification in public spaces
Live identification of people in publicly accessible areas (narrow law-enforcement exceptions apply — legal review required)
⚠ Article 5(1)(h) — Prohibited except narrow exceptions
None of the above
Our system does not perform any of these functions
→ Continue to use-case classification
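Unlike the later screens, this one needs no weighing: a single selected practice is enough to stop the assessment. A sketch of that short-circuit, with labels invented for illustration:

```ts
// Labels are illustrative shorthand for the Article 5(1)(a)-(h) practices above.
const ARTICLE_5_PRACTICES = [
  "subliminal_manipulation",       // 5(1)(a)
  "exploiting_vulnerabilities",    // 5(1)(b)
  "social_scoring",                // 5(1)(c)
  "criminal_risk_profiling",       // 5(1)(d)
  "facial_image_scraping",         // 5(1)(e)
  "workplace_emotion_recognition", // 5(1)(f)
  "biometric_categorisation",      // 5(1)(g)
  "realtime_remote_biometric_id",  // 5(1)(h)
] as const;

type Article5Practice = (typeof ARTICLE_5_PRACTICES)[number];

// Any selection short-circuits the whole questionnaire to "prohibited".
function isProhibited(selected: Article5Practice[]): boolean {
  return selected.length > 0;
}
```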
Section HR · Annex III · High-Risk Use Cases
Does your AI system perform any of the following functions?
These use cases are explicitly listed in Annex III of the EU AI Act as high-risk. Selecting any triggers the full Article 9–15 obligations, unless the narrow Article 6(3) derogation for AI performing only procedural or preparatory tasks applies (sketched after the options below). Select all that apply.
✦ Select all that apply
Biometric identification or categorisation of persons
Remote identity verification, face matching, or biometric categorisation systems
→ Annex III 1 — High-risk
Critical infrastructure management
AI used as a safety component in critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity
→ Annex III 2 — High-risk
Education and vocational training
Determining access to educational institutions, evaluating students, monitoring behaviour
→ Annex III 3 — High-risk
Employment, HR, and workforce management
Recruitment, CV screening, interview evaluation, performance monitoring, task allocation, contract termination
→ Annex III 4 — High-risk
Access to essential private and public services
Credit scoring, loan or insurance decisions, welfare benefit eligibility, emergency service routing
→ Annex III 5 — High-risk
Law enforcement
Risk assessment for criminal behaviour, polygraphs, forensic evidence evaluation, crime analytics
→ Annex III 6 — High-risk
Migration, asylum, and border control
Risk assessment for irregular migration, document verification, visa or asylum application processing
→ Annex III 7 — High-risk
Administration of justice and democratic processes
AI assisting courts, applying the law, or influencing elections or democratic decision-making
→ Annex III 8 — High-risk
None of the above
Our system does not perform any Annex III functions
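Structurally, this step is a presumption plus a derogation: any selected category is presumptively high-risk unless Article 6(3) applies, and the derogation never covers systems that profile natural persons. A simplified sketch (the real derogation has further conditions):

```ts
// Category labels are illustrative shorthand for Annex III points 1-8 above.
type AnnexIIICategory =
  | "biometrics"
  | "critical_infrastructure"
  | "education"
  | "employment"
  | "essential_services"
  | "law_enforcement"
  | "migration_border"
  | "justice_democracy";

function isHighRiskAnnexIII(
  selected: AnnexIIICategory[],
  onlyPreparatoryTasks: boolean, // Art. 6(3) derogation flag
  performsProfiling: boolean     // profiling always stays high-risk
): boolean {
  if (selected.length === 0) return false;
  if (performsProfiling) return true;
  return !onlyPreparatoryTasks;
}
```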
Section HR · Annex I · Regulated Products
Is your AI system a safety component of, or itself placed on the market as, a regulated product under EU law?
Regulated products include: medical devices, in vitro diagnostics, civil aviation equipment, motor vehicles, marine vessels, railway systems, lifts, machinery, toys, and personal protective equipment governed by EU sectoral safety legislation.
Yes — our AI is part of a regulated product (Annex I)
High-risk AI rules apply from 2 August 2027 for these systems
→ Annex I — High-risk (later timeline: Aug 2027)
No — standalone AI application, not embedded in a regulated product
Unsure — we may have some overlap with regulated products
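The practical difference between the two high-risk routes is timing. The sketch below hard-codes the Annex I date stated in this section and assumes 2 August 2026, the Act's general application date, for standalone Annex III systems; verify both dates for your case:

```ts
type HighRiskRoute = "annex_iii_standalone" | "annex_i_embedded";

// Deadline lookup for the two high-risk routes.
function complianceDeadline(route: HighRiskRoute): Date {
  return route === "annex_i_embedded"
    ? new Date("2027-08-02")  // AI embedded in Annex I regulated products
    : new Date("2026-08-02"); // assumed date for standalone Annex III systems
}
```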
Section R · Article 11 · Technical Documentation
How would you describe your current technical documentation and compliance records?
Article 11 requires a technical file covering system design, training data, risk management, and performance metrics. Article 10 requires documented data governance. This question helps calibrate your readiness score.
Full documentation — technical file and data governance records exist
We have structured, up-to-date records meeting Article 11 requirements
→ Strong compliance posture
Partial documentation — some records exist but gaps remain
We have internal docs but they're incomplete or not structured for regulatory review
→ Article 11 documentation gap
No formal documentation — nothing regulator-ready
We track things informally or not at all
→ Critical Art. 11 gap — high exposure
Not applicable — we are a pure deployer; the provider holds the documentation
We rely on the provider's technical file and have not modified the system
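For scoring purposes, the documentation answer maps naturally onto a numeric weight. The values below are invented purely for illustration and carry no regulatory meaning:

```ts
type DocState = "full" | "partial" | "none" | "not_applicable";

// Hypothetical calibration weights for a readiness score.
function documentationScore(state: DocState): number {
  switch (state) {
    case "full":           return 1.0; // Art. 11 technical file in place
    case "partial":        return 0.5; // records exist but gaps remain
    case "none":           return 0.0; // critical Art. 11 gap
    case "not_applicable": return 1.0; // pure deployer relying on the provider
  }
}
```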