EU AI Act Risk Categories: Each Category Explained

The EU AI Act is the most prominent legislation regulating businesses’ use of AI within the EU. If your business falls under its jurisdiction, you have likely heard of the law’s risk classification system for AI systems. To prepare before the law goes into effect, you must understand each of the EU AI Act risk categories.

The EU AI Act assigns AI systems to one of four risk categories: prohibited, high risk, limited risk, and minimal risk. The law uses a set of predetermined criteria to establish a system’s risk level, though the classification methodology has not yet been fully finalized.

To help your business better understand these levels of risk, we will cover each category in detail, along with how your business can determine the risk of an application you use.

Let’s dive in.

Key Takeaways

  • The EU AI Act regulates businesses within the EU that utilize AI in their operations and services. The law has set obligations businesses must follow based on the AI system's risk level.
  • The EU AI Act sorts AI systems into four categories: prohibited, high risk, limited risk, and minimal risk, based on their potential to manipulate, harm, or exploit consumers.
  • Your business can identify which risk category your AI system(s) belong to based on your business type and by using the example applications listed in the law’s text as a reference.

What is the EU AI Act & Why is it Important?

The EU AI Act is the world’s first large-scale law regulating businesses’ use of artificial intelligence and AI systems in their operations. The law was proposed in April 2021, and in December 2023, the Council of the EU and the European Parliament reached an agreement on its contents.

The European Commission will officially enforce the law, with full effectiveness expected around June 2026. The long runway reflects the nuance involved in regulating the many ways businesses use AI applications.

The general purpose of the EU AI Act is to create an established system of AI regulations and standards for the usage of AI within the EU. The law will protect consumers’ rights and information as they interact with the AI businesses use.

Some of the major provisions of the law are as follows:

  • Risk-based classification: Dividing AI systems into four categories (prohibited, high risk, limited risk, minimal risk) based on their potential to violate a consumer’s rights.
  • Transparency obligations: Businesses must explain and detail their use of AI and a system’s purpose to consumers.
  • Data governance: Sufficient data protection practices, including data privacy, proper disposal of old data, and clear explanations of all procedures.
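
To make the risk-based classification concrete, one way a business might track its own systems is a simple internal AI inventory tagged by risk tier. Below is a minimal Python sketch; the RiskCategory enum, the AISystem record, and its fields are our own illustrative assumptions, not structures defined by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers established by the EU AI Act."""
    PROHIBITED = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


@dataclass
class AISystem:
    """One entry in a hypothetical internal AI inventory (illustrative only)."""
    name: str
    purpose: str
    risk_category: RiskCategory
    user_disclosure_given: bool  # tracks the transparency obligation


# Example inventory: a customer-facing chatbot and a back-office spam filter.
inventory = [
    AISystem("support-chatbot", "customer service chat", RiskCategory.LIMITED, True),
    AISystem("spam-filter", "inbound email filtering", RiskCategory.MINIMAL, False),
]

for system in inventory:
    print(f"{system.name}: {system.risk_category.value}")
```

An inventory like this makes it easier to attach the right obligations, such as registration and transparency notices, to each system you use.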

The scope of the EU AI Act is still being determined, but it will likely be broad, on a scale similar to frameworks like the General Data Protection Regulation (GDPR). The law will presumably cover all businesses that utilize AI systems within the EU, with the only exceptions being military, legal, or law enforcement AI systems.

Prohibited AI Explained

The first classification of AI systems under the EU AI Act is the “prohibited” category. Systems or applications in this category are not allowed under any circumstances.

If your business uses these systems despite the ban, you can face fines of up to 35 million euros or 7% of your annual turnover from the previous year, whichever is higher.
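
Because the cap is “whichever is higher,” the applicable maximum is simply the larger of the fixed amount and the percentage of turnover. Here is a quick arithmetic sketch; the figures are the tiers described in this article, and the function name is our own.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Return the applicable fine cap: the higher of a fixed amount or a
    percentage of the previous year's annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)


# Prohibited-AI tier: 35 million euros or 7% of turnover, whichever is higher.
# For a business with 600M euros in turnover, 7% (42M) exceeds the 35M floor.
print(max_fine(35_000_000, 0.07, 600_000_000))  # 42000000.0
```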

Examples of applications of AI that are strictly prohibited are as follows:

  1. Systems that distort a person’s or group’s behavior in a way that causes them significant harm
  2. Systems that exploit a person’s or group’s vulnerabilities to alter their behavior and cause substantial harm
  3. Biometric categorization systems that classify people based on their attributes or characteristics
  4. Social scoring systems that classify people based on their behavior or characteristics, resulting in unfavorable treatment
  5. Systems that assess a person’s risk of committing a criminal or administrative offense
  6. Systems that assess a person’s risk of committing a second offense
  7. Systems that create facial recognition databases using images scraped from the internet or CCTV
  8. Systems that expand existing facial recognition databases using images from the internet or CCTV
  9. Systems that infer the emotions of people in law enforcement, border management, workplace, or education settings
  10. Systems that analyze recorded footage of public spaces with biometric identification, unless done as part of a criminal investigation for a serious offense

These systems or applications are categorized as unacceptable because they have been deemed:

  • Easily used to manipulate people or groups through subliminal messaging
  • Able to exploit people’s vulnerabilities, such as age or disabilities
  • Able to evaluate people based on their behavior or appearance

The prohibited category covers a broad range of AI systems, though lawmakers have yet to finalize some definitions of the specific uses the law intends to ban.

The only exceptions for prohibited AI systems are law enforcement uses in prosecuting serious crimes. This exception refers explicitly to real-time and retrospective (“post”) biometric identification software, which can only be used with court approval.

High Risk AI Under the EU AI Act

One step below prohibited AI systems is the “high risk” category. The EU AI Act defines high risk systems as AI that could negatively affect consumers’ safety and fundamental rights or that deals with consumers’ sensitive personal information.

Although these systems are not banned outright, businesses that utilize them are subject to heavy regulation under the EU AI Act. Failing to comply with the obligations attached to high risk systems can bring fines of up to 15 million euros or 3% of your annual turnover from the previous year, whichever is higher.

AI applications are determined to be high risk if they fit into one of the following two categories.

  1. Any AI system under the umbrella of the EU’s product safety legislation. This includes AI in toys, cars, airplanes, and medical equipment.
  2. Any of the AI applications explicitly listed in the EU AI Act. These systems must be registered in an EU database and assessed before use:
  • Biometric identification
  • Critical infrastructure management
  • Education/Vocational training
  • Employment and HR-related
  • Access to self-employment
  • Access to essential private and public services and benefits
  • Law enforcement
  • Migration, border control, asylum management
  • Interpretation of the law
  • Assistance in administering justice or application of the law

Any business that utilizes an application of AI in the high risk category must follow several obligations. These obligations demonstrate your business’s commitment to responsible AI and ensure your system does not infringe on consumers’ fundamental rights or threaten their safety.

The obligations are as follows:

  • Register your system in a public EU database
  • Establish a quality management system
  • Regularly conduct Fundamental Rights Impact Assessments (FRIAs)
  • Ensure accurate record-keeping
  • Practice sound data governance, including proper disposal of old data
  • Ensure adequate accuracy and security standards
  • Businesses that use interactive AI must inform consumers they are interacting with an AI system
  • Businesses that use biometric ID AI must obtain explicit consent unless the system is used in a law enforcement context
  • Businesses that use generative AI must inform consumers the content was created or altered artificially
  • Businesses that develop AI systems must implement a reporting system for serious incidents, meaning any incident that could have caused significant harm to a consumer (a simple sketch of such a record follows this list)
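
As an illustration of the record-keeping and incident-reporting obligations above, here is a hypothetical sketch of what a serious-incident log entry might look like. The structure, field names, and sample values are assumptions for illustration only; the Act does not prescribe this format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SeriousIncidentReport:
    """Hypothetical record for logging a serious incident involving an AI
    system. Field names are illustrative; the Act does not prescribe them."""
    system_name: str
    description: str
    potential_harm: str
    occurred_at: datetime
    reported_to_authority: bool = False
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# A made-up example entry for a hypothetical credit-scoring system.
report = SeriousIncidentReport(
    system_name="credit-scoring-model",
    description="Output appeared to penalize applicants from one postcode",
    potential_harm="Unfair denial of an essential service",
    occurred_at=datetime(2026, 7, 1, tzinfo=timezone.utc),
)
print(report.system_name, report.reported_to_authority)
```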

Exceptions for using high risk AI have yet to be declared officially. Still, in addition to law enforcement purposes, the European Parliament has proposed allowing these systems on a case-by-case basis. Some instances, like search platform algorithms, could be permitted, while others, such as certain biometric ID uses, may not be.

Limited Risk AI Under the EU AI Act

Unlike high risk AI systems, limited risk AI is considered to have only a small potential for manipulating consumers or harming their rights. These systems still possess characteristics that could be used to exploit or harm, but not to the degree of high risk AI.

Some examples of limited risk AI are:

  • Chatbots/interactive chats
  • “Deepfakes” or artificially generated content (picture, voice, video)
  • Artificially edited or altered content
  • Biometric ID systems that consumers consent to
  • Emotion recognition systems that consumers consent to

The only requirements for businesses that utilize applications in the limited risk category are transparency obligations. These obligations are as follows:

  • Businesses that use interactive AI must inform consumers they are interacting with an AI system
  • Businesses that use biometric ID AI must obtain explicit consent unless the system is used in a law enforcement context
  • Businesses that use generative AI must inform consumers the content was created or altered artificially

After your business has informed consumers that they are interacting with AI, they can decide whether to continue the interaction.
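
As a concrete illustration of the chatbot transparency obligation, a business might send a disclosure message before any AI-generated reply, giving the consumer the chance to opt out. This is a minimal sketch; the wording of the notice and the function names are our own assumptions, not text mandated by the law.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "You may end this conversation at any time."
)


def start_chat_session(send_message) -> None:
    """Deliver the AI disclosure before any AI-generated content, so the
    consumer can decide whether to continue the interaction."""
    send_message(AI_DISCLOSURE)


# Usage: pass any callable that delivers a message to the user.
start_chat_session(print)
```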

The penalty for failing to meet these obligations can reach up to 15 million euros or 3% of your annual turnover from the previous year, whichever is higher.

Minimal Risk AI Explained

The final risk category of AI systems is minimal risk AI. Minimal risk AI is considered to pose no threat and to have no potential to harm, manipulate, or exploit a consumer.

AI systems are categorized as minimal risk if they do not fit into any of the three higher categories. The risk they pose is insignificant or non-existent. Many applications considered minimal risk are already in common consumer use.

Some examples of minimal risk AI are as follows:

  • Spam filters
  • Ad blockers
  • AI used in video games
  • Inventory management systems

Given that minimal risk AI poses no threat to consumers’ safety, businesses can use these applications without obligations or requirements under the EU AI Act.

How Can I Verify Which Risk Category an Application is in?

If you are unsure what category your business’s AI system(s) fall into, we have some general criteria you can use to find out.

The first step is determining your obligations based on your business type and how you utilize your AI system(s) within your operations. For example, providers of AI software or applications have different requirements than deployers/developers of AI.

Next, evaluate the risk level of your AI system by matching it against the specific examples and instances listed in the EU AI Act. You can use this compliance checker to verify where your business’s applications fall and the corresponding obligations you must follow.
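
To make the matching exercise concrete, here is a simplified, hypothetical first-pass helper that maps a use-case description to a likely risk tier using keywords drawn from the example applications above. The keyword lists are illustrative and far from exhaustive, and the output is a starting point, not legal advice.

```python
PROHIBITED_USES = {"social scoring", "subliminal manipulation", "facial recognition scraping"}
HIGH_RISK_USES = {"biometric identification", "critical infrastructure", "employment",
                  "education", "law enforcement", "migration", "essential services"}
LIMITED_RISK_USES = {"chatbot", "deepfake", "generated content"}


def estimate_risk_category(use_case: str) -> str:
    """Very rough first-pass mapping of a use case to an EU AI Act risk
    tier, based on the example applications listed in the law.
    Always confirm against the legal text or a compliance professional."""
    use = use_case.lower()
    if any(term in use for term in PROHIBITED_USES):
        return "prohibited"
    if any(term in use for term in HIGH_RISK_USES):
        return "high risk"
    if any(term in use for term in LIMITED_RISK_USES):
        return "limited risk"
    return "minimal risk"


print(estimate_risk_category("customer service chatbot"))          # limited risk
print(estimate_risk_category("AI-assisted employment screening"))  # high risk
```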

After determining which risk category your AI system(s) fall into, the next step is to prepare. Although the EU AI Act won’t be fully enforced for a few years, certain parts of the act may start taking effect very soon, so your business should get a head start by utilizing the help of compliance professionals like Captain Compliance.

We have a team of compliance experts and a full suite of services to help your business prepare for the EU AI Act and ensure you remain fully compliant with all obligations and requirements.

FAQs

What are the key provisions of the EU AI Act?

The key provisions of the EU AI Act are data governance, risk-based classification of AI systems, required EU database registration, risk/quality management, transparency, security, and accuracy.

See our comprehensive overview of the EU AI Act here!

Is there a grace period for the EU AI Act?

The EU AI Act will have a two-year grace period until it is fully enforced to allow businesses to implement the required processes and regulations. However, prohibited AI systems will face a much shorter window to cease operation before facing penalties.

Click here to learn about the official enactment of the EU AI Act.

Who will the EU AI Act apply to?

The EU AI Act will affect any business that operates and utilizes AI systems in the EU or provides/deploys/distributes/imports/manufactures/represents AI systems to consumers or businesses in the EU.

Contact us and learn about our expertise in navigating compliance frameworks in the EU!

How will the EU AI Act be enforced?

The European Commission will primarily enforce the EU AI Act. However, each EU member state will also be granted authority to enforce, control, and fine businesses within its territory.

Learn more about the methods of enforcing the EU AI Act here.

What is the max fine for violating the EU AI Act?

The maximum fine for violating the EU AI Act is issued for using an AI system that falls into the prohibited risk category. These fines can reach up to 35 million euros or 7% of a business’s annual turnover from the previous year, whichever is higher.

See our detailed description of EU AI Act fines here!

How Can Captain Compliance Help?

Any business that uses AI systems, or that deals with AI systems interacting with consumers or businesses within the EU, must understand the EU AI Act. You must know which risk category your AI system falls into to properly prepare your business for the act’s official enforcement and avoid significant fines.

At Captain Compliance, we can not only help your business identify the applicable risk but also provide the necessary services to ensure you remain compliant. We also bring years of compliance experience and extensive knowledge to help you meet all obligations and requirements under the EU AI Act and any other relevant laws applicable to your business.

Get in touch with us today to schedule your complimentary consultation and learn how we can help your business prepare for anything the EU AI Act throws your way.