EU AI Act Compliance Checklist: Everything You Must Know

By now, you're probably aware that European lawmakers have debated and negotiated a comprehensive framework for the EU Artificial Intelligence Act. When this Act is fully enforced, your business must ensure that it meets the correct level of EU AI Act compliance. So, how does your business do this?

The world of data privacy and AI protection laws can be overwhelming for businesses to navigate, especially with new laws like the EU AI Act (and its subsequent amendments) popping up left and right.

That’s why we made this EU AI Act compliance checklist for your business to follow when the AI Act becomes fully enforceable.

Let's dive in.

Key Takeaways

  • The EU AI Act is a landmark law that will regulate how businesses use artificial intelligence and provide them with a set of AI best practices.
  • The European Commission classifies AI systems as prohibited if they threaten a person's fundamental rights, as outlined in Article 2 of the Treaty on European Union.
  • Businesses that use high-risk AI systems must register with the EU public database, establish a quality management system, and maintain appropriate levels of cybersecurity.

EU AI Act Explained

The new EU AI Act is a landmark law that will regulate how businesses use artificial intelligence and provide them with a set of ethical AI best practices.

This law has been a long time in the making, with its first proposal back in April 2021; the Council of the EU and the European Parliament only reached a political agreement on December 9, 2023.

The AI Act is not yet fully enforceable, as parts of the final text still need to be formally adopted. Businesses should expect most provisions to apply after a transition period of roughly two years, which means the Act's full effective date will likely fall in 2026.

Enforcement of the EU AI Act will likely be coordinated by a new, centralized European Artificial Intelligence Office within the European Commission.

The scope of the AI Act is expected to be rather broad, covering both public and private sectors that make use of AI systems. Provisions of the AI Act can be summed up into three parts:

  • Risk-based approach
  • Transparency
  • Data governance

Overall, the AI Act plans to classify AI systems into four categories of risk, with "minimal/low" posing the least threat and "unacceptable" posing the most. The level of risk determines how strict the rules are.

Along with stricter rules on the use of artificial intelligence, the EU AI Act will require businesses to be transparent about which AI systems they are using and to be able to explain the AI's logic in a clear and accessible manner.

Prohibited AI Under the EU AI Act

A large portion of the EU AI Act is devoted to regulating AI systems that are deemed a threat and are therefore prohibited under the scope of the Act.

The European Commission classifies AI systems as prohibited if they threaten the fundamental rights of a person as outlined in Article 2 of the Treaty on European Union.

On January 21, 2024, the latest draft of the AI Act was released, which listed AI systems considered to be unacceptable and therefore prohibited from use.

These systems included:

  • AI systems that deploy subliminal, manipulative, or deceptive techniques: which can distort and impair decision-making
  • AI systems that exploit vulnerabilities: using a person's age, sex, disability or socio-economic circumstances to distort or impair their decision-making
  • AI systems that use biometric categorization systems: deducing a person's sensitive attributes like race, political affiliations, religious or philosophical beliefs and sexual orientation from biometric data
  • AI systems that use social scoring: classifying a person based on personal traits and social behavior
  • AI systems that assess the risk of an individual committing criminal offenses: assigning risk based on a person's personal traits or profiling
  • AI systems that compile facial recognition databases: scraping facial images from CCTV footage or the internet in an untargeted manner
  • AI systems that infer emotions in workplaces or educational institutions: recognizing emotions in the workplace or educational setting to create a surveillance environment
  • AI systems that use real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement

There are some exemptions to the above prohibited artificial intelligence systems if they are used exclusively for military or defense, research and innovation, or certain law enforcement purposes.

While AI systems that use real-time remote biometric identification (RBI) in public spaces are prohibited, there are a number of exceptions for law enforcement.

These exemptions include:

  • Using RBI to search for missing, abducted, trafficked or sexually exploited persons
  • Using RBI if there is a threat to life or the public's safety
  • Using RBI when no considerable damage will occur
  • Using RBI before deployment, provided a fundamental rights impact assessment is carried out beforehand, as the EU AI Act requires

EU AI Act Compliance for High-Risk AI

While unacceptable AI systems are prohibited, the EU AI Act does allow businesses to make use of high-risk AI systems, but there are strict AI regulations that your business must follow to remain compliant with the law.

According to the Act, high-risk AI systems are systems that can significantly affect a person's health, safety or fundamental rights, for example by profiling a person using personal data to assess their work performance, health, interests, movements and behavior.

If your business is using a high-risk AI system, you will need to ensure you are following the requirements set out by the European Commission.

Register in a Public EU Database

In keeping with the Act's transparency goals, the European Commission will require all businesses that use high-risk AI to register their systems in the public EU database.

Not all high-risk AI systems need to be registered. The first category, products already covered by the EU's product safety legislation (such as toys, cars and medical devices that use AI), does not need to be registered.

However, the second category, which falls under the following areas, will need to be registered in the database:

  • Operation of critical infrastructure
  • Education and vocational training
  • Employment, self-employment and worker management
  • Access to essential private and public services
  • Legal interpretation and application of the law
  • Law enforcement
  • Asylum, migration and border control management
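
To make this step concrete, here's a minimal sketch of the kind of information your business might gather before registering a high-risk system in the database. The field names and the completeness check are illustrative assumptions, not the official database schema.

```python
# Hypothetical sketch of a registration record for the EU database.
# Field names are illustrative only, not the official schema.
registration_record = {
    "provider": "Example Corp",
    "system_name": "CV screening assistant",
    "version": "1.4",
    "risk_area": "Employment, self-employment and worker management",
    "intended_purpose": "Rank job applications for human recruiters to review",
    "contact": "compliance@example.com",
    "conformity_assessment_completed": True,
}

# Simple completeness check before submission.
missing = [key for key, value in registration_record.items() if value in (None, "", False)]
assert not missing, f"Complete these fields before submitting: {missing}"
```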

Establish a Quality Management System

According to Article 17 of the EU AI Act, businesses will need to establish a quality management system that helps ensure compliance with the AI regulations. This system must be maintained throughout your AI system's entire lifecycle.

The quality management system should include the following:

  • A strategy to ensure your business remains compliant
  • Procedures for any modifications to your high-risk system
  • Procedures and techniques used for the design, design control and design verification of your system
  • Technical specifications about the system
  • Procedures and systems for data management and data protection measures

You can find the full list of requirements in the latest draft.
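
To give you an idea of how this documentation could be tracked internally, here's a minimal Python sketch of a QMS checklist. The item names mirror the list above; the owners, review dates and evidence fields are assumptions for illustration, not anything prescribed by the Act.

```python
# Hypothetical sketch: tracking Article 17 quality management documentation.
# Item names mirror the checklist above; owners and fields are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class QmsItem:
    name: str
    owner: str
    last_reviewed: date | None = None
    evidence: list[str] = field(default_factory=list)  # links to policies, test reports, etc.

    @property
    def complete(self) -> bool:
        return self.last_reviewed is not None and bool(self.evidence)

qms = [
    QmsItem("Regulatory compliance strategy", owner="Compliance lead"),
    QmsItem("Change-management procedures for the high-risk system", owner="Engineering"),
    QmsItem("Design, design control and design verification procedures", owner="Engineering"),
    QmsItem("Technical specifications of the system", owner="Engineering"),
    QmsItem("Data management and data protection procedures", owner="DPO"),
]

outstanding = [item.name for item in qms if not item.complete]
print("Outstanding QMS items:", outstanding)
```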

Perform Fundamental Rights Impact Assessments

Before you can begin using your AI system, the AI Act requires your business to conduct a fundamental rights impact assessment (FRIA).

This is similar to a DPIA required under the GDPR and is used to identify any potential risks your AI poses to fundamental rights or to your data subjects' rights. An FRIA is also used to evaluate how severe the risks are and how likely they are to occur.

Once these risks have been evaluated, your business will need to implement measures to mitigate them, ensuring that you are using AI responsibly and protecting data. Make sure you document your FRIA process for compliance.
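
To show what that evaluation can look like in practice, here's a simple sketch of an FRIA risk register that scores each risk by severity and likelihood and pairs it with a mitigation. The scoring scales and example risks are assumptions for illustration, not an official template.

```python
# Illustrative sketch of a simple FRIA risk register (not an official template).
# Severity/likelihood scales and the example risks are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class FriaRisk:
    description: str
    affected_right: str
    severity: int      # 1 (minor) to 5 (severe)
    likelihood: int    # 1 (rare) to 5 (almost certain)
    mitigation: str

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

risks = [
    FriaRisk("Model scores applicants differently by age group",
             "Non-discrimination", severity=4, likelihood=3,
             mitigation="Bias testing before release; human review of rejections"),
    FriaRisk("Training data contains unnecessary personal data",
             "Privacy and data protection", severity=3, likelihood=2,
             mitigation="Data minimisation and retention limits"),
]

# Document the assessment, highest-scoring risks first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.description} -> {risk.mitigation}")
```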

Maintain Any & All Records

When it comes to compliance, record-keeping is essential, and the upcoming AI Act requires businesses that use high-risk AI technology to keep their records up to date.

The Act requires businesses to automatically record events that are relevant for identifying national-level risks throughout the AI system's entire lifecycle.

This means keeping all conformity assessment results, steps you've taken for compliance, and records detailing the functionalities of your AI system.
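
As a rough illustration of automatic event recording, the sketch below appends timestamped lifecycle events to an audit log. The event names, fields and JSON Lines format are assumptions for demonstration; the Act does not prescribe a specific logging format.

```python
# Minimal sketch of automatic event logging for an AI system's lifecycle.
# Event names, fields and the JSON Lines format are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_system_events.jsonl")

def record_event(event_type: str, details: dict) -> None:
    """Append a timestamped event so records stay available for audits."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "details": details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example entries spanning the system's lifecycle:
record_event("conformity_assessment", {"result": "passed", "assessor": "internal"})
record_event("model_update", {"version": "2.1.0", "change": "retrained on Q3 data"})
record_event("anomaly_detected", {"description": "spike in rejected applications"})
```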

Assign Responsibility

Article 14 of the Act requires that high-risk AI systems be developed with effective human oversight so that risks to the fundamental rights of persons are prevented or minimized, keeping responsibility with the businesses that use them.

Some high-risk AI systems will also need their outputs verified and confirmed by at least two people to remain compliant with the Act.

In addition, Article 25 requires businesses using high-risk AI systems to appoint an authorized representative who assumes responsibility for the system and cooperates with competent authorities.

Maintain Appropriate Levels of Accuracy and Cybersecurity

Your business should ensure that its high-risk AI system was developed to maintain appropriate levels of accuracy, robustness and cybersecurity throughout its entire lifecycle, so that data privacy is maintained and fundamental rights are not violated.

This is especially important if your business is dealing with sensitive personal information.

Article 15 states that the AI system's levels of accuracy must be declared in its instructions of use, and that the system must include fail-safe plans.

EU AI Act Compliance for Limited Risk AI

While limited-risk AI systems pose less danger, the EU AI Act still sets out some requirements that businesses need to comply with.

Limited-risk AI systems also include general-purpose and generative AI. This is the kind of AI most people encounter daily, typically when talking with chatbots. Most of the requirements for these systems revolve around transparency.

Establish Transparency Obligations

Businesses that use limited-risk AI systems, like chatbots (for example, ChatGPT) or systems that generate or manipulate images, need to establish transparency obligations.

These obligations should inform users that they are interacting with AI systems, how the AI system works, and what types of data are used to train and operate the system.

Inform and Obtain Consent

The EU AI Act requires businesses that use limited-risk AI systems to not only inform users or customers that they are interacting with AI systems but also give them the option to consent before continuing to use them.

When obtaining consent, your business needs to be clear and open about what consent covers, as well as how it can be withdrawn.
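
To make this flow concrete, here's a simplified sketch of a chatbot session that discloses the use of AI, records the user's consent and honours a withdrawal request. The wording, in-memory storage and function names are illustrative assumptions, not a prescribed implementation.

```python
# Simplified sketch of an AI disclosure and consent step for a chatbot.
# The wording, storage approach and function names are illustrative assumptions.
consent_log: dict[str, dict] = {}   # in practice, persist this with a timestamp

DISCLOSURE = (
    "You are chatting with an AI assistant. Your messages are processed by an "
    "automated system to generate responses. You can withdraw consent at any time "
    "by typing 'stop'."
)

def start_session(user_id: str, user_accepts: bool) -> bool:
    """Show the disclosure, then record whether the user consented."""
    print(DISCLOSURE)
    consent_log[user_id] = {"consented": user_accepts}
    return user_accepts

def withdraw_consent(user_id: str) -> None:
    """Honour a withdrawal request and stop automated processing."""
    consent_log[user_id] = {"consented": False}
    print("Consent withdrawn. You will no longer interact with the AI assistant.")

if start_session("user-123", user_accepts=True):
    print("AI session started.")
withdraw_consent("user-123")
```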

EU AI Act Compliance for Minimal Risk AI

Unlike the other three categories, minimal-risk AI systems can be used freely under the EU AI Act because they pose little to no threat to people's rights or safety.

Minimal-risk AI systems are systems we encounter daily without even thinking about it. Common examples include games (mobile or console) and the spam filters that websites and applications use.

If your business is using minimal-risk AI, then there are no specific requirements in the EU AI Act that you need to comply with. Even so, you should always strive to ensure your AI systems are safe and do not harm a person's fundamental rights.

Penalties for Non-Compliance with the EU AI Act

Should your business be found non-compliant by the authorities enforcing the EU AI Act, it will likely be handed penalties, which could be costly.

The fine and penalty structure is divided into tiers depending on the severity of the breach. For example, using a prohibited AI system can set your business back by a staggering €35 million or 7% of the company's global annual turnover, whichever is higher. This is stricter than the GDPR's highest penalties.

Non-compliance with data governance and transparency requirements can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher. If your business is caught supplying incorrect or misleading information to the authorities, you can expect fines of up to €7.5 million or 1.5% of global annual turnover.
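
As a back-of-the-envelope illustration of how these tiers scale, the sketch below computes the maximum possible fine for a hypothetical company with €800 million in global annual turnover. Actual penalties depend on the circumstances regulators take into account.

```python
# Worked example of maximum fine exposure per tier, using the figures above.
# The €800 million turnover is a hypothetical company, not real data.
TIERS = {
    "prohibited_ai":         (35_000_000, 0.07),   # €35M or 7% of global turnover
    "governance_breach":     (15_000_000, 0.03),   # €15M or 3%
    "incorrect_information": (7_500_000, 0.015),   # €7.5M or 1.5%
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_turnover_eur)  # whichever is higher

turnover = 800_000_000  # hypothetical annual global turnover
for tier in TIERS:
    print(f"{tier}: up to €{max_fine(tier, turnover):,.0f}")
# For the prohibited tier, 7% of €800M (€56M) exceeds the €35M floor.
```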

The European Commission will consider all relevant circumstances of the business and the infraction before making their final decision on the penalty amount.

FAQs

Who will the EU AI Act apply to?

The AI Act applies to businesses, providers, and developers of AI systems located inside or outside the EU if their AI systems affect users in the EU.

Learn more about what to expect from the EU AI Act.

Is the EU AI Act finalized?

Not quite. The text of the EU AI Act has been agreed upon, but the rules still need to be formally approved, which is expected in April 2024.

Stay updated on the EU AI Act's effective date.

What is the EU AI Act for US businesses?

The EU AI Act will have far-reaching influence, including in the US. According to the Act, American businesses whose AI systems are used by or affect EU residents will need to comply as well.

The US has its own data privacy laws, which you can learn more about.

What are the final goals of the EU AI Act?

The EU AI Act was ultimately created to level the playing field by creating a single market for AI systems. This will provide improved control over how these systems are used, enhance trust in AI systems, and better protect the fundamental rights of EU citizens.

Before deploying AI systems, your business should conduct a Fundamental Rights Impact Assessment.

How Can Captain Compliance Help?

With the EU AI Act imposing stricter penalties for non-compliance, now is the time to start getting your business compliant to avoid paying hefty penalties. Whether you're using high-risk or limited-risk AI systems, it is a good idea to start preparing for when the Act is fully enforced.

You don't have to navigate the complex world of AI laws by yourself. Choose a global compliance service provider like Captain Compliance to help your business prepare for the world's first comprehensive AI regulation.

Get in touch with Captain Compliance today for a complimentary consultation so you can find out your next steps for effortless compliance.