12-12-2023

In the making: the world’s first Artificial Intelligence Act

Artificial Intelligence Act
The European Parliament and the Council have recently reached a provisional agreement on the draft Artificial Intelligence Act (AI Act), which was initially proposed in 2021 and is now expected to be adopted in early 2024. The Act is then expected to become fully enforceable within two years of its adoption. The regulation is a legal framework governing the sale and use of AI within the EU. Its purpose is to ensure the proper functioning of the EU single market by setting a consistent standard for AI technologies across EU member states. It is the world’s first comprehensive regulation of AI, and with it the EU is seeking to set a global standard for the AI industry.

Jurisdiction
The regulation applies to “AI systems that are placed on the market, put into service, or used in the EU,” which covers developers, deployers, and global vendors selling or making their systems available to users in the EU. However, the regulation does not cover AI systems developed exclusively for military, defense, and national security purposes, or those used for the sole purpose of research and innovation.

Regulation Mechanism
AI systems are regulated based on the level of risk they pose to the health, safety and fundamental rights of persons. Through a set of criteria and an assessment mechanism, AI systems are classified into different categories. Developers determine their AI system’s risk category themselves. Similarly, and with a few exceptions, developers may self-assess and self-certify the conformity of their AI systems and governance practices with the requirements described in the AI Act. They can do so either by adopting forthcoming standards or by justifying the equivalency of other technical solutions. Mis-categorizing an AI system and/or failing to comply with the respective provisions is subject to fines ranging from EUR 35 million or 7% of global turnover down to EUR 7.5 million or 1.5% of turnover, depending on the infringement and the size of the company. More proportionate caps on administrative fines apply in certain cases.

Different categories
Unacceptable Risk
AI systems in this category, set out in an exhaustive list, are prohibited outright. They include:

  1. Biometric categorization systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, race)
  2. Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  3. Emotion recognition in the workplace and educational institutions
  4. Social scoring based on social behavior or personal characteristics
  5. AI systems that manipulate human behavior to circumvent their free will
  6. AI used to exploit the vulnerabilities of people (due to their age, disabilities, social or economic situation)

Exemptions:

  • “Post” remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and strictly defined lists of crimes.
  • “Real-time” RBI, which is limited to the purposes of:
    • Prevention of a specific and present terrorist threat
    • Targeted searches of victims (abduction, trafficking, sexual exploitation)
    • Localization or identification of a person suspected of having committed one of a specific set of crimes (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crime)

High Risk
These systems will be carefully regulated: they must comply with certain mandatory requirements and undergo an ex-ante fundamental rights impact assessment. The requirements include using high-quality data sets, maintaining appropriate documentation and logging to increase traceability, providing sufficient information to users, and ensuring adequate human oversight. AI developers can choose how to meet these requirements, taking into account the state of the art and technological progress in the field; in practice, they may rely on internal controls, subject to ex-post supervision by the authorities.

High-risk systems fall into two categories:

  • System is a safety component or a product subject to existing safety standards and assessments, such as toys or medical devices
  • System is used for a specific sensitive purpose. Exhaustive list:
    • Biometrics
    • Critical infrastructure
    • Education and vocational training
    • Employment, workers management and access to self-employment
    • Access to essential services
    • Law enforcement
    • Migration, asylum and border control management
    • Administration of justice and democratic processes

Limited and Minimal Risk
Limited-risk systems are those with a limited potential for manipulation. They are subject to transparency obligations: people must be informed of the circumstances of use, and the content generation process must be disclosed. This covers any system that interacts with natural persons (such as emotion recognition or biometric categorization). All residual risks fall under “minimal risk,” which carries no specific obligations or duties, only a recommendation by the Commission to draw up codes of conduct.

General-purpose AI systems
Most general-purpose AI (GPAI) models are obligated to comply with transparency requirements and to ensure other safeguarding mechanisms, such as providing detailed summaries of the content used and of the workings of the system itself. Furthermore, high-impact GPAI models with systemic risk are subject to stricter rules: they must conduct model evaluations and adversarial testing, assess and mitigate systemic risks, ensure cybersecurity, and report on their energy efficiency.

Governance
An AI Office will be set up to oversee AI models within the Union. It will foster standards and testing practices and enforce the common rules across all EU member states. Additionally, an AI Board, comprising representatives from each EU member state, will function as a coordination platform and advisory body to the Commission. Finally, individuals may submit complaints to the relevant market surveillance authority concerning non-compliance with the AI Act and will receive decisions based on the impact on their rights.

By Yuma Amano and Paul A. Josephus Jitta.

Key contacts

Paul Josephus Jitta

Partner | Lawyer
+31 (0)20 333 8395
