European Union Artificial Intelligence Act Enters into Force

Overview

Artificial intelligence (“AI”) systems, which have recently become increasingly widespread in almost every sector, are machine-based systems designed to operate with varying levels of autonomy and which may demonstrate adaptability after deployment. AI systems process inputs to achieve explicit or implicit goals and produce outputs such as predictions, content, recommendations or decisions that can affect physical or virtual environments.

The European Union (“EU”) Artificial Intelligence Act (“AI Act”), the world’s first comprehensive legislation aimed at regulating the sector, was adopted by the Council of the EU on May 21, 2024 after long periods of negotiation and legislative preparation, and entered into force on August 1, 2024 (as a regulation, it applies directly in all EU member states). Transition periods are staggered according to the associated risk levels, with most provisions applying from August 2, 2026. AI practices presenting an “unacceptable risk” are banned from 6 months after entry into force (February 2, 2025), while certain obligations for “General Purpose Artificial Intelligence” (“GPAI”) models, as defined below, apply after 12 months (August 2, 2025). Obligations for high-risk AI systems embedded in products subject to existing EU harmonization legislation will apply from August 2, 2027.

The AI Act covers all stages of AI systems, from development to use, and aims to ensure AI is a regulated and trusted mechanism and to encourage investment and innovation in the field. Within the framework of the AI value chain[1], it applies to providers (developers and companies instructing development), importers, distributors, manufacturers and deployers of certain types of artificial intelligence systems.

The AI Act’s obligations vary according to the roles played in the AI value chain, with a focus on providers, deployers, importers, distributors and manufacturers of AI systems. Depending on the level of threat posed to humans, AI systems are categorized as presenting “unacceptable risk”, “high risk”, “limited risk” or “minimal risk”. Violations are subject to fines at the following scales (whichever is higher in each case): (i) €35,000,000 or 7% of worldwide annual turnover for prohibited AI practices; (ii) €15,000,000 or 3% of worldwide annual turnover for breaches of most other obligations; and (iii) €7,500,000 or 1% of worldwide annual turnover for supplying incorrect, incomplete or misleading information.

Who are the actors under the new Act?

Obligations under the new law will differ according to the following roles in the “AI value chain”, defined by the Act as “operators”, with most falling on providers of AI systems:

Provider: A natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model, or that has an AI system or a general purpose AI model developed, and places it on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.
Deployer: An entity which uses an AI system for a professional purpose or activity (personal, non-professional users are not deployers).
Importer: Any natural or legal person located or established in the EU that places on the EU market an AI system bearing the name or trademark of a natural or legal person established outside the EU.
Distributor: Any natural or legal person in the supply chain, other than the provider or the importer, who makes an AI system available on the EU market.

For effective implementation of the AI Act’s obligations, the European Commission will delegate powers to the AI Office to be established, which will take all actions necessary for that purpose (without prejudice to arrangements between EU Member States and the EU).

Are only EU companies affected?

The AI Act has extra-territorial reach similar to the European Union’s General Data Protection Regulation (“GDPR”) and will apply to:

  • Businesses supplying AI systems within the EU (whether or not they are established in the EU); and
  • Businesses located outside the EU whose AI systems produce output that is used within the EU.

The AI Act imposes direct obligations on providers (developers of AI systems), deployers, importers, distributors and manufacturers of AI systems in connection with the EU market, but also applies extraterritorially to non-EU companies. Importers, distributors and manufacturers serving the EU market will be affected, and the Act applies in particular to:

  1. providers placing AI systems or GPAI models on the EU market, or putting AI systems into service in the EU, regardless of their location;
  2. deployers of AI systems that have a place of establishment or are located in the EU; and
  3. providers and deployers of AI systems established or located in non-EU countries where the output produced by the AI system is used in the EU.

What are the risk levels?

In order to implement the Act’s provisions proportionately and effectively, a risk-based approach has been adopted under which AI systems have been categorized according to the scope of the risks each may pose.

Systems that present unacceptable risk
  • Description: Generally, the AI Act bans certain AI practices across the EU which it considers harmful, abusive and contrary to EU values.[2]
  • Example: AI practices that deliberately use manipulative or deceptive techniques with the purpose or effect of significantly distorting human behaviour will be prohibited.
Systems that present high risk
  • Description: Systems that present a “high risk” fall into two categories:

i. AI systems used as a safety component of a product (or otherwise subject to EU health and safety harmonization legislation); and

ii. AI systems used in areas such as education, employment, access to essential public and private services, law enforcement, migration and the administration of justice.[3]

  • Foreseeable Obligation: The AI Act prescribes varying obligations for actors in high-risk AI systems in greater detail than for other categories. These are accordingly analysed under a separate heading below.
Systems that present limited risk
  • Description: AI systems that are not considered high-risk, do not pose a serious threat to the health, safety and fundamental rights of individuals, and do not affect decision-making processes will be considered limited risk.
  • Foreseeable Obligation: For AI systems considered limited risk, providers and deployers must ensure transparency by disclosing that the interaction, content, decision or output is generated by AI.
Systems that present minimal risk
  • Description: AI systems not covered by any of the above risk categories are not subject to the AI Act, though their providers are encouraged to adopt voluntary AI codes of conduct.
  • Foreseeable Obligation: Many AI systems will be considered “minimal risk” and not subject to significant obligations under the AI Act itself though existing regulatory frameworks (such as data protection, employment, financial services, and competition) continue to apply.

Obligations for High-Risk AI Systems

The AI Act imposes a wide range of obligations on providers of AI systems deemed to be “high risk”, including risk assessments; maintaining governance and documentation; public registration and conformity assessments; and prevention of unauthorized modifications.

Deployers are required to comply with significantly more limited obligations such as implementation of technical and organizational measures to comply with usage restrictions and ensuring competent human oversight.

High-risk AI systems must also undergo a conformity assessment before being placed on the market or put into service. Providers must additionally establish a post-market monitoring system and report serious incidents to market surveillance authorities.

They are also subject to comprehensive compliance requirements covering seven main areas:

  1. Risk management systems. These must be “established, implemented, documented and maintained”. Such systems must be developed iteratively and planned and run throughout the system’s entire lifecycle while subject to systematic review and updates.
  2. Data governance and management practices. Training, validation and testing data sets for high-risk AI systems must be subject to appropriate data governance and management practices. The AI Act sets out specific practices which must be considered, including data collection processes and appropriate measures to detect, prevent and mitigate biases.
  3. Technical documentation. Relevant technical documentation must be drawn up before a high-risk AI system is placed on the market or “put into service” (i.e. installed for use) and must be kept up to date.
  4. Record keeping/logging. High-risk AI systems must “technically allow for” the automatic recording of logs over the lifetime of the system to ensure system functioning maintains an appropriate level of traceability for its intended purpose.
  5. Transparency / provision of information to deployers. High-risk AI systems must be designed and developed to ensure sufficient transparency to allow deployers to interpret output and utilise it appropriately. Instructions for use must be provided in a format appropriate for the AI system.
  6. Human oversight. “Natural persons” must oversee high-risk AI systems with a view to preventing or minimising risks to health, safety or fundamental rights that may arise from their use.
  7. Accuracy, robustness and cybersecurity. High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness and cybersecurity and to perform consistently at these levels throughout their lifecycle.

Providers based outside the EU are required to appoint an authorized representative based in the EU.

Providers must ensure their high-risk AI systems comply with the requirements set out above and must demonstrate compliance, providing reasonably requested information, upon request by a national competent regulator. Providers and deployers of high-risk AI systems must also:

Provider:
  • Undertake a conformity assessment, draw up a declaration of conformity, and affix a CE marking (each as prescribed under the AI Act) for the AI system prior to rollout.
  • Include their name, registered trade name, or registered trademark and address on the high-risk AI system itself or, where this is not possible, on the packaging.
  • Retain defined categories of information regarding the AI system for at least ten years after it is placed on the market or put into service in the EU, to allow for regulatory review.
  • Retain any automatic logs within its control for at least six months.
  • Register with the EU AI database.
  • Implement a quality management system to ensure compliance with the AI Act.
  • “Immediately take necessary corrective action” to rectify any non-conformity of the AI system with the AI Act and inform distributors, importers, deployers and any authorised representatives accordingly.
  • Ensure that the AI system complies with certain EU disability accessibility regulations.
Deployer:
  • Take appropriate technical and organisational measures to ensure that the AI system is used in accordance with instructions for use.
  • To the extent the deployer exercises control over the high-risk AI system, ensure human oversight by competent, trained and supported personnel.
  • To the extent the deployer exercises control over the high-risk AI system, ensure input data is relevant and sufficiently representative of the AI system’s intended purpose.
  • Comply with monitoring and record-keeping obligations.
  • Notify relevant employees and employee representatives that they will be subject to a high-risk AI system before putting it into use in the workplace.

General Purpose Artificial Intelligence (GPAI) Models

The AI Act sets out a specific section for the classification and regulation of GPAI models which are defined as “AI model(s) that exhibit significant generality, including when trained with large amounts of data using large-scale self-supervision, and capable of competently performing a wide range of tasks, regardless of how the model is released, and which can be integrated into various downstream systems or applications (with the exception of AI models used for research, development or prototyping activities prior to release)”.

Though GPAI models are essential components of AI systems, they do not constitute AI systems on a standalone basis. They may be placed on the market in various forms, including through libraries, application programming interfaces (APIs), direct download or physical copy.

All GPAI models[4]:
  • Technical information. Preparation and maintenance of technical documentation that can be provided to the competent regulatory authorities on request (including information regarding the model’s known or estimated energy consumption).
  • Compliance information for AI system providers. Make technical documentation, and any other information required to enable “a good understanding of the model’s capabilities and limitations” and “to comply with their obligations” under the AI Act, available to system providers intending to use the GPAI model.
  • Copyright protection. Implementation of a policy to ensure compliance with EU legislation on copyright and related rights.
  • Summary of training data. Publication of a sufficiently detailed summary of the data used to train the GPAI model in a predetermined template form (to be released by the newly established AI Office).
GPAI models posing a “systemic risk”[5]:
  • EU Commission notification. Notification to the EU Commission without delay, and in any event within two weeks, where the GPAI model meets the conditions for classification as presenting a systemic risk.
  • Risk mitigation evaluation. Conducting model evaluations, including adversarial testing, in order to identify and mitigate systemic risks.
  • Additional risk mitigation steps. Assessment and mitigation of possible systemic risks at EU level.
  • Serious incidents. Tracking, documenting and reporting serious incidents without undue delay to the new AI Office and, where appropriate, to national competent regulators.
  • Cybersecurity. Ensuring cybersecurity protection for the GPAI model and its physical infrastructure.

Fines

Significant fines will be imposed on businesses which contravene the new legislation. Similar to the GDPR, these will be capped at a percentage of the previous financial year’s global annual turnover or a fixed amount (whichever is higher):

  • €35 million or 7% of global annual turnover for non-compliance with prohibited AI system rules;
  • €15 million or 3% of global annual turnover for violations of other obligations; and
  • €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete or misleading information required under the AI Act.
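
For illustration only, the short Python sketch below shows how the “whichever is higher” cap works for each tier. The tier figures are those listed above; the function name, tier labels and the sample turnover figure are hypothetical assumptions, not an official calculation method.

    # A minimal, purely illustrative sketch of the AI Act's "whichever is
    # higher" fine caps. Tier figures come from the Act; everything else
    # (names, labels, sample turnover) is a hypothetical assumption.
    FINE_TIERS = {
        "prohibited_practices": (35_000_000, 0.07),    # €35M or 7% of turnover
        "other_obligations": (15_000_000, 0.03),       # €15M or 3% of turnover
        "misleading_information": (7_500_000, 0.01),   # €7.5M or 1% of turnover
    }

    def max_fine(tier: str, global_annual_turnover: float) -> float:
        # Return the cap for a tier: the fixed amount or the turnover
        # percentage, whichever is higher.
        fixed_amount, pct = FINE_TIERS[tier]
        return max(fixed_amount, pct * global_annual_turnover)

    # Example: a company with €1 billion global annual turnover.
    # 7% of €1bn = €70M, which exceeds the €35M fixed amount.
    print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0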

The AI Act requires that penalties be effective, proportionate and dissuasive. Accordingly, the circumstances and economic viability of SMEs and start-ups will be taken into account.

[1] The chain of activities and actors involved in the development, distribution and use of an AI product or service.

[2] For example, AI systems presenting unacceptable risks to ethical values, such as emotion recognition systems used in the workplace or education, or inappropriate use of social scoring, are prohibited.

[3] By way of exception, systems whose intended use is limited to (i) performing narrow procedural tasks; (ii) improving the results of previously completed human activities; (iii) detecting decision-making patterns or deviations from previous decision-making patterns without replacing or influencing human judgment; or (iv) performing only preparatory tasks for risk assessment are not considered high-risk.

[4] The general transparency requirements, as they apply to all GPAI models, impose various obligations on GPAI model providers.

[5] GPAI models will be considered to pose a “systemic risk” where they have “high-impact capabilities” along the AI value chain (presumed where the cumulative compute used for training exceeds 10^25 floating-point operations) or are designated as such by the European Commission. These models are subject to obligations in addition to the transparency requirements above.

For detailed information, you may reach us:

ÖZGÜR AYDIN

CAN AYDİŞ
