Artificial Intelligence Act  
2021/0106(COD) - 21/04/2021  

PURPOSE: to lay down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with EU values (Artificial Intelligence Act).

PROPOSED ACT: Regulation of the European Parliament and of the Council.

ROLE OF THE EUROPEAN PARLIAMENT: the European Parliament decides in accordance with the ordinary legislative procedure and on an equal footing with the Council.

BACKGROUND: faced with the rapid technological development of AI and a global policy context in which more and more countries are investing heavily in AI, the EU must act as one to address the challenges AI presents. It is in the EU's interest to be a world leader in the development of human-centred, sustainable, safe, ethical and trustworthy artificial intelligence.

Some Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the EU should therefore be ensured.

Following on from the White Paper on AI - "A European Approach to Excellence and Trust", the legislative proposal aims to ensure a high and consistent level of protection across the EU.

The European Parliament resolution on a framework for ethical aspects of artificial intelligence, robotics and related technologies specifically recommends that the Commission propose legislative measures to exploit the opportunities and benefits of AI, but also to ensure the protection of ethical principles.

CONTENT: against this background, the Commission presents the proposed regulatory framework on Artificial Intelligence with the following specific objectives:

- ensure that AI systems placed on the Union market and used in the Union are safe and respect existing law on fundamental rights and Union values;

- ensure legal certainty to facilitate investment and innovation in AI;

- enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

In order to achieve these objectives, the proposal lays down the following:

Harmonised risk-based approach

The proposal sets harmonised rules for the development, placement on the market and use of AI systems in the Union following a proportionate risk-based approach. It proposes a single future-proof definition of AI.

The risk-based approach differentiates between uses of AI that create:

Unacceptable risk

AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (e.g. toys using voice assistance to encourage dangerous behaviour by minors) and systems that allow 'social scoring' by governments.

Specific restrictions and safeguards are proposed in relation to certain uses of remote biometric identification systems for the purpose of law enforcement.

High-risk

AI systems identified as high-risk include AI technology used in, inter alia:

- critical infrastructures (e.g. transport), which could put the life and health of citizens at risk;

- educational or vocational training, which may determine access to education and the professional course of someone's life (e.g. scoring of exams);

- safety components of products (e.g. AI application in robot-assisted surgery);

- law enforcement, which may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);

- migration, asylum and border control management (e.g. verification of authenticity of travel documents).

The proposal sets out the legal requirements for high-risk AI systems in relation to data and data governance, documentation and record-keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security.

Low-risk

This proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens' rights or safety.

Governance

The Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. In addition, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.

Market monitoring and surveillance

The Commission will be in charge of monitoring the effects of the proposal. It will establish a system for registering stand-alone high-risk AI applications in a public EU-wide database. This registration will also enable competent authorities, users and other interested parties to verify whether a high-risk AI system complies with the requirements laid down in the proposal and to exercise enhanced oversight over those AI systems posing high risks to fundamental rights.

Moreover, AI providers will be obliged to inform national competent authorities about serious incidents or malfunctions that constitute a breach of fundamental rights obligations as soon as they become aware of them, as well as about any recalls or withdrawals of AI systems from the market.

The Commission will publish a report evaluating and reviewing the proposed AI framework five years following the date on which it becomes applicable.

Budgetary implications

Member States will have to designate supervisory authorities in charge of implementing the legislative requirements. Their supervisory function could build on existing arrangements, for example regarding conformity assessment bodies or market surveillance, but would require sufficient technological expertise and human and financial resources.