extract from: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Regulatory framework proposal on artificial intelligence

The Commission is proposing the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).

The proposal is part of a wider AI package, which also includes the updated Coordinated Plan on AI. Together, the regulatory framework and the Coordinated Plan will guarantee the safety and fundamental rights of people and businesses when it comes to AI, and will strengthen uptake, investment and innovation in AI across the EU.

Why do we need rules on AI?

The proposed AI regulation aims to give Europeans confidence in what AI has to offer. While most AI systems pose little or no risk and can help solve societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes.

One example of the problem is that it is often impossible to find out why an AI system made a particular decision or prediction, or why it took a particular action. That opacity can make it hard to assess whether someone has been treated unfairly, for instance in a hiring decision or in an application for a public benefit scheme.

While existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

The proposed rules will:

  • address risks specifically created by AI applications;
  • propose a list of high-risk applications;
  • set clear requirements for AI systems used in high-risk applications;
  • define specific obligations for users and providers of high-risk AI applications;
  • propose a conformity assessment before the AI system is put into service or placed on the market;
  • propose enforcement after such an AI system is placed on the market;
  • propose a governance structure at European and national level.


The Regulatory Framework defines 4 levels of risk in AI:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal or no risk
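
To make the tiering concrete, the sketch below models the four levels and the consequence each carries. It is purely illustrative: the example systems and the one-line summaries of the consequences are this sketch's own shorthand, drawn from the examples in this text, not classifications made by the regulation itself.

```python
from enum import Enum

class RiskLevel(Enum):
    """Illustrative model of the four risk tiers described above."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations, conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # free use

# Hypothetical mapping based on the examples given in this text;
# actual classification is defined by the regulation, not this sketch.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskLevel.UNACCEPTABLE,
    "CV-sorting recruitment software": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

CONSEQUENCES = {
    RiskLevel.UNACCEPTABLE: "banned",
    RiskLevel.HIGH: "conformity assessment before market placement",
    RiskLevel.LIMITED: "must disclose that the user is interacting with AI",
    RiskLevel.MINIMAL: "no new obligations",
}

for system, level in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {level.value} risk -> {CONSEQUENCES[level]}")
```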


Unacceptable risk
All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.

High risk
AI systems identified as high-risk include AI technology used in:

  • critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • educational or vocational training, that may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
  • safety components of products (e.g. AI application in robot-assisted surgery);
  • employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
  • law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • adequate risk assessment and mitigation systems;
  • high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • logging of activity to ensure traceability of results;
  • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • clear and adequate information to the user;
  • appropriate human oversight measures to minimise risk;
  • high level of robustness, security and accuracy.
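
As a rough illustration only, a provider might track these obligations internally as a simple checklist, as in the sketch below. The field names and the ready_for_market gate are assumptions of this sketch; the proposal prescribes a conformity assessment, not any particular data structure or code.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskObligations:
    # One flag per obligation listed above; the names are this
    # sketch's own shorthand, not terms defined by the proposal.
    risk_assessment_and_mitigation: bool = False
    high_quality_datasets: bool = False
    activity_logging: bool = False
    detailed_documentation: bool = False
    clear_user_information: bool = False
    human_oversight: bool = False
    robustness_security_accuracy: bool = False

def ready_for_market(o: HighRiskObligations) -> bool:
    # Every obligation must be met before the system can be
    # placed on the market; any open item blocks entry.
    return all(getattr(o, f.name) for f in fields(o))

status = HighRiskObligations(activity_logging=True, human_oversight=True)
outstanding = [f.name for f in fields(status) if not getattr(status, f.name)]
print(f"Ready for market: {ready_for_market(status)}; outstanding: {outstanding}")
```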

All remote biometric identification systems are considered high risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.

Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.

Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk
Limited risk refers to AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.
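
A minimal sketch of how such a disclosure could be wired into a chatbot, assuming a hypothetical generate_answer stand-in for the actual model (not a real API):

```python
def generate_answer(message: str) -> str:
    # Stand-in for an actual model; returns a canned reply.
    return f"Here is some information about '{message}'."

def chatbot_reply(message: str, first_turn: bool = True) -> str:
    # Disclose machine interaction up front, so the user can make an
    # informed decision to continue or step back.
    disclosure = "Note: you are interacting with an automated system.\n"
    reply = generate_answer(message)
    return disclosure + reply if first_turn else reply

print(chatbot_reply("opening hours"))
```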

Minimal or no risk
The proposal allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.



The AI Act promises a proportionate risk-based approach that imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. Targeting specific sectors and applications, the regulation has shifted from the binary low-risk vs. high-risk framework proposed in the Commission’s White Paper on AI to a four-tiered risk framework:

  • ‘unacceptable risks’ that lead to prohibited practices;
  • ‘high risks’ that trigger a set of stringent obligations, including conducting a conformity assessment;
  • ‘limited risks’ with associated transparency obligations; and
  • ‘minimal risks’, where stakeholders are encouraged to build codes of conduct, irrespective of whether they are established in the EU or a third country.

Which systems count as carrying high or unacceptable levels of risk (for example, systems used in social scoring, or those that interact with children in the context of personal development or education, respectively) will be one key issue under consideration by the Parliament and the Council. The hope is that this approach will limit regulatory oversight to only sensitive AI systems, resulting in fewer restrictions on the trade and use of AI within the single market.