On April 21, 2021, the European Commission (EC) published a proposal for a European Regulation on Artificial Intelligence (AI). The document establishes a new, risk-based legal framework for businesses and details the measures proposed to protect citizens when AI-based technology is used. It also affirms Europe's ambition to unlock the potential of artificial intelligence.

Since the first discussions within the European institutions in 2017, artificial intelligence has been seen as both an opportunity and a risk. The work leading to this proposed Regulation was launched to establish a European approach to AI aimed at "ensuring a high level of data protection, digital rights, and ethical standards." The draft Regulation is thus the result of a long period of consultation within the European institutions, and it still needs to be approved by the European Parliament and the Council of the EU before it can be definitively adopted. What will change in practice? What should be anticipated now?

 

AI at the European Level – Challenges and Threats

The European Commission's goal is to "make Europe the global hub for trustworthy Artificial Intelligence (AI)." Behind this ambition are two major challenges, one external and the other internal:

  • Strengthening Europe's competitiveness against other digital powers: the United States and China.
  • Liberalizing data while ensuring the security and fundamental rights of citizens and businesses.

According to the Commission, one cannot be achieved without the other. This vision is shared by Thierry Breton, European Commissioner for the Internal Market, who sees a "common data market" as an opportunity to make the EU a leading power in AI. Mobility and energy are key target sectors, because large-scale data sharing would put AI at the heart of these future challenges.

 

A Risk-Based Approach with Three Levels of Risk

This broad ambition, however, is accompanied by a defined legal framework. In this proposed Regulation, the Commission adopts a risk-based approach and sets out different obligations for three levels of risk:

 

1/ Models Considered as “Unacceptable Risk”

Models classified as "Unacceptable Risk" are prohibited. This covers models that "contradict the values of the Union," such as subliminal manipulation, the exploitation of individuals' vulnerabilities, or the establishment of a social scoring system.

 

2/ Models Considered as “High Risk”

Models deemed "High Risk," "which pose a high risk to health, safety, or fundamental rights of individuals," are subject to four specific obligations:

  • The data that feeds them must be of high quality.
  • The models must be documented and traceable.
  • Human oversight must be applied to the results produced by these models.
  • These models must demonstrate precision and robustness in their results.

 

3/ Models Considered as “Limited Risk”

Other models, which do not fall into the first two categories, are considered "Limited Risk" and are not subject to specific obligations. However, the Commission encourages the development of codes of conduct, inspired by the obligations above, which could be applied at the company or industry level.

Finally, a transparency obligation would apply to all models that interact directly with a human user, analyze emotions, assign social categories, or generate, manipulate, or alter content. This includes deepfakes, candidate-screening algorithms, and so on. The use of these models must be explicitly disclosed and explained to the user, and the user may request that a result produced by the model be reviewed by a human.

 

Our Recommendations

In light of this proposed Regulation, we recommend that all businesses already using AI begin a risk analysis to identify the models likely to be targeted by the legislation. This analysis should produce a roadmap for bringing existing models into compliance, and for establishing governance to deploy new models and maintain compliance over time.
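As a starting point for such a risk analysis, the three tiers described above can be mapped onto an internal model inventory. The sketch below is purely illustrative: the record fields and flag names are our own assumptions for the example, not terms defined by the proposed Regulation, and a real assessment would of course require legal review.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Illustrative inventory entry for one AI model (fields are assumptions)."""
    name: str
    social_scoring: bool = False            # a practice the proposal prohibits outright
    affects_fundamental_rights: bool = False  # e.g. hiring, credit, law enforcement

def triage(model: ModelRecord) -> str:
    """Map a model onto the proposal's three risk tiers (simplified sketch)."""
    if model.social_scoring:
        # "Unacceptable Risk": prohibited
        return "unacceptable"
    if model.affects_fundamental_rights:
        # "High Risk": data quality, documentation/traceability,
        # human oversight, precision and robustness obligations
        return "high"
    # "Limited Risk": no specific obligations; codes of conduct encouraged
    return "limited"

inventory = [
    ModelRecord("cv-screening", affects_fundamental_rights=True),
    ModelRecord("product-recommender"),
]
for m in inventory:
    print(m.name, triage(m))
```

A first pass like this helps scope the compliance roadmap: every model landing in the "high" tier would need the four obligations listed above addressed before the Regulation takes effect.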

For other businesses, now appears to be the time to invest in AI: to seize the opportunities this European-level push will create, and to be in a position, in the medium term, to participate in the common data market. This investment should also be seen as an opportunity to take the EU's obligations into account from the design stage, since they are ultimately best practices for the development of AI.