
Fundamental rights impact assessment regarding the use of AI systems

7 June 2023, 14:00 - poklaszlo

In mid-May, a new version of the draft AI Act was published in the form of a compromise text adopted by the European Parliament's committees (IMCO, LIBE). The European Parliament's plenary will vote on the draft in mid-June, after which negotiations with the Council may start on the AI Act.

The compromise text that is to be discussed by the European Parliament contains a number of novelties compared to the original proposal of the Commission (and also compared to the text of the Council's general approach). In this post, I will briefly discuss one of the new elements introduced in the compromise text, the so-called fundamental rights impact assessment (FRIA) for high-risk AI systems.

1. What is included in the compromise text regarding FRIA?

Based on the proposal (see Article 29a of the compromise text), "prior to putting a high-risk AI system as defined in Article 6(2) into use, with the exception of AI systems intended to be used in area 2 of Annex III, deployers shall conduct an assessment of the systems’ impact in the specific context of use."

The proposal imposes the obligation of carrying out a FRIA on the deployer of the AI system. The term "deployer" is proposed in the compromise text instead of the term "user" in the original Commission's proposal. The term "deployer" is defined as "any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity". (See Article 3(4). The term "deployer" was originally used by the Commission in the White Paper on AI to refer to the user of the AI system.)

The fundamental rights impact assessment should be carried out prior to putting a high-risk AI system into use for the first time, and the deployer may, in similar cases, rely on a previously conducted fundamental rights impact assessment or an existing assessment carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that the criteria assessed in the course of the FRIA are no longer met, it shall conduct a new fundamental rights impact assessment.

2. What types of AI systems would be subject to FRIA?

Based on the compromise text, a FRIA shall be carried out for AI systems classified as high-risk according to Article 6(2) of the AI Act (Annex III to the AI Act lists the areas and use cases in which AI systems are considered high-risk), except for high-risk AI systems used in the management and operation of critical infrastructure.
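As a rough illustration, this scope rule boils down to a simple predicate. The sketch below is mine, not the proposal's: the parameter names are invented, and the only facts taken from the text are the Article 6(2) classification and the exemption for area 2 of Annex III (critical infrastructure).

```python
def fria_required(is_high_risk: bool, annex_iii_area: int) -> bool:
    """Illustrative only: a FRIA is required for high-risk AI systems under
    Article 6(2), except those used in the management and operation of
    critical infrastructure (area 2 of Annex III)."""
    return is_high_risk and annex_iii_area != 2

# Example: a high-risk system in area 2 (critical infrastructure) is exempt.
assert fria_required(is_high_risk=True, annex_iii_area=2) is False
assert fria_required(is_high_risk=True, annex_iii_area=4) is True
```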

3. What elements should be considered in the FRIA?

According to the current text proposal, the following elements must be included in the impact assessment (see the sketch after this list):

  • a clear outline of the intended purpose for which the system will be used;
  • a clear outline of the intended geographic and temporal scope of the system’s use;
  • categories of natural persons and groups likely to be affected by the use of the system;
  • verification that the use of the system is compliant with relevant Union and national law on fundamental rights;
  • the reasonably foreseeable impact on fundamental rights of putting the high-risk AI system into use;
  • specific risks of harm likely to impact marginalised persons or vulnerable groups;
  • the reasonably foreseeable adverse impact of the use of the system on the environment;
  • a detailed plan as to how the harms and the negative impact on fundamental rights identified will be mitigated;
  • the governance system the deployer will put in place, including human oversight, complaint-handling and redress.

(It is important to emphasise again that the compromise text may change significantly before the final adoption of the AI Act; however, it is nevertheless interesting to see what elements should be covered in a FRIA according to this proposal.) 
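To make the required content more tangible, the elements above can be captured as a simple structured record that a deployer might keep as part of its documentation. This is a minimal sketch under my own naming; the class and field names paraphrase Article 29a and are not terminology from the proposal (Python 3.10+).

```python
from dataclasses import dataclass

# Illustrative paraphrase of the Article 29a elements; not official terminology.
@dataclass
class FRIARecord:
    intended_purpose: str              # clear outline of the intended purpose
    geographic_temporal_scope: str     # intended geographic and temporal scope of use
    affected_groups: list[str]         # categories of persons and groups likely affected
    legal_compliance_verified: bool    # compliance with Union and national fundamental-rights law
    foreseeable_rights_impact: str     # reasonably foreseeable impact on fundamental rights
    risks_to_vulnerable_groups: str    # specific risks to marginalised or vulnerable groups
    environmental_impact: str          # reasonably foreseeable adverse environmental impact
    mitigation_plan: str | None        # detailed mitigation plan; None if none could be identified
    governance_system: str             # human oversight, complaint-handling and redress
```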

4. How should the FRIA be carried out?

Of course, the proposal does not provide detailed methodological guidance on conducting a FRIA, but it does reveal the following aspects (a rough sketch of the resulting decision flow follows this list):

  • it lists elements that must be included in the impact assessment (see point 3 above);
  • in the course of the impact assessment, the deployer, with the exception of SMEs, shall notify the national supervisory authority and relevant stakeholders and shall, to the best extent possible, involve representatives of the persons or groups of persons that are likely to be affected by the high-risk AI system (such stakeholders include, but are not limited to, equality bodies, consumer protection agencies, social partners and data protection agencies) with a view to receiving input into the impact assessment;
  • a detailed plan to mitigate the risks shall be outlined in the course of the assessment; if such a plan cannot be identified, the deployer shall refrain from putting the high-risk AI system into use and inform the provider and the national supervisory authority without undue delay;
  • if the deployer is also required to carry out a data protection impact assessment (DPIA), the FRIA shall be conducted in conjunction with the DPIA and the DPIA shall be published as an addendum.
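Taken together, these points imply a rough decision flow for the deployer, sketched below. All names are illustrative stand-ins of my own; the substance of the assessment itself is deliberately left abstract (Python 3.10+).

```python
from dataclasses import dataclass

# Stakeholders named in the proposal as examples to be involved.
STAKEHOLDERS = [
    "national supervisory authority", "equality bodies",
    "consumer protection agencies", "social partners",
    "data protection agencies",
]

@dataclass
class Assessment:
    mitigation_plan: str | None        # None: no viable mitigation plan identified
    dpia_addendum: str | None = None   # filled in when a DPIA is also required

def run_fria(is_sme: bool, assessment: Assessment,
             dpia_required: bool) -> Assessment | None:
    """Rough decision flow distilled from Article 29a; illustrative only."""
    if not is_sme:
        # Notification and consultation step; SMEs are exempted in the proposal.
        print("Notifying and involving:", ", ".join(STAKEHOLDERS))
    if assessment.mitigation_plan is None:
        # No viable plan: refrain from putting the system into use and inform
        # the provider and the national supervisory authority without delay.
        print("Informing the provider and the national supervisory authority")
        return None
    if dpia_required:
        # FRIA conducted in conjunction with the DPIA; the DPIA is published
        # as an addendum to the FRIA.
        assessment.dpia_addendum = "DPIA (published as an addendum to the FRIA)"
    return assessment
```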

As regards the methodology of the fundamental rights impact assessment, the text of the proposal provides little guidance. At the same time, the idea of a fundamental rights impact assessment (or human rights impact assessment) and possible methodological approaches to such assessments have long been available in the academic literature and, partly, in practice. These could serve as a good starting point for future impact assessments of high-risk AI systems. Some examples of the currently available concepts and sample documentation (with release dates) are:

Further useful tools are available on the OECD's AI-related Policy Observatory page, where under the item "Tools & Metrics", an increasing number of fundamental rights impact assessment tools can also be found.

It is also worth studying the EU Fundamental Rights Agency's paper on AI and fundamental rights (2021). For a broader framework on respecting fundamental rights in the course of business activities, the UN Guiding Principles on Business and Human Rights (2011) may serve as a starting point; its Pillar II deals specifically with the responsibility of businesses to respect human rights.

(Of course, the above are only a few examples from an ever-expanding set of sources. Presumably, as the legislative text evolves, more and more applicable solutions will become available. It is worth following developments in this area as well.)

5. Interdependencies between the concepts of FRIA and DPIA

It is also clear from the compromise text that the data protection impact assessment and the fundamental rights impact assessment are linked at several points. The compromise text explicitly states that, where a data protection impact assessment is required on the basis of the GDPR or Directive 2016/680, the FRIA shall be conducted in conjunction with the DPIA and the DPIA shall be published as an addendum to the FRIA.

The main similarities and differences between the two types of impact assessment are: 

 

Legal background
  • DPIA: Art. 35 GDPR; Art. 27 of Directive 2016/680.
  • FRIA: Art. 29a of the EP committee compromise text of the AI Act.

Why?
  • DPIA: to evaluate the origin, nature, particularity and severity of the risk and to determine the appropriate measures.
  • FRIA: to efficiently ensure that fundamental rights are protected and to describe the measures or tools that will help mitigate the risks.

What?
  • DPIA: assessment of the impact of the envisaged processing operations on the protection of personal data.
  • FRIA: assessment of the system's impact in the specific context of use.

How?
  • DPIA: systematic and documented; the DPO must be involved; consultation with the DPA (if necessary).
  • FRIA: minimum elements defined; documented; a detailed plan to mitigate the risks; previously conducted assessments may be relied on; stakeholders involved; conducted in conjunction with the DPIA.

When?
  • DPIA: where the processing is likely to result in a high risk to the rights and freedoms of natural persons; prior to commencing the processing. In case of changes, a review is necessary.
  • FRIA: prior to putting a high-risk AI system into use. In case of changes, a review is necessary.

By whom?
  • DPIA: the data controller.
  • FRIA: the deployer (any person using an AI system under its authority).

Until the adoption of the AI Act, the rules on FRIA in the compromise text are likely to change; however, it is already worth addressing the obligations and requirements that may affect developers, providers and deployers of AI systems in the future. Applying the upcoming rules can be a serious challenge, especially since they will not be stand-alone rules: the requirements for AI systems in the upcoming AI Act will have to be applied in conjunction with other requirements originating from data protection, competition law, consumer protection, etc.
