
Deep dive into the AI Act - Part 4: what does the risk-based approach mean in practice?

24 June 2024, 11:15 - poklaszlo

The AI Act is classified as a regulation with a risk-based approach. But what does this mean in practice? What risks does the AI Act address, and how? I address this topic below, also taking into account the GDPR's risk-based approach and its similarities to, and possible differences from, that of the AI Act.

1. What does the risk-based approach mean in the AI Act?

A strong argument for regulating AI is that, in addition to the many opportunities that the development and deployment of AI solutions may bring, AI also entails serious risks:

At the same time, depending on the circumstances regarding its specific application, use, and level of technological development, AI may generate risks and cause harm to public interests and fundamental rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm. (See Recital (5), emphasis added)

It is perhaps no coincidence that, in various combinations, "risk" is one of the most frequently used terms in the AI Act. Among the definitions, too, risk ranks second, immediately after the definition of an AI system (in the AI Act, risk means "the combination of the probability of an occurrence of harm and the severity of that harm", Article 3, point (2) of the AI Act).
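
The regulation does not prescribe any method for combining these two factors. Purely as an illustration, the classic probability-times-severity risk matrix is one common way such a definition is operationalized in risk-management practice; the scales, labels and multiplication in the following Python sketch are my own assumptions, not anything found in the AI Act.

    from enum import IntEnum

    class Probability(IntEnum):
        RARE = 1
        POSSIBLE = 2
        LIKELY = 3

    class Severity(IntEnum):
        MINOR = 1
        SERIOUS = 2
        CRITICAL = 3

    def risk_score(probability: Probability, severity: Severity) -> int:
        """Combine probability and severity of harm into a single score.

        The AI Act only defines risk as "the combination of the probability
        of an occurrence of harm and the severity of that harm" (Article 3,
        point (2)); the multiplicative matrix used here is a widespread
        risk-management convention, not something the Regulation prescribes.
        """
        return int(probability) * int(severity)

    # A likely-but-minor harm and a rare-but-critical harm can score the same,
    # which is why both factors matter in the definition.
    print(risk_score(Probability.LIKELY, Severity.MINOR))   # 3
    print(risk_score(Probability.RARE, Severity.CRITICAL))  # 3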

However, it is also clear that different AI systems and different use cases for AI systems can vary significantly in terms of their potential risks and of how those risks can and should be managed. The recitals to the AI Act highlight the importance of a risk-based approach (see Recital (26), emphasis added):

In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed.

Accordingly, as is apparent from Recital (26), this risk-based approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. The Recital also immediately points out that the risk-based approach implies that certain AI practices should be prohibited due to unacceptably high risk (prohibited AI practices, Article 5 of the AI Act, e.g. social scoring systems), that high-risk AI systems must satisfy strict requirements and comply with a number of obligations (high-risk AI systems, Article 6 of the AI Act), and that, in principle, other lower-risk AI systems must comply with transparency obligations (Article 50 of the AI Act, e.g. chatbots).

There are also AI systems with minimal risk (e.g. spam filters) for which the regulation essentially lays down no additional requirements, although their ethical development and use remain important (see below), and certain obligations under the AI Act can also be applied to them through codes of conduct (see Article 95 of the AI Act regarding codes of conduct).
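
Taken together, the AI Act thus works with four broad risk tiers, with obligations escalating from tier to tier. The following toy sketch is purely my own illustration: the example systems are the ones mentioned in this post, and a real classification requires case-by-case legal analysis rather than a lookup table.

    from enum import Enum

    class RiskTier(Enum):
        """The AI Act's risk tiers as described above (illustrative labels)."""
        PROHIBITED = "unacceptable risk"  # Article 5: prohibited practices
        HIGH = "high risk"                # Article 6: strict requirements
        LIMITED = "limited risk"          # Article 50: transparency obligations
        MINIMAL = "minimal risk"          # Article 95: voluntary codes of conduct

    # Example systems mentioned in this post; a real assessment is case-by-case.
    EXAMPLES = {
        "social scoring system": RiskTier.PROHIBITED,
        "chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    OBLIGATIONS = {
        RiskTier.PROHIBITED: "practice must be eliminated or adapted",
        RiskTier.HIGH: "strict requirements and a number of obligations",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "voluntary codes of conduct only",
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.value} -> {OBLIGATIONS[tier]}")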

The recitals to the AI Act also draw attention to the fact that, although the regulation is based on a risk-based approach, AI development and AI use should comply with ethical guidelines. Taking those ethical principles into account is therefore an important starting point for all AI systems, including those presenting a lower risk, and should, where possible, be translated into the design and use of AI models: "All stakeholders, including industry, academia, civil society and standardisation organisations, are encouraged to take into account, as appropriate, the ethical principles for the development of voluntary best practices and standards." (see Recital (27) of the AI Act)

The Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on Artificial Intelligence (2019) set out the following seven principles:

  • human agency and oversight;
  • technical robustness and safety;
  • privacy and data governance;
  • transparency;
  • diversity, non-discrimination and fairness;
  • societal and environmental well-being; as well as
  • accountability.

 

[Figure omitted. Source: High-Level Expert Group on Artificial Intelligence, "Ethics Guidelines for Trustworthy AI" (2019), p. 15]

2. How does the risk-based approach appear in practice?

For the risk-based approach to be applied in practice, it is essential to classify AI systems properly and to determine exactly which actors (providers, deployers, importers, etc.) are subject to which requirements, and by whom and how those requirements should be met.

Once the appropriate classification and role have been determined, it becomes possible to identify which requirements must be applied (or, where a prohibited practice is identified, to eliminate or adapt that practice accordingly).
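
As a rough illustration of how classification and role together determine the applicable duties, the sketch below pairs an actor role with a risk tier. The (role, tier) combinations and the one-line duty descriptions are my own simplified assumptions; the AI Act allocates obligations in far more detail, with separate provisions for providers, deployers, importers and distributors.

    # Hypothetical, simplified mapping: (role, risk tier) -> headline duty.
    # The real allocation of obligations in the AI Act is far more granular.
    DUTIES = {
        ("provider", "high"): "ensure conformity before placing on the market",
        ("deployer", "high"): "use the system per its instructions and monitor it",
        ("importer", "high"): "verify the provider's conformity steps",
        ("provider", "limited"): "meet transparency obligations",
    }

    def headline_duty(role: str, tier: str) -> str:
        return DUTIES.get((role, tier), "check the AI Act for this combination")

    print(headline_duty("provider", "high"))
    print(headline_duty("deployer", "minimal"))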

The risk-based approach is also reflected in the legal consequences and, where applicable, in the sanctions (fines) to be expected in the case of non-fulfilment or improper fulfilment of certain obligations. For example, non-compliance with the prohibition of the AI practices referred to in Article 5 of the AI Act is subject to administrative fines of up to EUR 35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher (see Article 99 of the AI Act). For other infringements, smaller (but still very significant) maximum fines apply. (I'll cover the topic of possible sanctions under the AI Act in more detail in a separate post.)
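
The "whichever is higher" mechanics of the Article 5 fine cap can be made concrete with a short computation (a minimal sketch; the function name and the handling of non-undertakings are my own simplifications):

    def max_fine_article_5_eur(worldwide_annual_turnover_eur: float | None = None) -> float:
        """Upper limit of fines for Article 5 infringements (Article 99 AI Act):
        EUR 35 million or, for an undertaking, 7% of its total worldwide annual
        turnover for the preceding financial year, whichever is higher."""
        flat_cap = 35_000_000.0
        if worldwide_annual_turnover_eur is None:  # offender is not an undertaking
            return flat_cap
        return max(flat_cap, 0.07 * worldwide_annual_turnover_eur)

    # For an undertaking with EUR 1 billion turnover, 7% (EUR 70 million)
    # exceeds the EUR 35 million flat cap; at EUR 100 million turnover it does not.
    print(f"{max_fine_article_5_eur(1_000_000_000):,.0f}")  # 70,000,000
    print(f"{max_fine_article_5_eur(100_000_000):,.0f}")    # 35,000,000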

The requirements for general-purpose AI models stand somewhat apart from the above risk-based classification; the AI Act treats them separately. At the same time, the regulation also lays down additional requirements for general-purpose AI models posing systemic risk.

(In further posts of the series "Deep Dive into the AI Act", I will deal in detail with classification criteria, e.g. which practices are defined as prohibited practices in the AI Act and how AI systems can be classified as high-risk AI systems. I will also address the requirements for each risk category, in particular high-risk AI systems and systems with limited risk. I will also discuss the regulation of general-purpose AI models.)

3. How does the risk-based approach under the AI Act compare to similar approaches in other legislation, in particular the GDPR?

The so-called risk-based approach does not appear for the first time in the AI Act; it is a regulatory solution that appears in many pieces of legislation, albeit in slightly different forms (see below). A similar approach can be found in cybersecurity rules and in the EU's General Data Protection Regulation (GDPR), adopted in 2016 and applicable since May 2018. Given that obligations under different regulations may be 'interrelated' (certain cybersecurity rules also apply to AI systems, and personal data to which the GDPR applies is processed in connection with the development and use of many AI systems), it is worth briefly examining how the risk-based approach taken in the AI Act relates to the one taken in the GDPR.

Although the term "risk-based approach" is not mentioned in the GDPR, the concept of risk, the management of risks associated with data processing, and the need for risk-proportionate measures appear repeatedly in its text. We must also remember that the GDPR is a regulation specifically designed to protect a particular fundamental right (the protection of personal data), so its provisions must be interpreted on that basis.

The Article 29 Working Party (i.e. the "predecessor" of the European Data Protection Board) also discussed the risk-based approach. In its statement from 2014, it drew attention to the fact that this approach was already present in data protection rules even before the GDPR (in Directive 95/46/EC) and can be found at several points in the data protection regulation. Among other things, the additional obligations relating to the protection of sensitive data and to data security measures, as well as certain further obligations (e.g. carrying out data protection impact assessments, consulting supervisory authorities, reporting data breaches, etc.), can be attributed to the risk-based approach.

The risk-based approach appearing in the GDPR can perhaps be briefly summarized as follows: the level of measures expected of data controllers and processors (i.e. the efforts they must make to comply and to demonstrate accountability) depends primarily on the magnitude of the risks associated with the given data processing, but the basic expectation is that personal data must always be processed in accordance with the requirements of fundamental rights protection (this is clear, for example, from the fact that the data processing principles must be applied in all cases, regardless of the level of risk associated with the processing).

Compared to the risk-based approach in the GDPR, the AI Act belongs more to the family of risk-based regulations where the degree of risk – in terms of human life, safety, fundamental rights, etc. – determines, on the one hand, what should be regulated at all (i.e. the level of risk above which the regulation covers a given AI system) and, on the other, what requirements should apply at each risk level.

Based on the above, it can be concluded that in the case of the AI Act the risk-based approach already serves as a filter for what is covered by the regulation at all, while in the case of the GDPR there is essentially no such risk-based "pre-screening"; instead, the risk-based approach is reflected in the expected intensity of compliance and accountability measures.

In his study comparing the risk-based approaches of the GDPR and the AI Act, Raphaël Gellert summarises the difference between the two regulatory solutions as follows: "So, contrary to the GDPR where the risk-based approach serves mainly to determine the intensity of compliance measures, the risk-based approach in the AIA has a much more substantive scope since it determines what gets regulated in the first place, with the risk that some AI systems are unduly categorised as non-high risk [...]." (See Raphaël Gellert, The role of the risk-based approach in the General data protection Regulation and in the European Commission's proposed Artificial Intelligence Act: Business as usual?, Journal of Ethics and Legal Technologies, Volume 3(2), November 2021, p. 21, emphasis added.)

What follows from the differences in risk-based approaches? What does this mean in practice?

In many cases where the AI Act applies, compliance with the GDPR must also be ensured with regard to the processing of personal data. The two risk-based approaches then come together as the two sets of rules are applied jointly. It should be borne in mind that, given the differences in approach and in the nature of the risks involved, there may be situations where the two lead to "different" results, i.e. higher-risk processing takes place in relation to an AI system classified as lower risk (e.g. the processing of sensitive data when using a chatbot); data protection measures and the level of effort must then be adapted accordingly. (Of course, the reverse can also occur: low-risk data processing with an AI system that is high-risk under the AI Act.)
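
To make this divergence concrete, here is a toy sketch (my own illustration; the attributes and the threshold for "high" GDPR risk are simplified assumptions loosely inspired by Article 35 GDPR on data protection impact assessments):

    from dataclasses import dataclass

    @dataclass
    class AIUseCase:
        name: str
        ai_act_tier: str      # e.g. "limited" for a chatbot under Article 50
        sensitive_data: bool  # special categories of personal data involved?
        large_scale: bool     # processing on a large scale?

    def gdpr_risk_is_high(uc: AIUseCase) -> bool:
        """Crude illustration: GDPR risk is assessed on the processing itself
        (cf. Article 35 GDPR), independently of the AI Act's risk tier."""
        return uc.sensitive_data and uc.large_scale

    chatbot = AIUseCase("customer-service chatbot", ai_act_tier="limited",
                        sensitive_data=True, large_scale=True)

    # The regimes can diverge: limited risk under the AI Act,
    # yet high-risk processing under the GDPR.
    verdict = "high GDPR risk" if gdpr_risk_is_high(chatbot) else "lower GDPR risk"
    print(f"{chatbot.name}: AI Act tier = {chatbot.ai_act_tier}, {verdict}")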

At the same time, "synergies" between the different sets of rules should also be exploited to ensure better compliance; for example, transparency requirements or data governance obligations under the AI Act could build on existing data protection measures. In addition, a risk classification made under one regime may in the future help to classify risks more accurately under the other, as a finding of high risk under the criteria of one regulation may suggest caution when applying the other.
