
Deep Dive into the AI Act - Part 6: AI literacy

25 July 2024, 12:00 - poklaszlo

In connection with the AI Act, which will enter into force shortly, on 1 August 2024 (and will become applicable in several steps thereafter), we most often talk about prohibited AI practices, high-risk AI systems and the related obligations of providers and deployers, or the specific rules for general-purpose AI models. However, the AI Act also contains a concept that, although it may be hidden in the shadow of these other rules, is a very important element for the proper functioning of the entire artificial intelligence ecosystem and for the spread of socially useful AI. This concept is AI literacy.

1. What is AI literacy?

The definition in the AI Act makes it clear what the legislator meant by the notion of AI literacy (see Article 3, point 56):

AI literacy means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.

Recital 20 of the AI Act provides further clues as to why it is important to ensure AI literacy for providers, deployers and affected persons: the necessary, context-specific knowledge is essential for making informed decisions about AI systems.

What skills and knowledge can AI literacy include?

As mentioned in the Recitals, the skills and knowledge that may fall within this scope depend on the context. The relevant context also covers who is involved in the development and/or operation of the AI system and in what role: as a developer, a deployer or "simply" as a user. As stated in Recital 20:

[...] Those notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system’s development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them.

AI literacy should be placed in a broader context and linked to the concepts of digital literacy and data literacy and the actions that follow from them, since knowledge about AI should build on and complement broader digital skills and competences in a technology-specific way.

The topic has been a priority in the EU for some time now: almost 90% of jobs require at least basic digital skills, while around 32% of Europeans lack such skills (for more relevant data and further material on this topic, please see this site). Of course, digital literacy is becoming indispensable not only in the EU but worldwide, and international organisations such as UNESCO, UNICEF and the World Bank have also drawn attention to the topic.

2. Who needs to take action on AI literacy under the AI Act?

Article 4 of the AI Act sets out obligations – primarily for providers and deployers – in relation to AI literacy as follows:

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

It is worth bearing in mind that these rules, similarly to the provisions on prohibited AI practices, will apply as early as 6 months after the entry into force of the AI Act, i.e. from 2 February 2025.

Providers and deployers should therefore be ready to comply with the AI literacy obligations by February next year. What does this mean in practice? First of all, education, training and user manuals adapted to the specifics of the respective AI systems, the risks they pose, their scope of application, the characteristics of the data used, etc. should be considered to meet this requirement. The emphasis - as mentioned above - is on the context, the stakeholders and the risks inherent in the AI system. It is not enough to address this obligation in a general way; it must be ensured that the knowledge is relevant and the skills are genuinely applicable.

Recital 91 of the AI Act also refers to this in the context of high-risk AI systems: "[...] Furthermore, deployers should ensure that the persons assigned to implement the instructions for use and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks. [...]"

Of course, the obligation should not be limited to high-risk AI systems but should apply to all AI systems. Recital 165 also points in this direction: "[...] Providers and, as appropriate, deployers of all AI systems, high-risk or not, and AI models should also be encouraged to apply on a voluntary basis additional requirements related, for example, to the elements of the Union’s Ethics Guidelines for Trustworthy AI, environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI systems [...]"

Although Article 4 of the AI Act names providers and deployers as the addressees of the AI literacy requirements, other actors are also given AI literacy-related tasks under the AI Act: among the tasks of the European AI Board, it appears that the Board is, in particular, to "support the Commission in promoting AI literacy, public awareness and understanding of the benefits, risks, safeguards and rights and obligations in relation to the use of AI systems." (Article 66, point (f))

"The AI Office and the Member States shall facilitate the drawing up of codes of conduct concerning the voluntary application, including by deployers, of specific requirements to all AI systems, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives, including elements such as, but not limited to: [...] promoting AI literacy, in particular that of persons dealing with the development, operation and use of AI. [...]" (Art. 95 (2), Point c))
