GDPR

Adatvédelem mindenkinek / Data protection for everyone

OECD: new definition of Artificial Intelligence systems

13 November 2023, 11:00 - poklaszlo

Regulation of artificial intelligence is in the spotlight: the Executive Order of the President of the United States on AI was published on October 30, and the text of the AI Act is being finalized in the EU in the framework of trilogue negotiations between the Parliament, the Council and the Commission.

When it comes to legal regulation, it is always a key question what a given regulation covers, what its object and scope are, and what exactly is meant by the terms it uses. Basic definitions, their precision (or vagueness) and their interpretation can significantly influence when, and to what, a regulation applies.

Even very simple phenomena are often difficult to define precisely, and this is all the more true for concepts covering very complex phenomena. Artificial intelligence is just such a concept: it is very hard to define, or can be defined in many different ways, since it is essentially an umbrella category bringing together many different technologies.

The AI definitions used in various regulations, drafts and recommendations also differ from one another, to a smaller or larger extent (a very good comparative selection of the definitions appearing in different documents is available here).

The definition used in the OECD Recommendation adopted in 2019 (and amended on 8 November 2023) has often served as a reference point for newer regulatory initiatives on AI, and it is also the basis for the definition of AI systems that the European Parliament proposed for the AI Act (replacing the definition originally proposed by the Commission). The US President's Executive Order also contains a definition of AI very similar to the one in the OECD Recommendation.

The definition set out in the 2019 OECD Recommendation was the following:

AI system: a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy. 

Following the amendment of the 2019 definition, the OECD's new definition of an AI system reads as follows:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. 

Source: OECD (https://oecd.ai/en/ai-principles)

What has changed?

  • The definition no longer requires that the objectives be defined by humans.
  • The definition now makes explicit that the system works from the inputs it receives (it "infers" from them how to generate outputs).
  • On the output side, "content" (e.g. images, text) was added to the list of potential outputs. (This is obviously also a response to the recent rise of generative AI solutions.)
  • The term "real environments" was replaced by the term "physical environments". 
  • In addition to varying levels of autonomy, varying degrees of adaptiveness after deployment also appear as a feature of AI systems.

(For a brief description of the reasons for these changes, see the article about the new definition.)

Why is this interesting?

  • On the one hand, the aim of the new definition is to be more "future-proof", as technology has advanced considerably since the previous version was adopted in 2019.
  • On the other hand, the change also allows the OECD to contribute to the EU legislative process, as it was adopted in time for a corresponding definition to be included in the AI Act.

The OECD also published a paper in October that gives an overview of how different countries have implemented the OECD AI Principles in the four years since their publication. The OECD AI Principles set out a framework of ten principles: five values-based principles and five recommendations to governments for promoting and implementing responsible stewardship of trustworthy AI in their policies.

The values-based principles are the following:

  1. inclusive growth, sustainable development and well-being,
  2. human-centred values and fairness, 
  3. transparency and explainability,
  4. robustness, security and safety, and
  5. accountability.

The recommendations to governments are the following:

  1. investing in AI Research & Development, 
  2. fostering a digital ecosystem for AI,
  3. fostering an enabling policy environment for AI,
  4. AI skills, jobs and labour market transformation, and
  5. international and multi-stakeholder co-operation on AI.  