GDPR

Adatvédelem mindenkinek / Data protection for everyone

Executive Order on Artificial Intelligence issued in the US

October 31, 2023, 11:30 - poklaszlo

On October 30, 2023, US President Biden signed an Executive Order (EO) on Artificial Intelligence. The adoption of this EO marks an important milestone in regulating AI: even though it is not comprehensive legislation on its subject matter, it covers very important aspects of AI development, including AI safety and security and building infrastructure for AI development in the US. (A fact sheet about the main points of the EO is available here.)

1. Definitions of key terms

The EO defines the basic terms to be used in the context of AI development.

Some basic definitions from the EO: 

"Artificial intelligence" or "AI":  a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.  Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

"AI model": a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.

"AI system": any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.

"Generative AI": the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content.  This can include images, videos, audio, text, and other digital content.

“Machine learning”: a set of techniques that can be used to train AI algorithms to improve performance at a task based on data.

2. Developing guidelines, standards, and best practices for AI safety and security

The EO requires the development of guidelines, standards, and best practices. Such guidelines and standards shall cover, among other areas, the following:

  • AI risk management, 
  • secure development practices, especially for generative AI and for dual-use foundation models, 
  • evaluation and auditing of AI capabilities (also with respect to cybersecurity and biosecurity), 
  • AI testing, including AI red-teaming tests, as well as developing and helping to ensure the availability of testing environments, such as testbeds, and supporting the design, development, and deployment of associated PETs (“AI red-teaming” means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. AI red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.), 
  • protection against the risks of using AI to engineer dangerous biological materials, 
  • detection of AI-generated content and authentication of official content (see also Point 3 below). 

3. Reducing the risks posed by synthetic content

It is also required to identify the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for:

  1. authenticating content and tracking its provenance;
  2. labeling synthetic content, such as using watermarking;
  3. detecting synthetic content;
  4. preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual);
  5. testing software used for the above purposes; and
  6. auditing and maintaining synthetic content.

“Synthetic content” means information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI. “Watermarking” means the act of embedding information, which is typically difficult to remove, into outputs created by AI — including into outputs such as photos, videos, audio clips, or text — for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.
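The EO does not prescribe any particular authentication or watermarking technique. As an illustration only, the core idea of attaching verifiable provenance information to generated output can be sketched with a keyed-hash tag; this is a simplified stand-in for real watermarking schemes (which typically embed the signal in the content itself), and the function names below are this sketch's own, not taken from the EO:

```python
import hmac
import hashlib

def tag_output(content: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 provenance tag to AI-generated content."""
    tag = hmac.new(key, content, hashlib.sha256).hexdigest().encode()
    return content + b"||" + tag

def verify_output(tagged: bytes, key: bytes) -> bool:
    """Check that the content matches its provenance tag."""
    content, _, tag = tagged.rpartition(b"||")
    expected = hmac.new(key, content, hashlib.sha256).hexdigest().encode()
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(tag, expected)
```

Any modification of the content, or verification with the wrong key, makes the check fail, which mirrors the EO's goal of "verifying the authenticity of the output" and tracking its provenance.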

4. Privacy-related actions

The EO also puts an emphasis on developing standards and guidelines for privacy protection in the context of AI development, with special regard to strengthening privacy by accelerating the development and use of privacy-preserving techniques. The EO also defines the term “Privacy-enhancing technology” (PET), which means any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality. These technological means may include secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy, and synthetic-data-generation tools. This is also sometimes referred to as “privacy-preserving technology”.
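Of the PETs listed, differential privacy is perhaps the simplest to illustrate. The sketch below shows the classic Laplace mechanism for a counting query: because adding or removing one person changes a count by at most 1 (sensitivity 1), adding Laplace noise with scale 1/ε bounds what any single individual's data reveals. This is an illustrative sketch only; the function names are invented for this example, not drawn from the EO:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(values, predicate, epsilon: float, rng: random.Random) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1, so Laplace noise with scale 1/epsilon
    suffices for the epsilon-DP guarantee.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

For example, `dp_count(ages, lambda a: a < 18, epsilon=1.0, rng)` returns the number of minors in a dataset plus a small random perturbation, so the published figure is close to the truth while no single record can be inferred from it.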

Besides the privacy-related provisions of the EO, the President also calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids. 

5. Promoting innovation and competition

The EO puts a focus also on the promotion of innovation in the field of AI to "unlock the technology’s potential to solve some of society’s most difficult challenges." This shall include the promotion of "a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities."

6. Protecting vulnerable groups

AI can bring benefits to consumers, students, and workers; however, it also raises the risk of injuring, misleading, or otherwise harming these groups. The relevant risks affecting the different groups shall be mitigated by adequate measures.

The Executive Order was not the only AI-related development on October 30: as part of the Hiroshima Process on AI, G7 leaders also agreed on International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers.

The following principles were agreed on:

  1. Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
  2. Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.
  3. Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.
  4. Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.
  5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures, in particular for organizations developing advanced AI systems.
  6. Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
  7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
  8. Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
  9. Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.
  10. Advance the development of and, where appropriate, adoption of international technical standards.
  11. Implement appropriate data input measures and protections for personal data and intellectual property.