GDPR

Adatvédelem mindenkinek / Data protection for everyone

Artificial Intelligence red teaming

2024. január 12. 14:45 - poklaszlo

Red teaming is not a new testing technique, but recently it has received increasing attention in connection with testing the potential vulnerabilities of Artificial Intelligence (AI) systems. Even the Executive Order issued by the President of the United States on October 30, 2023 requires red team testing for certain AI systems.

In the following, I will briefly address the topic of red team testing, with the focus on the application of such testing  technique regarding AI systems.

1. What is red team testing and how does it differ from other testing methodologies?

The essence of red team testing is to look for vulnerabilities and flaws using adversarial methods and techniques, thus bringing the external point of view of the attacker into the testing. These types of testing simulate methods by external attackers to determine how well the attacked organization, its processes, and its technology can resist a targeted attack. Testing can be done manually, but it is also possible to use another AI system as "a testing team". (The latter technique is illustrated in a study published in 2022: Perez et al: "Red Teaming Language Models with Language Models", which examined testing a language model (LM) using another language model.)

Red team testing is not a new approach to IT security or cybersecurity (actually, the roots of the concept go back to military in the 1960s), but it can also be an important element of the risk mitigation toolkit regarding AI systems, especially high-risk AI systems. In addition to the red team, a blue team may also appear, which can play the role of the defensive party against the attack. (More recently, the so-called purple teaming has also gained ground, and such testing method combines the tasks and approaches of red and blue teams.)

Red teaming differs from other testing solutions, especially from penetration testing, in several ways. Penetration tests are also aimed at looking for vulnerabilities, but they typically focus on one system, last for a shorter period of time, work with less diverse toolkit, and do not necessarily incorporate the external attacker's approach and full range of tools into their approach. (For a more detailed comparison, please see, for example, this article or this other short article.)

2. Red team testing and AI

According to the US President's Executive Order (EO) on AI, the term "AI red-teaming" means "a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.  Artificial Intelligence red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system."

The EO does not require AI red-teaming to be carried out regarding all AI systems but only concerning AI systems that meet specific criteria. These systems typically use very high computing power. Based on the presidential decree, National Institute of Standards and Technology (NIST) will be responsible for developing further rules and standards for red team testing of AI systems (primarily concerning generativ AI models and dual-use foundation models). (For more information about NIST's responsibilities with the EO and developments over the past few months, see this NIST's summary.)

Prior to the EO, the White House also hosted an event to conduct red team testing of large language models (LLMs) in order to gain a better picture of the risks posed by using these models. In connection with the event, it was established that red teaming can be a useful tool in detecting new risks related to AI (not only in terms of security, but also in terms of threats of discrimination or violation of privacy), and based also on these experience, this form of testing eventually appeared in the EO as well.

There may be significant differences in the approach between red team testing of generative AI systems and other AI systems, given that generative AI solutions function differently from other AI systems and this difference should be reflected in the testing methodology. Accordingly, red teaming of generative AI systems is best done through "malicious" prompt and input generation. (Please see the following Harvard Business Review article by Andrew Burt, "How to Red Team a Gen AI Model.")

According to reports, based on the political compromise reached last December, the requirement of red team testing for general purpose AI systems may also appear in the EU AI Regulation.

Szólj hozzá!

A bejegyzés trackback címe:

https://gdpr.blog.hu/api/trackback/id/tr5718301223

Kommentek:

A hozzászólások a vonatkozó jogszabályok  értelmében felhasználói tartalomnak minősülnek, értük a szolgáltatás technikai  üzemeltetője semmilyen felelősséget nem vállal, azokat nem ellenőrzi. Kifogás esetén forduljon a blog szerkesztőjéhez. Részletek a  Felhasználási feltételekben és az adatvédelmi tájékoztatóban.

Nincsenek hozzászólások.
süti beállítások módosítása