The Hungarian Parliament has significantly restricted the fundamental right to freedom of assembly in Hungary. Several aspects of this disproportionate and unnecessary restriction of fundamental rights deserve attention; much of it has focused on the fact that participants in an assembly held in violation of the prohibition also commit a misdemeanor, for which they can be fined (a fine of up to EUR 500 can be imposed).
In the process of detecting violations and applying legal consequences, facial image analysis (facial recognition) is used to identify, even from a distance, citizens who appear at an unauthorized assembly. In this post, I focus on the use of facial recognition systems, because this is a key instrument of the restriction of the fundamental right of assembly and a serious interference with the freedom of citizens: the mere possibility of using the tool can discourage many people from exercising their democratic rights (presumably, this was the purpose of the legislation).
1. What is a facial recognition system and what is it good for?
Facial recognition is a form of biometric identification. Biometric identification can be defined as identification based on biometric characteristics (face, fingerprint, hand pattern, vein pattern, voice, iris, retina, DNA).
The process of biometric identification typically consists of the following main steps: (i) recording raw data and extracting certain characteristics from it with the help of an algorithm, based on which a so-called biometric template is created (the biometric template is a series of data that cannot be reversed, so the raw biometric sample cannot be retrieved from it); (ii) storage: the biometric template is stored, while the raw sample is typically not retained; (iii) application: in essence, this is the step in which identification takes place, by comparison with the stored biometric template.
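The three steps above can be sketched in a few lines of Python. This is a purely illustrative toy: `extract_template` is a hash-based stand-in for a real feature-extraction model (a real extractor is robust to small variations in the image, which a hash is not), but it illustrates the one-way property: the template can be compared, not reversed.

```python
import hashlib
import math

def extract_template(raw_image: bytes) -> list[float]:
    """Toy feature extractor: derives a fixed-length numeric template
    from the raw image bytes. A real system would use a trained model;
    the one-way property means the raw image cannot be reconstructed
    from the template."""
    digest = hashlib.sha256(raw_image).digest()
    return [b / 255.0 for b in digest]

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two templates (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# (i) record raw data and extract the template
template = extract_template(b"raw-face-image")
# (ii) store only the template; the raw sample is discarded
stored = {"subject-001": template}
# (iii) apply: compare a new capture against the stored template
probe = extract_template(b"raw-face-image")
is_match = similarity(stored["subject-001"], probe) > 0.99
```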
In its Opinion 2/2012, the Article 29 Working Party gave the following definition of facial recognition (see p. 3):
Facial recognition is the automatic processing of digital images which contain the faces of individuals for the purpose of identification, authentication/verification or categorisation of those individuals.
According to the European Union's White Paper on Artificial Intelligence (see p. 21, fn. 56):
In connection to facial recognition, identification means that the template of a person’s facial image is compared to many other templates stored in a database to find out if his or her image is stored there. Authentication (or verification) on the other hand is often referred to as one-to-one matching. It enables the comparison of two biometric templates, usually assumed to belong to the same individual. Two biometric templates are compared to determine if the person shown on the two images is the same person. Such a procedure is, for example, used at Automated Border Control (ABC) gates used for border checks at airports.
Facial recognition is therefore a term covering different ways of using facial recognition technology (FRT): (i) identification: comparing a biometric template with several or many other templates (i.e. 1:N analysis), (ii) authentication or verification: 1:1 comparison, or (iii) classification (categorisation): division into categories according to some criteria (e.g., age, gender). Identification and authentication basically serve the purpose of ascertaining the identity of the individual (data subject), i.e. individual identification, while categorisation serves to classify the individual (data subject) into a group based on his or her characteristics.
2. What regulations apply to facial recognition in Hungary?
It is important to emphasize that the use of facial recognition (facial analysis) technology itself constitutes a significant restriction of fundamental rights and a serious interference with the privacy of individuals. Such restrictions must be absolutely necessary and proportionate to the purpose.
The Hungarian Data Protection Authority (NAIH) emphasized in its 2015 opinion on Act CLXXXVIII of 2015 on the Facial Image Analysis Register and the Facial Image Analysis System (the "Facial Image Act") that "[...] biometric data processing is one of the data processing technologies that have a significant impact on the privacy of data subjects, or that are difficult to predict due to the nature and purpose of the technology used, and the State has a particular responsibility to ensure that the proliferation of biometric data processing does not undermine privacy and personal data protection rights. As a matter of principle, the use of a central biometric reference register for the whole population should be limited to the narrowest possible scope. Provided that appropriate legal safeguards are in place, the possible purposes of mandatory biometric identification and identity verification may be constitutionally acceptable as being in the fundamental interests of the State, such as law enforcement and national security. [...]" (NAIH/2015/3009/9/J, point 1, pp. 1-2, emphasis added). The Hungarian DPA had already formulated these aspects at the beginning of the legislative process, in connection with the regulatory concept (see NAIH-2264-2/2014/J). In that opinion, the Hungarian DPA also states that "[...] secret information collection in public places based on biometric identification or other mandatory surveillance must not be on a massive scale [...]" (see p. 4, emphasis added). Further opinions of the Hungarian DPA on the subject are available here in Hungarian, although clicking on the links of some resolutions unfortunately returns an error message.
In connection with the Facial Image Act (the text of the Act is available here in Hungarian), the detailed rules of operation of the facial image analysis system are set out in Decree 78/2015 (XII.23.) of the Ministry of Interior (see the text here in Hungarian). The Hungarian Institute of Forensic Sciences (HIFS; in Hungarian: Nemzeti Szakértői és Kutató Központ, NSZKK) was designated as a body carrying out image analysis activities (see Government Decree 350/2016 (XI.18.), Art. 5), which started the analysis activity on 1 January 2017. The operator of the facial image analysis register and facial image analysis system is a state-owned company (IdomSoft Zrt).
Typically, facial image analysis carried out on the basis of the Facial Image Act does not take place in real time, but in some cases it must be completed within a very short period of time. According to the Decree of the Ministry of the Interior, "if the body entitled to use the facial analysis service declares in its application for the use of the facial analysis service that there is a risk to the safety of a minor, a situation posing a direct and serious threat to public security or national security or a priority law enforcement interest in the proceedings pending before it, the body carrying out the facial analysis service shall act without delay. [...] In the case of an exceptional procedure, the body carrying out the facial image analysis shall, within 24 hours of receipt of the request, provide the requesting body with the technical connecting number of the facial image profile obtained as a result of the evaluation activity and managed in the facial image profile register, as specified in Article 4, or inform the requesting body of the fact of non-conformity or of the ineffectiveness of the evaluation activity with regard to the facial image profile register." (see Decree of the Ministry of Interior, Art. 5 (1) and (4), emphasis added) (Incidentally, in connection with tasks related to measures applicable under a special legal order or to the management of the crisis situation caused by mass immigration, the analysis must be carried out even faster, within 2 hours. See Decree of the Ministry of the Interior, Art. 5 (5).)
It is also worth mentioning the Act on the Police, which provides for the possibility of automated comparison in the context of the identification of individuals by the police (Art. 29): "(4) In the event of refusal to provide proof of identity, the person under identification may be detained for the purpose of establishing identity, and in the event of failure to establish identity, if the identity cannot be established by any other means or if there is no credible evidence to prove it, the person may be photographed, fingerprinted and his/her external physical characteristics may be recorded by observation and measurement. (4a) The photograph taken pursuant to paragraph (4) may be checked on the spot for the purpose of establishing identity, using the automated comparison regulated by the Act on the Facial Image Analysis Register and the Facial Image Analysis System, in accordance with the rules laid down therein, by using the device provided for that purpose." (emphasis added) [Further details regarding the use of automated comparison by the police are set out in ORFK Instruction no. 11/2016 (IV. 29.).]
3. What does the facial analysis process look like in Hungary?
Based on the Facial Image Act and the relevant implementing rules, the facial image analysis process is carried out in the following main steps:
- The body entitled to use the service contacts the Hungarian Institute of Forensic Sciences in connection with a case in order to have an analysis carried out. An image (e.g. a still extracted from a public camera recording) is submitted, on the basis of which the identification is requested.
- The analysis is carried out by two analysts of the Facial Image Recognition Analysis Department of the Hungarian Institute of Forensic Sciences, independently of each other, using the software provided for this purpose. (The analysts perform the analysis without knowing whose photograph they are analyzing or for what purpose.)
- Based on the software analysis, a list is made of the most similar images in the reference database.
- Analysts select the ones from the list above that are most similar to the submitted image.
- Before sending the result, the analysts jointly decide what to send to the requesting body as a result.
- Once the evaluation activity has been completed, only the connection code assigned to the relevant facial image profile will be provided to the requesting body. (The body entitled to request data will receive the personal data related to the connection code provided in a separate procedure.)
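The expert-driven process above can be summarised in a short sketch. The ranking and the reconciliation rule shown here are assumptions for illustration (the rules only say the two analysts decide jointly; requiring that both independently selected the same profile is one plausible way to model that), and the profile names and scores are invented:

```python
def software_candidates(scores: dict[str, float], top_n: int = 5) -> list[str]:
    """Step 3: the software ranks the reference database and returns
    a list of the most similar facial image profiles."""
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

def joint_result(analyst_a: set[str], analyst_b: set[str]) -> set[str]:
    """Steps 4-5: two analysts select independently; modelled here as
    forwarding only the profiles both of them picked (an assumption,
    not the literal rule)."""
    return analyst_a & analyst_b

# hypothetical similarity scores produced by the matching software
scores = {"profile-17": 0.91, "profile-42": 0.88, "profile-03": 0.40}
candidates = software_candidates(scores, top_n=2)
agreed = joint_result({"profile-17", "profile-42"}, {"profile-17"})
# only the connection code of the agreed profile is sent to the
# requesting body, not the underlying personal data
```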
Unlike in the above process, analysts are not involved in automated comparison. The process of automated identification is as follows, as set out in the Hungarian DPA's Annual Report for 2019:
- If a person who is requested by the police to identify himself is unable to prove his identity, the police officer carrying out the measure will take a photo of the person subject to the measure using a device equipped with the NOVA.mobil application (which is part of the “RoboCop system”).
- The application offers five hits from the central biometric facial image profile register.
- From the hits, the police officer decides which one matches the person subject to identification.
- After that, the application downloads the data necessary for the identification of the person selected on the basis of the facial image from the central personal data and address register.
- Facial image files are automatically deleted on the mobile device used by the police officer.
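The automated, on-the-spot flow can be sketched as follows. Every name here is illustrative (this is not the real NOVA.mobil interface, and the distance function is a toy stand-in for a real biometric comparison); the point is the division of labour: the software only proposes candidates, the officer selects, and the captured image is deleted from the device.

```python
def distance(a: bytes, b: bytes) -> int:
    """Toy distance between two captured images (a stand-in for a
    real biometric comparison)."""
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def identify_on_the_spot(photo: bytes,
                         register: dict[str, bytes],
                         personal_data: dict[str, dict],
                         pick) -> dict:
    """1. photo taken on the spot; 2. the app offers the five closest
    profiles from the central register; 3. the officer (pick) decides;
    4. personal data is downloaded for the chosen profile; 5. the
    facial image is deleted from the device."""
    hits = sorted(register, key=lambda pid: distance(photo, register[pid]))[:5]
    chosen = pick(hits)          # human decision, not the software's
    result = personal_data[chosen]
    del photo                    # image removed from the mobile device
    return result
```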
Sources used for the above description of the analysis process (in addition to the relevant legislation) include:
- Gergely Gárdonyi: Still Image Face Recognition in Hungary (in Hungarian, Belügyi Szemle, 2021/7, pp. 1133-1148)
- Ágota Németh: The role of facial recognition in law enforcement work (in Hungarian, Magyar Rendészet, 2022/2, pp. 171-182)
- Report of the National Authority for Data Protection and Freedom of Information on its activities in 2019 (in Hungarian, 2020, Chapter V.4., Point 5, p. 137)
4. What does the recently adopted act say about the use of facial recognition technology (facial image analysis)?
The act on the restriction of the right of assembly was passed by the Hungarian Parliament in record time. The bill was submitted to the Hungarian Parliament on 17 March 2025, and the adopted act, Act no. III of 2025 on the amendment of Act LV of 2018 on the Right of Assembly related to the protection of children and related acts, was already published in the Hungarian Official Gazette on 18 March (Magyar Közlöny, Issue 29 of 2025).
Regarding the legislative process, it's also worth referring to the Hungarian DPA's opinion quoted above, which states that "[...] a central biometric facial reference database would significantly limit the right to the protection of personal data for a large number of citizens. This is a matter of such magnitude that it requires the opinions of the Hungarian people to be known and respected. Therefore, we recommend that the Ministry of the Interior, before any public decision is taken on the matter, should as soon as possible make the relevant plans public and initiate a public debate on the feasibility of introducing biometric facial profile registration for the entire population of Hungary. In our opinion, the five days allowed for the public debate are not enough for the public to be informed about the draft law, to form an informed opinion and to provide feedback to the ministry preparing the legislation" (NAIH/2015/3009/9/J, Point 10, p. 5, emphasis added).
The amendments affecting the Facial Image Act are as follows:
- Regarding the purposes of keeping the facial image profile register, the following clarification has been made (see Art. 3(3)(w)): "preventing, deterring, detecting and disrupting misdemeanors and bringing offenders to justice."
- The scope of bodies entitled to use facial image analysis activities has also been amended (Art. 9 (18)): "(18) For the purposes set out in point (w) of paragraph (3) of Article 3, the court, the misdemeanor authority and the body conducting the preparatory inquiry in proceedings in respect of a misdemeanor [shall be entitled] to use the facial image analysis services of the body carrying out the facial image analysis activity in order to establish the identity of the offender and to verify the identity of the offender by using the facial image available to it or recorded by it." (The essence of the amendment is to extend the possibility of using facial image analysis to all misdemeanors, not only to misdemeanors punishable by imprisonment.)
- The possibility of using automated comparison has also been extended to all misdemeanors (see the change in the title of subtitle 9/A).
The Misdemeanor Act has also been amended in connection with facial image analysis to the extent that the following text has been added to such Act (see Art. 56 (5)): "In order to establish the identity of a person suspected of having committed a misdemeanor, if the offender is unknown, the court, the misdemeanor authority, the body conducting the preparatory procedure may use the face analysis activity of the body conducting the face analysis activity, as defined in the Act on the Facial Image Analysis Register and the Facial Image Analysis System."
Overall, the amendment does not substantially change the facial image analysis activity itself; its effect is that facial image analysis, including automated comparison, is no longer limited to "preventing, deterring, detecting and disrupting misdemeanors punishable by misdemeanor imprisonment, as well as bringing offenders to justice", but can be applied to ALL misdemeanors. This in itself raises serious concerns, as it makes a very intrusive measure available in the event of any misdemeanor.
5. What are the main concerns about the use of facial recognition for law enforcement purposes?
There is a wide literature on the subject identifying many risks related to FRT and its use for various purposes. Below, I highlight only a few of the most important ones:
- Excessive restrictions on fundamental rights (in particular respect for private and family life, protection of personal data).
- Ethical issues related to the use of FRT: questions of fairness, necessity and proportionality. The application of FRT also raises the prospect of mass surveillance, especially in view of the increasing number of situations in which recordings of individuals can be made; the interconnection of these systems may amplify the negative impact.
- Overuse, i.e. use cases where this technology is not required.
- Explicit data protection concerns in general in connection with the use of FRT and, where applicable, in connection with the adequacy of the guarantee measures introduced during the use of such intrusive technology (see e.g., the Hungarian DPA's opinions in this regard).
- The application of this technology is not sufficiently transparent for those affected.
- In connection with the use of such technology, the question of accuracy and reliability always arises. FRT is evolving, but errors and inaccuracies can still occur to a very significant extent, which in turn can have very serious consequences for the person concerned, especially in the case of law enforcement use. The technical limitations of the system can also play a significant role in terms of accuracy (e.g. poor visibility, inappropriate angle, inadequate resolution, etc.). (It is no coincidence that, as a general rule, two analysts carry out the activity independently of each other, while applying a number of guarantee rules.)
- It is also related to accuracy that there is a very significant risk of discrimination. There is a possibility of a higher proportion of errors and mistakes in case of certain groups (e.g. minorities, older people, etc.).
- The adequate availability and effectiveness of the guarantees and legal remedies related to the procedure based on FRT may be questionable. There may also be concerns about the adequacy of human oversight (especially for automated applications).
- Security concerns due to the sensitivity of data and activity, any data leakage or misuse can be particularly serious.
Some resources on this topic:
- Gültekin-Várkonyi, G. (2024). Navigating data governance risks: Facial recognition in law enforcement under EU legislation (Internet Policy Review, 13(3), https://doi.org/10.14763/2024.3.1798)
- Hafiz Sheikh Adnan Ahmed: Facial Recognition Technology and Privacy Concerns (ISACA, 21.12.2022)
- World Economic Forum: A Policy Framework for Responsible Limits on Facial Recognition Use Case: Law Enforcement Investigations (2022)
6. How does the AI Act affect the use of AI systems for facial recognition activities?
6.1. Purpose of the AI Act
The AI Act states immediately in its Article 1 (emphasis added):
The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
The above is explained in a little more detail in Recital (1) of the AI Act (emphasis added):
The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. [...]
However, the AI Act does not stop there, Recital (28) states (emphasis added)
Aside from the many beneficial uses of AI, it can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights enshrined in the Charter, including the right to non-discrimination, to data protection and to privacy and the rights of the child.
6.2 Prohibited AI practices
The AI Act becomes applicable in several stages. The provisions on prohibited AI practices (Article 5 of the AI Act) became applicable 6 months after the entry into force of the AI Act, i.e., on February 2, 2025. The AI Act prohibits several practices that pose an unacceptably high risk (e.g., the creation of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, social scoring systems, AI systems intended for risk assessment of the likelihood of crimes, etc.). (More info about the definition of AI systems is available here.)
These prohibited AI practices include, as a general rule, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:
- the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
- the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
- the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
(Annex II of the AI Act contains serious crimes such as terrorism, human trafficking, sexual exploitation of children and child pornography, illicit trafficking in drugs or psychotropic substances, etc.)
The reason for the prohibition (and the narrow exceptions for law enforcement purposes) is clear (see Recital (32) of the AI Act, emphasis added):
The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is particularly intrusive to the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. [...]
It is clear that this is precisely the aim of the current Hungarian legislation: to discourage the free exercise of freedom of assembly and other fundamental rights, including freedom of expression.
To define the framework for the application of this prohibition, it is necessary to clarify the related concepts. According to the AI Act (see Article 3, paragraphs 35 and 41-43):
"biometric identification" means the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database;
"remote biometric identification system" means an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database;
"real-time remote biometric identification system" means a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention;
"post-remote biometric identification system" means a remote biometric identification system other than a real-time remote biometric identification system.
Let's look at the individual elements that need to be interpreted in order to examine the possible application of the prohibition on the use of real-time remote biometric identification systems:
- identification is carried out using a remote biometric identification system,
- identification takes place in real-time,
- in publicly accessible spaces,
- for law enforcement purposes.
Of the above, the real-time use of remote biometric identification perhaps deserves a little more explanation in the present case, because the other elements seem relatively clear (i.e., the use of a remote biometric identification system applying facial image analysis in publicly accessible spaces by law enforcement authorities).
The AI Act does not define the concept of "real-time", but Recital (17) sets out: "[...] In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems concerned by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data has already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned." (emphasis added)
The Commission Guidelines on prohibited artificial intelligence practices also address this issue, providing further guidance on Recital (17) of the AI Act quoted above. Generally speaking, the Guidelines consider "a delay significant at least when the person is likely to have left the place where the biometric data was taken" (see paragraph (310), p. 101). The Guidelines add that "[w]hen a law enforcement authority covertly takes a picture of a person via a mobile device and submits it to a database for immediate search, depending on the circumstances, this may fall under the prohibition of Article 5(1)(h) AI Act" (see paragraph (312), p. 101).
As we have seen above, the new Hungarian legislation provides for the use of facial analysis essentially for post-identification purposes. This means that in principle, remote biometric identification is not carried out in real-time. (Under the Police Act, the identification of an individual by means of automated comparison can be considered as real-time, but in this case the identification is not remote in the sense that it is carried out with the knowledge and cooperation of the person subject to the identification, since the photograph necessary for identification is taken on the spot with the knowledge of the person concerned. See also paragraph (312) of the Commission Guidelines above.)
As pointed out in the Commission Guidelines, "[...] even where an AI system is not prohibited by the AI Act, its use could still be prohibited or unlawful based on other primary or secondary Union law (e.g., because of the failure to respect fundamental rights in a given case, such as the lack of a legal basis for the processing of personal data required under data protection law, discrimination prohibited by Union law, etc.)." (see paragraph (43), p. 14; see also Mario Guglielmetti's post on this aspect.)
6.3. Post-remote biometric identification
Of course, the fact that in a given case the facial analysis activity used in connection with the restriction of the right of assembly does not fall within the scope of prohibited AI practices does not mean that the AI Act will not - in the future - apply to similar solutions. The AI Act also contains rules on post-remote biometric identification systems. These systems are typically considered high-risk AI systems and are therefore subject to enhanced requirements, and Article 26(10) of the AI Act explicitly provides for additional requirements in the case of the use of high-risk AI systems for post-remote biometric identification (the requirements of the AI Act for high-risk AI systems will become applicable from 2 August 2026).
It is also worth reading the preamble of the AI Act regarding post-remote biometric identification systems (see Recital 95, emphasis added):
Without prejudice to applicable Union law, in particular Regulation (EU) 2016/679 and Directive (EU) 2016/680, considering the intrusive nature of post-remote biometric identification systems, the use of post-remote biometric identification systems should be subject to safeguards. Post-remote biometric identification systems should always be used in a way that is proportionate, legitimate and strictly necessary, and thus targeted, in terms of the individuals to be identified, the location, temporal scope and based on a closed data set of legally acquired video footage. In any case, post-remote biometric identification systems should not be used in the framework of law enforcement to lead to indiscriminate surveillance. The conditions for post-remote biometric identification should in any case not provide a basis to circumvent the conditions of the prohibition and strict exceptions for real time remote biometric identification.
It is worth mentioning (as Mario Guglielmetti points out in his very good summary of the use of facial recognition under the new Hungarian legislation) that during the discussion of the AI Act, Austria argued that the exceptions to the prohibition on the use of real-time remote biometric identification systems for law enforcement purposes should be narrowed, and that post-remote biometric identification should also be included in the prohibited AI practices (obviously with appropriate exceptions), because the impact on fundamental rights in this case is too high to be categorised as "only" a high-risk AI system.
6.4 Has it already become clear that the new EU AI Act is worthless?
Contrary to some preliminary views (see e.g. here and here), as shown above, the use of facial recognition in this case is unlikely to fall under the prohibition set out in the AI Act. This does not mean, however, that the AI Act will be completely inapplicable to such cases in the future: the requirements for high-risk AI systems must also be met, and compliance may be questionable even in the case of retrospective use of facial recognition systems where the purpose is to detect misdemeanors, especially as this also serves as a serious limitation on the functioning of a democratic society, severely restricting freedom of assembly and expression.
7. Is everything fine from a data protection point of view?
In connection with the use of facial recognition systems, it is of great importance that biometric data, i.e. data belonging to a special category of personal data, is processed during facial recognition. According to the GDPR and the Law Enforcement Data Protection Directive (Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, the "LED"),
‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.
Given that this data processing falls within the scope of the LED, the provisions of the Directive and of the Hungarian Data Protection Act ("Infotv.", Act CXII of 2011), which implements the LED in Hungary, apply to the assessment of the lawfulness of the processing.
The preamble to the AI Act (see Recital (94)) explicitly refers to this aspect: "Any processing of biometric data involved in the use of AI systems for biometric identification for the purpose of law enforcement needs to comply with Article 10 of Directive (EU) 2016/680, that allows such processing only where strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject, and where authorised by Union or Member State law. Such use, when authorised, also needs to respect the principles laid down in Article 4 (1) of Directive (EU) 2016/680 including lawfulness, fairness and transparency, purpose limitation, accuracy and storage limitation." (emphasis added)
As the above-mentioned opinions of the Hungarian DPA show, and as the preamble of the AI Act indicates, it is of paramount importance that, in the case of processing operations posing a particularly high risk to data subjects, data protection rules should not only be complied with formally; particular attention must also be paid to the purpose, necessity, proportionality and fairness of the processing. From a data protection point of view, this is the main problem in the present case: the processing goes beyond the limits of necessity and proportionality, and thus the fairness of the processing is called into question.
What does the European Data Protection Board (EDPB) say?
The Board adopted Guidelines on the use of facial recognition technology in the area of law enforcement (Guidelines 05/2022). In the Guidelines, the Board details a number of aspects based on which the use of facial recognition systems in the case of misdemeanors, especially for mass identification, is not acceptable from a data protection point of view.
Examining certain theoretical scenarios (see in particular Scenario 5), the EDPB states:
"The scenarios entail the monitoring of every passers-by in the respective public space. Thus, it severely affects the populations’ reasonable expectation of being anonymous in public spaces. This is a prerequisite for many facets of the democratic process, such as the decision to join a civic association, visit gatherings and meet people of all social and cultural backgrounds, participate in a political protest and visit places of all kinds. The notion of anonymity in public spaces is essential to gather and exchange information and ideas freely. It preserves the plurality of opinion, the freedom of peaceful assembly and freedom of association and the protection of minorities and supports the principles of separation of powers and checks and balances. Undermining the notion of anonymity in public spaces can result in a severe chilling effect on citizens. They may refrain from certain behaviours which are well within the remits of a free and open society. This would affect the public interest, as a democratic society requires the self-determination and participation of its citizens in the democratic process." (see Scenario 5, Section 5.3, p. 49, emphasis added)
The Board also adds that "[t]he installation of a system that enables uncovering the very core of the individual’s behaviour and characteristics leads to strong chilling effects. It makes people question whether to join a certain manifestation, thus damaging the democratic process. Also meeting and being seen in public with a certain friend known as having trouble with police or behaving in a unique way might be seen as critical, since all of this would lead to the attraction of the system’s algorithm and thus of law enforcement." (see Scenario 5, Section 5.3, pp. 49-50, emphasis added)
The Board concludes that "[t]he aforementioned scenarios concerning remote processing of biometric data in public spaces for identification purposes fail to strike a fair balance between the competing private and public interests, thus constituting a disproportionate interference with the data subject’s rights under Articles 7 and 8 of the Charter." (see Scenario 5, Section 5.4, p. 51, emphasis added)
Unfortunately, the Hungarian legislation seems to be implementing the negative scenario set out in the EDPB's guidelines.
8. What can be the basis of the actions against the use of facial recognition in connection with the detection and punishment of misdemeanors?
Apparently, the legal framework has been shaped in such a way that it would be lawful, at least "on paper", to use facial image analysis (facial recognition) in the case of simple misdemeanors, even where the given misdemeanor is not punishable by imprisonment, and in a way that actually promotes the restriction of fundamental rights (such as the right of assembly and freedom of expression).
However, in addition to the AI Act, as the Commission spokesperson also pointed out, data protection regulation can be a barrier to the abusive use of new technologies, which also violates the principle of fair data processing. As noted above, the Hungarian DPA and the EDPB have already drawn attention to these aspects. (However, I have not found any evidence that the Hungarian DPA has expressed an opinion on the amendment that was adopted.)
It is interesting to mention that the opinion of the Hungarian DPA on the "Data protection requirements related to drones planned to be used by public space supervision" highlights that "[...] the Authority emphatically draws your attention to the fact that in the case of data processing carried out by the public space supervision in public areas for law enforcement purposes falling within the scope of the Infotv (the activities of the public space surveillance aimed at maintaining public safety or crime prevention, exercising the related authority powers and conducting misdemeanor proceedings), the lawfulness of the processing of personal data that may be realised with the use of drones cannot always be guaranteed [...]". (NAIH-5255-2/2022, Point X, p. 8) This may also be true in the case of mass facial image analysis...
The Charter of Fundamental Rights of the European Union defines a number of fundamental rights, the violation of which may also arise in connection with the restriction of the right of assembly and the related facial image analysis used to detect misdemeanors. Fundamental rights such as respect for private and family life (Article 7), protection of personal data (Article 8), freedom of expression and information (Article 11), freedom of assembly and association (Article 12) may be affected.
In addition to the EU Charter of Fundamental Rights, the European Convention on Human Rights (ECHR) also protects the above fundamental rights, and on this basis a ruling has already been made against Russia in connection with the use of facial recognition technology. In its judgment of 4 July 2023 in Glukhin v. Russia, the European Court of Human Rights (ECtHR) found that there had been a violation of Article 8 (Right to respect for private and family life) and Article 10 (Freedom of expression) of the ECHR in relation to the use of facial recognition technology. The background to the case was that Mr Glukhin held an unannounced demonstration, and the Russian authorities used facial recognition technology to identify and convict him. According to the Court,
the use of highly intrusive facial recognition technology in the context of the applicant’s exercising his Convention right to freedom of expression is incompatible with the ideals and values of a democratic society governed by the rule of law, which the Convention was designed to maintain and promote. The processing of the applicant’s personal data using facial recognition technology in the framework of administrative-offence proceedings – firstly, to identify him from the photographs and the video published on Telegram and, secondly, to locate and arrest him while he was travelling on the Moscow underground – cannot be regarded as "necessary in a democratic society". (emphasis added)
Further reading on the topic of FRT and fundamental rights:
European Union Agency for Fundamental Rights (FRA): "Facial recognition technology: fundamental rights considerations in the context of law enforcement" (2020)
9. What can be done?
Based on the above, the use of facial recognition technology for the purpose of detecting and punishing misdemeanors, thereby restricting the rights of assembly and free expression, is questionable from several angles. Although, as we have seen, the rules on prohibited AI practices (Article 5 of the AI Act) do not directly exclude the post-remote use of facial recognition technology, there are nevertheless a number of fundamental rights considerations that can render the application of the system unlawful. Thus, the primary avenue for enforcing rights may be to invoke these violations of fundamental rights.
10. Conclusion
In the above, I have examined from several angles how the use of facial recognition technology may be assessed in the context of the restriction of the fundamental right of assembly. The legislation unnecessarily and disproportionately restricts the rights to respect for private and family life and to the protection of personal data, while also restricting freedom of expression and assembly. As the judgment in Glukhin v. Russia shows, there is already precedent before the European Court of Human Rights that such use of facial recognition systems, as a highly intrusive technology, cannot be regarded as "necessary in a democratic society".