Only a small fraction of the approximately 6,000 languages in the world have high-quality data resources that can be used to train artificial intelligence models. To address that gap, companies have begun to develop multilingual language models. However, multilingual models do not perform as well as English-language models. The use of large language models in educational settings could therefore disadvantage students from linguistic backgrounds that are not represented in the underlying data resources, which may have racially disproportionate impacts.71
48. There are debates about whether generative artificial intelligence tools based on large language models should be banned for students or integrated into curricula. In some educational settings, steps have been taken to restrict students’ use of such tools. Some educational institutions are also using artificial intelligence tools to detect the use of artificial intelligence by students. The use of such detection tools, which may themselves contain algorithmic bias, to police cheating may introduce further biases that harm students from marginalized racial and ethnic groups. Such harm is likely to be exacerbated in cases in which institutions have not established equitable appeals processes.72
(d) Facial recognition in educational institutions
49. Facial recognition technologies have been introduced in many educational settings around the world, despite evidence of racial bias in their operation, as described above. Facial recognition systems are being used to automate attendance-taking, to enhance school security, to perform examination proctoring functions and even to record the emotions of children in schools to monitor how much they are learning, often without adequate human rights due diligence or regulatory oversight. For example, in Brazil, an increasing number of schools are adopting facial recognition tools to streamline operations, track attendance and enhance security.73 However, it has been reported that neither municipalities nor states conducted human rights impact assessments or analysed the risks of discrimination associated with facial recognition software before implementing those projects.74
50. The use of facial recognition software in educational settings is having racially discriminatory impacts. There have been cases, including one reported in the Kingdom of the Netherlands, in which students of African descent have had to shine lights on their faces to be recognized by the artificial intelligence systems used to mediate access to important examinations. Such experiences not only undermine students’ equal right to education but also create friction and exclusion when students from marginalized racial and ethnic groups are given the impression that the system was not designed for them. The recording and monitoring of children’s emotions in schools has significant privacy implications for all students and can perpetuate racial bias. Such systems have been found to interpret the facial expressions of individuals of African descent and white individuals differently, attributing negative feelings, such as contempt and anger, more frequently to those of African descent.75
71 Felix Richter, “The most spoken languages: on the Internet and in real life”, Statista, 21 February 2024; Emily M. Bender, “The #BenderRule: on naming the languages we study and why it matters”, The Gradient, 14 September 2019; Gabriel Nicholas and Aliya Bhatia, “Lost in translation: large language models in non-English content analysis”, Center for Democracy and Technology, 23 May 2023; A. Bergman and Mona Diab, “Towards responsible natural language annotation for the varieties of Arabic”, in Findings of the Association for Computational Linguistics: ACL 2022 (Association for Computational Linguistics, 2022); and BigScience Workshop, “BLOOM: a 176B-parameter open-access multilingual language model” (arXiv, 2022).
72 See Regina Ta and Darrell M. West, “Should schools ban or integrate generative AI in the classroom?”, Brookings Institution, 7 August 2023; and Robert Topinka, “The software says my student cheated using AI. They say they’re innocent. Who do I believe?”, The Guardian, 13 February 2024.
73 InternetLab submission.
74 Ibid.
75 Ibid.