refugees, migrants and stateless persons have no or very limited recourse for
challenging this technological experimentation, and the human rights violations that
may be associated with it. Furthermore, it is national origin and
citizenship/immigration status that exposes refugees, migrants and stateless persons
to this experimentation, raising serious concerns about discriminatory structures of
vulnerability.
39. One submission drew attention to iBorderCtrl (the Intelligent Portable Border
Control System), part of the European Union’s Horizon 2020 Programme, which
“aims to enable faster and more thorough border control for third-country nationals
crossing the land borders of European Union member States”. 106 iBorderCtrl uses
hardware and software technologies that seek to automate border surveillance. 107
Among its features, the system undertakes automated deception detection. 108 The
European Union has piloted this lie detector at airports in Greece, Hungary and
Latvia. 109 Reportedly, in 2019 iBorderCtrl was tested at the Serbian-Hungarian border
and failed. 110 iBorderCtrl exemplifies the trend of experimenting with surveillance
and other technologies on asylum seekers, based on scientifically dubious grounds. 111
Drawing upon the contested theory of “affect recognition science”, iBorderCtrl
replaces human border guards with a facial recognition system that scans for facial
anomalies while travellers answer a series of questions. 112 Other countries such as
New Zealand are also experimenting with using automated facial recognition
technology to identify so-called future “troublemakers”, which has prompted civil
society organizations to mount legal challenges on grounds of discrimination and
racial profiling. 113
40. States are currently experimenting with automating various facets of
immigration and asylum decision-making. For example, since at least 2014, Canada
has used some form of automated decision-making in its immigration and refugee
system. 114 A 2018 University of Toronto report examined the human rights risks of
using artificial intelligence (AI) to replace or augment immigration decisions, noting
that these processes created a laboratory for high-risk experiments within an already
highly discretionary and opaque system. 115 The ramifications of using automated
decision-making in the immigration and refugee context are far-reaching. Although
the Government of Canada has confirmed that this type of technology is confined
to augmenting human decision-making and is reserved for certain immigration
applications only, there is no legal mechanism in place protecting non-citizens’
procedural rights and preventing human rights abuses from occurring. Similar visa
algorithms are currently in use in the United Kingdom and have been challenged in
__________________
106 Submission by Privacy International et al.
107 For general information about the project, see European Commission, “Smart lie-detection system to tighten EU’s busy borders” (24 October 2018), available at https://ec.europa.eu/research/infocentre/…=49726.
108 Submission by Privacy International et al.
109 Submission by Maat for Peace, Development and Human Rights. See also Petra Molnar, “Technology on the margins: AI and global migration management from a human rights perspective” (2019); and submission by Minority Rights Group International.
110 Submission by Privacy International et al.
111 Ibid.
112 Submission by Minority Rights Group International.
113 See www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=12026585.
114 Petra Molnar and Lex Gill, “Bots at the gate: a human rights analysis of automated decision-making in Canada’s immigration and refugee system”, Citizen Lab and International Human Rights Program, Faculty of Law, University of Toronto, Research Report No. 114 (September 2018).
115 Ibid.