A/HRC/48/Add.xx
social media exploitation capabilities for the purposes of scrutinizing visa applicants and visa
holders before and after they arrive in the US.172 Submissions also raised concerns about the
US government’s consideration of technologies whose goal was “determinations via
automation” regarding whether an individual applying for or holding a US visa was likely to
become a “positively contributing member of society” or intended “to commit criminal or
terrorist attacks.”173 One submission noted in particular the use in the US of risk assessment
tools in immigration detention decisions, including one using an algorithm set to always
recommend immigration detention, regardless of an individual’s criminal history.174
59. All this points to a trend in immigration surveillance, where predictive models use
artificial intelligence to forecast whether people with no ties to criminal activity will
nonetheless commit crimes in the future. Yet these predictive models are prone to creating
and reproducing racially discriminatory feedback loops.175 Furthermore, racial bias is already
present in the datasets on which these models rely.176 When discriminatory datasets are
treated as neutral inputs, they lead to inaccurate models of criminality which then “perpetuate
racial inequality and contribute to the targeting and over-policing of non-citizens.”177
60. The response to the COVID-19 pandemic has led to a rapid increase in “bio-surveillance”: the monitoring of an entire population’s health and behaviour on an
unprecedented scale, facilitated by emerging digital technologies.178 As States increasingly
move toward a bio-surveillance system to combat the pandemic, there has been an increase
in the use of digital tracking, automated drones, and other technologies “purporting to help
manage migration and stop the spread of the virus.”179 There is an outsize risk that these
technologies will enable further discrimination on the basis of race, ethnicity and citizenship
status.180
IV. Recommendations
61. The Special Rapporteur recalls her previous report to the Human Rights Council
and reminds Member States of the applicable international human rights obligations,
in particular:
(a) The scope of legally prohibited racial discrimination in the design and use
of emerging digital technologies;
(b) Obligations to prevent and combat racial discrimination in the design and
use of emerging digital technologies; and
(c) Obligations to provide effective remedies for racial discrimination in the
design and use of emerging digital technologies.
62. The Special Rapporteur reiterates the analysis and recommendations in her
previous report regarding the obligations of States and non-State actors and urges
States to consider them alongside the recommendations included herein. In the specific
context of border and immigration enforcement, she recommends that Member States:
63. Address the racist and xenophobic ideologies and structures that have
increasingly shaped border and immigration enforcement and administration. The
effects of technology are in significant part a product of the underlying social, political
and economic forces driving the design and use of technology. Without a fundamental
shift away from racist, xenophobic, anti-migrant, anti-stateless and anti-refugee
172 Mijente, Submission citing https://www.nytimes.com/2019/10/02/magazine/ice-surveillance-deportation.html.
173 Ibid.
174 MRG, Submission.
175 Mijente, Submission.
176 Ibid.
177 Ibid.
178 https://www.newstatesman.com/science-tech/2020/03/rise-bio-surveillance-state.
179 https://edri.org/wp-content/uploads/2020/11/Technological-Testing-Grounds.pdf.
180 Ibid.