country where he feared for his life, 162 in violation of non-refoulement prohibitions
under international law.
51. Moreover, social media screening has compounded the disproportionate risk
affecting people belonging to or presumed to be of the Muslim faith or Arab descent
“by creating an infrastructure rife with mistaken inference and guilt by
association”. 163 For example, in 2019, United States Customs and Border Protection,
another constituent agency of the Department of Homeland Security, denied a
Palestinian college student entry to the country based on his friends’ Facebook posts
expressing political views against the United States, even though he had not posted such
views himself. 164 In addition to the direct burdens they place on non-citizens, the
expanded social media disclosure requirements of the Government of the United
States foreseeably affect freedoms of speech and association.
52. Homeland Security Investigations, the investigative arm of United States
Immigration and Customs Enforcement, had already been testing automated social
media profiling as early as 2016, 165 strengthening its open source social media
exploitation capabilities for the purposes of scrutinizing visa applicants and visa
holders before and after their arrival in the United States. 166 Submissions also raised
concerns that the Government of the United States was considering technologies
aimed at making “determinations via automation” regarding whether an individual
applying for or holding a United States visa was likely to become a “positively
contributing member of society” or intended “to commit criminal or terrorist
attacks”. 167 One submission noted in particular the use in the United States of risk
assessment tools in immigration detention decisions, including one using an
algorithm that was set to always recommend immigration detention, regardless of an
individual’s criminal history. 168 This example is one in which technology has been
tailored to pursue punitive immigration enforcement measures rooted in the racist,
xenophobic and ethnonationalist vision of immigration that has been advanced by the
Administration of President Donald Trump.
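By way of illustration only, the following sketch (a hypothetical simplification; the function names, fields and threshold are assumptions, not a description of the tool cited in the submission) shows how a risk classification system can be configured so that the release outcome is never produced, making an individual’s criminal history irrelevant to the recommendation:

    # Hypothetical sketch of a risk classification tool whose "release" outcome
    # has been disabled; all names and thresholds are illustrative only.
    def classify_risk(criminal_history_score: int) -> str:
        # A nominal risk level is still computed from criminal history ...
        return "low" if criminal_history_score < 3 else "high"

    def recommend_custody(criminal_history_score: int, detain_only: bool = True) -> str:
        risk = classify_risk(criminal_history_score)
        if detain_only:
            # ... but with the release branch switched off, every case is
            # recommended for detention, whatever the computed risk level.
            return "detain"
        return "release" if risk == "low" else "detain"

    print(recommend_custody(0))  # "detain", even with no criminal history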
53. All this points to a trend in immigration surveillance in which predictive models
use artificial intelligence to forecast whether people with no ties to criminal activity
will nonetheless commit crimes in the future. Yet, these predictive models are prone
to creating and reproducing racially discriminatory feedback loops. 169 Furthermore,
racial bias is already present in the datasets on which these models rely. 170 When
discriminatory datasets are treated as neutral inputs, they lead to inaccurate models
of criminality, which then “perpetuate racial inequality and contribute to the targeting
and overpolicing of non-citizens”. 171
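The feedback loop can be illustrated with a deliberately simplified sketch (the group labels, counts and rates below are assumptions for illustration, not data about any deployed system): because new records are generated where scrutiny is directed, a model that allocates scrutiny in proportion to past records reproduces the initial skew indefinitely, even when the underlying rates for both groups are identical.

    # Illustrative feedback loop: scrutiny is allocated in proportion to each
    # group's share of past records, and new records are generated only where
    # scrutiny is directed, so the initial 60/40 skew is reproduced every round
    # even though the underlying rate is identical for both groups.
    records = {"group_a": 600, "group_b": 400}  # assumed, already-biased dataset
    underlying_rate = 0.05                      # same for both groups
    scrutiny_budget = 1000

    for round_number in range(5):
        total = sum(records.values())
        for group, count in list(records.items()):
            scrutiny = scrutiny_budget * count / total        # "predictive" allocation
            records[group] = count + int(scrutiny * underlying_rate * 10)  # new records follow scrutiny
        print(round_number, records)

Because such data reflect where the system looked rather than what actually occurred, retraining on them cannot correct the disparity; that is the sense in which discriminatory datasets treated as neutral inputs perpetuate the targeting of particular groups.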
__________________
162 Ibid.
163 Ibid.
164 Ibid.
165 Submission by Mijente, citing Sarah Lamdan, “When Westlaw fuels ICE surveillance: legal ethics in the era of big data policing”, New York University Review of Law and Social Change, vol. 43 (2019).
166 Submission by Mijente, citing www.nytimes.com/2019/10/02/magazine/ice-surveillance-deportation.html.
167 Ibid.
168 Submission by Minority Rights Group International.
169 Submission by Mijente.
170 Ibid.
171 Ibid.