otherwise structural problem, which consequently fuels the manifestation of inequality and
violent outcomes.
26. Gift Mwonzora, of the Willy Brandt School of Public Policy at the University of Erfurt,
focused on health and healthcare, nutrition, and food security. He presented findings from
his research and highlighted examples of precarity in the labour market as a result of
automation and digitalization in agriculture. In South Africa, the citrus farming (horticultural) sector employs a large share of the female labour force, in particular women of colour.
While fruit picking largely remains a manual activity involving human labour, the mechanization of some production processes, such as sorting and grading, has seen women losing, or fearing the loss of, their jobs. This further compounds the existing vulnerabilities arising from the casualization of labour, low wages and the seasonal nature of agricultural work.
Increasingly, fewer women are employed in such sectors as they are replaced by AI-driven production processes. In other sectors, the use of drone delivery systems in the medical field in Malawi and Rwanda was addressing some of the challenges of lack of access to healthcare in remote areas, constituting a positive use of AI and technology. However, ethical
concerns remain due to a lack of duty of care regarding people of African descent in medical
trials.
B. Racial bias in the technology sector
27. The technology sector has been criticized for its lack of diversity, favouring white,
affluent males. Large-scale AI systems are developed almost exclusively in a small number
of companies and elite university laboratories which engage mostly white males and have a
history of discrimination against and exclusion of ‘others’, including people of African
descent. Technology that is developed and produced in fields that disproportionately exclude people of African descent is more likely to reproduce racial inequalities.
28. The creation of AI systems begins with data — its extraction, organization, and
subsequent modelling. Each step in this process holds the potential to introduce or perpetuate
racial bias, significantly affecting healthcare outcomes for people from marginalized racial or ethnic groups. AI systems are trained on enormous quantities of data, mostly drawn from non-Black populations, which are used to build models of behaviour. The designers and developers of
machine learning and AI systems can therefore intentionally or unintentionally introduce
biases into their algorithms through the utilization of prebuilt models which contain racial
biases, as evident in some generative AI systems being unable to create accurate and realistic
depictions of Black people. How developers obtain such critical data raises ethical questions.
Data acquisition practices often lack transparency, with instances where data is obtained
without proper consent or through exploitative means.
29. Facial recognition software used by governments and the police disproportionately affects people of African descent: such systems learn and propagate biased associations between racial groups and negative attributes, exacerbating racial inequality. In 2015, for example, Google
had to apologize after its image-recognition app mistakenly labelled African Americans as
“gorillas”.
30. Surveillance practices dating from the times of enslavement and colonization persist to this day and can be, and have been, made worse by the use of AI, as research has consistently shown greater inaccuracies in facial recognition for non-white populations. This has already led to several dangerous situations for people of African descent, such as being falsely identified as suspects in crimes. Accounts of the disproportionate levels of harm from facial recognition
software experienced by people of African descent are well-known.
31. The lack of transparency and accountability in AI development exacerbates these
issues. Many AI systems are developed and deployed by private companies that do not
disclose their algorithms’ inner workings, citing proprietary concerns. This opacity makes it
difficult for independent researchers, policymakers, and the affected communities to
scrutinize and challenge biased algorithms. Without transparency, it is nearly impossible to
hold developers accountable for the adverse impacts of their technologies on marginalized
groups. Moreover, there are often no mechanisms in place to audit or regulate AI systems
effectively. Regulatory bodies lack the technical expertise and resources needed to assess the