A/80/278
and culturally diverse teams as a minimum measure to reduce bias in the design of AI
systems and operations.
63. Third, AI systems must be programmed to prevent and redress any
discrimination and bias. Understanding and mitigating biases is an essential condition
for the deployment of more ethical and equitable AI systems. System transparency is
required, as only through knowledge of the data used by AI can these biases be
reduced. Improving the explainability of these systems is also crucial for an
understanding of how decisions are made using AI tools. For this reason, the
involvement of sociologists and human rights experts is important. However, these
processes have been completely left in the hands of technicians and technology
experts who have very limited understanding and knowledge of human rights
obligations and standards regarding discrimination and stereotypes.
64. Fourth, States must ensure that they pay particular attention to groups that may
be more vulnerable or affected, including children, minorities, Indigenous Peoples,
women and persons with disabilities. They must ensure that effective measures are in
place to protect the cultural rights and the free creativity of these groups.
65. Fifth, even more important is the active and meaningful participation of the
individuals and communities whose data has been used, the effective participation of
minorities, and the free, prior and informed consent of Indigenous Peoples and local
populations. When tools are intended for use by specific groups, particularly
minorities and Indigenous Peoples, data sets should be developed with the
participation of the concerned groups.
66. Sixth, due to a power imbalance between various stakeholders, human rights
impact assessments are essential prior to the deployment of models and could also
help to address systemic bias. Benefit-sharing processes can help to compensate
communities and creators for the use of their work by AI. These are important
measures that States should take to fulfil their legal obligations regarding cultural
rights.
67. Finally, AI tools must never be developed without control and evaluation by
humans. Artificial intelligence is not true intelligence, but rather machine learning,
an automatic and repetitive copy or imitation of human creativity. It should not be
allowed to develop uncritically. It needs to be subject to revision and revisability, and
the impacts of its processes need to be accounted for. States must ensure that they
have clear lines both for the periodic and inclusive evaluation of such tools and
for accessible remedies to redress injustices and violations committed.
IV. Conclusions and recommendations
68. Artificial intelligence can improve the life of all only if it is critically
assessed and consciously channelled towards the respect, protection and exercise
of all human rights. If its uses and impacts are not critically assessed and
controlled, AI will limit internationally recognized human rights, dehumanize
social interactions and work environments and infringe on human dignity. So
far, the impact of AI on cultural rights has been side-stepped, and the protection
of cultural rights in AI has not been effectively regulated. The possibility that AI
systems may violate the freedom to develop and engage in creative activity and
the right to take part in cultural life is not hypothetical: it is currently happening,
unfolding insidiously. There is an urgent need to take a step back from the
incessant fascination with AI and recognize the multifaceted ways in which its
use can irreparably erode human creativity. It is time that States adopted