A/80/278
creators, AI tools do not support the dissemination of their works or make them more
accessible to the public.
31. While the digital divide persists in terms of material access to digital devices
and connectivity, access to the knowledge and skills required to use them, and the
benefit derived from digital technologies, 50 a creative divide is worsening existing
inequalities. 51 In addition, the protections offered by existing or emerging legal
frameworks are inconsistent. Brazil, in its submission, noted that “creators from low-income backgrounds and the Global South suffer disproportionately from the negative
impacts of AI disruption on cultural economies, while lacking access to the legal
protections enjoyed in more privileged contexts.” 52
32. Ultimately, the creative divide amplifies the existing imbalance in the
representation of diverse cultures in AI-generated content: “Those with access to
compute power, data infrastructure, and dominant languages disproportionately shape
the outputs and aesthetics of generative AI, often marginalising other cultural
perspectives”. So far, United Nations bodies have been slow to address such threats that AI poses to creativity. A clear emphasis on such violations is important, as “without
careful attention to these asymmetries, AI risks amplifying existing inequities in
whose creativity is recognised, valued, and preserved.” 53
33. The recommendation by UNESCO on the ethics of AI calls on AI actors to
“make all reasonable efforts to minimize and avoid reinforcing or perpetuating
discriminatory or biased applications and outcomes throughout the life cycle of the
AI system to ensure fairness of such systems.” 54 However, the recommendation proposes no concrete measures, nor does it provide any monitoring mechanism.
E. Artificial intelligence content, bias and discrimination
34. AI tools are not neutral; they are the products of political, technical, linguistic
and economic decisions. 55 AI cultural outputs reflect dominant norms and their
recommendation systems produce distorted results, either because they are shaped by
profit-driven models or because they reflect data gaps. Because AI tools uncritically reproduce the data on which they have been trained, it is no surprise that
their content is partial, stereotypical and discriminatory. They generate content at a
scale and pace not seen before and in a manner that is often unrepresentative of the
diversity of cultural identities, heritages or languages. This cultural bias is part of a
broader ethical challenge: AI systems – whether generative, predictive or decision-making – tend to reinforce and even exacerbate existing social, cultural, economic
and political inequalities.
35. Particular attention must be paid to the effect that AI has on minorities,
Indigenous Peoples and other marginalized groups. The underrepresentation of data
from these groups in the training of models results in biased outputs that fail to
accurately reflect the identities of these groups. Their cultures, values, knowledge,
narratives, aesthetics and diverse artistic expressions are either absent 56 or
misrepresented in AI-assisted or AI-generated creations. In sectors or platforms
__________________
50 Submission by Fundación para la Democracia, p. 1.
51 See the Fair Culture Charter, principle 6, which states that “Equitable access to digital tools, digital literacy, skills, and capacities, along with allocating resources to bridge digital gaps, are critically needed as well”.
52 Submission by Brazil, p. 2.
53 Submission by Eva Nieto McAvoy, p. 3.
54 UNESCO, “Recommendation on the ethics of artificial intelligence”, 2022, para. 29.
55 Submission by Nicolás Madoery, FUTURX, p. 2.
56 Submission by Anna Su, p. 2.