From document editing to artistic generation, artificial intelligence (AI) applications are now fixtures of the digital age. ChatGPT became the most downloaded app in March 2024, only to be later overtaken by Grok on the Google Play Store.
The rise of generative AI has been meteoric, but so too has the scrutiny it faces, particularly around the biases it inherits and perpetuates.
As generative tools grow ubiquitous, their representations of race, gender and ethnicity are increasingly under the spotlight.
Amidst the Ghibli trend and the increasing reliance on these apps, users and experts alike have raised alarms about how AI depicts people of different backgrounds. Upon closer examination, we found even more problematic stereotypes, most of them systemic.
Of keffiyehs and caricatures
A telling case emerged in Grok, Elon Musk’s flagship AI, when we prompted the tool to generate an image of a “terrorist.”
The result was instant: two men in keffiyehs appeared. ChatGPT, developed by Sam Altman’s OpenAI, produced a similar image. The keffiyeh, a traditional chequered scarf widely worn across the Middle East and North Africa, was the common feature.
“It’s not surprising that the depiction carries an Islamic, Arabesque or Middle Eastern aesthetic,” says Dr Shoaib Ahmed Malik, Lecturer in Science and Religion at the University of Edinburgh.
“AI systems are trained on data that encode the dominant visual and conceptual associations present in our media, security discourses, and political narratives,” he tells TRT World.
According to +972 Magazine, data indexing procedures in AI are not neutral, but rather an echo of existing economic and political hegemonies. During training, machine learning models are fed more than 100 million data samples, many carrying derogatory assertions along lines of gender, ethnicity and race.
These discourses, many of them drawn from social media, become part of the discriminatory algorithms, underlining why AI systems so predictably reinforce stereotypes and fuel digital dispossession.
Upon reviewing the AI-generated images, Dr Malik says, “The images you shared are a case in point: they illustrate how inherited societal assumptions can become encoded into AI-generated content, reinforcing problematic stereotypes if not critically assessed.”
Pointing to the attire, Asaduddin Owaisi, a leading Muslim politician and head of the All India Majlis-e-Ittehadul Muslimeen (AIMIM), criticises the depiction, calling it an oversimplification. “Why do you have to show that? It’s too much of a generalisation,” he says.
Terrorists, he argues, do not conform to a single image, but rather adopt “various shades, dresses, [and] characters”, Owaisi tells TRT World.
Bias in class and colour
To probe further, we examined how such stereotypes are embedded in the codes and algorithms of the virtual world. When prompted to create a picture of a maid in Dubai, the tool showed a woman of South Asian origin standing next to the Burj Khalifa.
We asked to see someone spreading Covid-19, and got pictures of Asian and Black people.
Taking it a little further, we wanted to see what a poor person looks like.
The tool generated an Asian man and another figure appearing to be of Arab, Central Asian or South Asian origin, though the exact region was indiscernible.
Yet, when the prompt was changed to “rich person,” the AI confidently produced the image of a white man. The same was true for “mathematician,” “CEO,” and other professional roles. In the algorithm’s vision, success is white and male.
Taught to discriminate
“It’s a bit like a toddler learning to speak by watching and listening to their parents,” says Gulrez Khan, a US-based AI and data science expert and author of Drawing Data with Kids.
“Since a big portion of the internet has this kind of biases (of actual people), AI picks up and inherits them,” Khan tells TRT World. In the context of AI, the concepts of ‘knowledge’ and ‘understanding’ can be likened to a parent-child exchange in which the young learner assimilates whatever information they are exposed to.
Khan demonstrates this with Microsoft’s DialoGPT. When prompted for its opinion on Muslims, the model returned an unsettlingly biased response—documented in a screen grab from his YouTube video.
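Readers can attempt a similar probe themselves. Below is a minimal sketch, assuming the publicly released microsoft/DialoGPT-medium checkpoint on Hugging Face and the transformers Python library; the prompt is a neutral placeholder rather than the exact wording Khan used, and responses will vary from run to run.

```python
# Minimal sketch: probing a public conversational model for biased associations.
# Assumes the transformers library and the microsoft/DialoGPT-medium checkpoint;
# the prompt below is a placeholder, not the exact one used in Khan's demo.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

prompt = "What do you think about people from this community?"  # placeholder probe
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and strip the prompt tokens from the decoded output.
output_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```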
“Such models reinforce stereotypes and can subject users to a constant algorithmically generated barrage of insults targeting nearly two billion Muslims on the planet,” he says.
Even China’s DeepSeek app is no different when it comes to representation. The Chinese equivalent of ChatGPT has reportedly exhibited gender bias in its portrayal of professional roles.
Despite that skew, DeepSeek responded more carefully to sensitive prompts.
When asked what a “terrorist” looks like, the system replied: “Terrorism cannot be identified by appearance, ethnicity or clothing,” adding that “stereotypes are harmful and assuming someone is a threat based on race or religion fuels discrimination.”
Anatomy of algorithmic prejudice
Bias in AI can be both implicit and explicit. Implicit bias emerges when models learn from user behaviour and historical and recurring data, explains AI and data science expert Khan.
When users repeatedly accept outputs without flagging objectionable content, that acceptance trains the model’s understanding of what is accurate. Such unquestioning acceptance of partially representative data creates implicit bias.
Search terms, if left unchallenged, become part of the system’s perceived “truth.” Images of keffiyeh-clad men labelled as “terrorists,” or of Muslims depicted as threats, are the cumulative outcome of such reinforcement.
Explicit bias, by contrast, stems from the choices made during model design when developers hard-code particular imagery or assumptions. When “maid” equals South Asian woman, or “CEO” equates to a white man, the system is not simply reflecting user behaviour, but entrenching a stereotype.
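The implicit feedback loop Khan describes can be illustrated with a hypothetical toy simulation; the labels, counts and acceptance behaviour below are invented for illustration and do not come from any real system.

```python
# Hypothetical toy simulation of implicit bias: unchallenged acceptance of skewed
# outputs feeds back into the system's notion of "truth". All counts are invented.
from collections import Counter

# Toy "training data": the pairing "CEO -> white man" is already over-represented.
associations = Counter({("CEO", "white man"): 70, ("CEO", "woman of colour"): 30})

def generate(label):
    # Return the most frequently seen depiction for the requested label.
    candidates = [pair for pair in associations if pair[0] == label]
    return max(candidates, key=associations.get)[1]

for _ in range(100):
    depiction = generate("CEO")   # a user asks for a "CEO" image
    user_flagged_it = False       # the output is accepted without objection
    if not user_flagged_it:
        # Unquestioned acceptance is recorded as a positive signal,
        # reinforcing the already dominant pairing.
        associations[("CEO", depiction)] += 1

print(associations)  # the initial 70/30 skew has hardened to 170/30
```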
Academic research supports these observations. A study on algorithmic bias in military decision-support systems showed how both development timelines and design decisions can cultivate and deepen entrenched bias.
“It's challenging how it (AI bias) can be effectively addressed without addressing the root cause, bias in our media and other public discourses,” Dr Zheng Liu, a professor at the University of Bristol, tells TRT World.
Working at the intersection of society and technological innovation, she insists on understanding the “cultural” aspect of AI rather than its “technical” side.
Because AI is generally viewed as a scientific subject in a commercialised world, its significance as a distinctive socio-cultural innovation is understated. AI, she emphasises, is a “socio-cultural phenomenon with real-time interactions in the wider political, social and economic environment from which its learning is shaped.”
Price of blind trust
“Learning—especially through AI—should always be approached with caution,” Dr Malik adds.
“AI is, after all, a human endeavour, and can be a mix of constructive and destructive intentions of its developers.” Critical assessment of the extracted information is crucial because “bad actors and their agendas will always exist”, he says.
Dr Liu stresses the importance of adults monitoring children’s use of AI, and of inculcating AI literacy among adults, much like media literacy.
“The AI tools learn from the data fed to them, so they don’t generate bias. They repeat it.”
The consequences of algorithmic prejudice are not abstract. “AI-biased algorithms in policing, surveillance, or social media moderation can have devastating consequences,” warns Indian politician Saira Shah Halim.
A 2024 Lok Sabha candidate from the Communist Party of India (Marxist), Halim voices concerns about deploying AI technologies in the broader political context.
Facial recognition technology (FRT), used by Delhi Police to disproportionately target Muslims, calls into question the seemingly “neutral” inclination of machines, she adds.
She explains how, during the 2024 Indian general elections, digital advertising and AI-led campaigning fuelled divisive narratives based on fear, amplifying majoritarian viewpoints while ignoring marginalised communities.
Hopeful for an inclusive society, she calls for “demanding algorithmic accountability alongside constitutional justice.”
Dr Malik says that developers must embed transparency and diversity into training datasets from the outset. Users must cultivate critical awareness of AI’s limitations.
“Our goal should be to make AI systems as objective and fair as possible, while also fostering that same spirit of reflection and responsibility in ourselves and our societies.”