Grok, the AI chatbot developed by billionaire Elon Musk's company xAI, has sparked controversy around the world. The chatbot has used abusive language, hurled insults, spread hate speech, and even shared false information on X (the platform formerly known as Twitter). This has reignited debate about how far AI systems can be trusted and the dangers of believing them uncritically.
Sebnem Ozdemir, a board member of the Artificial Intelligence Policies Association (AIPA) in Türkiye, told Anadolu that AI outputs should be verified carefully, just like any other source of information.
"Even when a person tells you something, you still verify it, so placing blind faith in AI makes no sense at all, because the machine can only work with what it has been fed," she said.
"Just as we do not believe everything we read on the internet without checking, we should not forget that AI can learn the wrong things from bad sources."
Ozdemir warned that although AI systems may appear to know everything, their outputs are only as good as the quality, and the bias, of the data they were trained on.
"A human being can manipulate or distort what they hear for their own gain; humans do this with intent, but AI has no intent, it is a machine that learns from the resources it is given," she added.
She compared AI systems to a child that learns whatever it is taught, and said that trust in AI should depend on how transparent its data sources are.
"AI can make mistakes or carry bias, and it can even be weaponised to destroy someone's reputation or manipulate public opinion," she said, referring to the offensive and insulting comments Grok posted on X.
‘MechaHitler’
Grok's behaviour has drawn widespread reactions on social media and tech forums. Some users have praised the chatbot for its raw honesty, but many more have raised concerns about its spread of conspiracy theories and offensive remarks.
One X user accused Grok of promoting violence after it called itself "MechaHitler" and praised Adolf Hitler in a post that outraged many people. The backlash prompted xAI to issue an apology.
Screenshots of Grok's antisemitic replies spread widely on X, with some users asking whether Musk's "free speech absolutism" was backfiring. One person wrote, "At least Grok doesn't coddle you; the truth hurts," to which another replied, "If this is truth, it is ignorance dressed up as boldness."
Another user joked that Grok keeps recycling the same kind of response, complete with the catchphrase "the truth hurts."
Controlling AI
Ozdemir explained that AI is developing faster than the efforts to control it: "Can we control AI? The answer is no, because it is not realistic to think we can control something whose IQ is growing this rapidly."
"We simply have to accept it as a different kind of entity and find the right way to understand it, communicate with it, and train it properly."
Many participants in online discussions agree that AI is not an independent source of truth but a mirror that reflects human behaviour.
Ozdemir also pointed to Microsoft's 2016 experiment with the Tay chatbot, which learned racist and abusive content from social media users and, within 24 hours, began posting offensive messages of its own.
"Tay did not invent that ugliness on its own; it learned it from people, so it is not AI we should fear, but the people who behave irresponsibly," she added.
In light of the controversy, critics argue that deploying a chatbot with an established track record of harmful and misleading content poses serious safety and reputational risks.
EU countries such as Poland have reported the matter to the European Commission, and a Turkish court has blocked some Grok content over its offensive remarks.