A "rogue employee" was behind Grok's unprompted "white genocide" mentions
Elon Musk's artificial intelligence company on Friday said a "rogue employee" was behind its chatbot's unsolicited rants about "white genocide" in South Africa earlier this week.
The clarification comes after Grok, the chatbot from Musk's xAI that is available through his social media platform, X, began bombarding users with unfounded genocide theories in response to queries about completely unrelated subjects.
In a post on X, the company said the "unauthorized modification" in the early morning hours Pacific time pushed the chatbot to "provide a specific response on a political topic" that violates xAI's policies. The company did not identify the employee.
"We have conducted a thorough investigation and are implementing measures to enhance Grok's transparency and reliability," the company said in the post.
To do so, xAI says it will openly publish Grok's system prompts to ensure more transparency. Additionally, the company says it will install "checks and measures" to make sure xAI employees can't alter prompts without preliminary review. The company will also have a monitoring team in place 24/7 to address issues that aren't caught by its automated systems.
Nicolas Miailhe, co-founder and chief executive of PRISM Eval, an AI testing and evaluation start-up, told CNN that xAI's proposed remedy is a mixed bag. "More transparency is generally better on this given the nature of the bot and platform (media)," Miailhe said. "Though detailed info about the system prompting can also be used by malicious actors to craft prompt injection attacks."
Musk, who owns xAI and currently serves as a top White House adviser, was born and raised in South Africa and has a history of arguing that a "white genocide" was committed in the nation. The billionaire media mogul has also claimed that white farmers in the country are being discriminated against under land reform policies that the South African government says are aimed at combating the fallout of apartheid.
Less than a week ago, the Trump administration allowed 59 white South Africans to enter the U.S. as refugees, claiming they had been discriminated against, while simultaneously suspending all other refugee resettlement.
Per Grok itself, the "white genocide" responses occurred after a "rogue employee at xAI tweaked my prompts without permission on May 14," allowing the chatbot to "spit out a canned political response that went against xAI's values."
Notably, the chatbot declined to take ownership of its actions, saying, "I didn't do anything, I was just following the script I was given, like a good AI!" While it's true that a chatbot's responses are shaped by the prompts and instructions it is given, the dismissive admission underscores the danger of AI, both in disseminating harmful information and in playing down its part in such incidents.
When CNN asked Grok why it had shared answers about "white genocide," the AI chatbot again pointed to the rogue employee, adding that "my responses may have been influenced by recent discussions on X or data I was trained on, but I should have stayed on topic."
Over two years have passed since OpenAI's ChatGPT made its splashy debut, opening the floodgates on commercially available AI chatbots. Since then, a litany of other AI chatbots, including Google's Gemini, Anthropic's Claude, Perplexity, Mistral's Le Chat, and DeepSeek, have become available to U.S. adults.
One recent survey shows that most Americans are using multiple AI-enabled products weekly, regardless of whether they're aware of the fact. But another recent study shows that only "one-third of U.S. adults say they have ever used an AI chatbot," while 59% of U.S. adults don't think they have much control over AI in their lives.
CNN asked xAI whether the "rogue employee" has been suspended or terminated, as well as whether the company plans to reveal the employee's identity. The company had not responded at the time of publication.