A ‘rogue employee’ was behind Grok’s unprompted ‘white genocide’ mentions


Elon Musk’s artificial intelligence company on Friday said a “rogue employee” was behind its chatbot’s unsolicited rants about “white genocide” in South Africa earlier this week.

The clarification comes less than 48 hours after Grok — the chatbot from Musk’s xAI that is available through his social media platform, X — began bombarding users with unfounded genocidal theories in response to queries about completely off-topic subjects.


In an X post, the company said the “unauthorized modification” in the extremely early morning hours Pacific time pushed the AI-imbued chatbot to “provide a specific response on a political topic” that violates xAI’s policies. The company did not identify the employee.

“We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability,” the company said in the post.

To do so, xAI says it will openly publish Grok’s system prompts on GitHub to ensure more transparency. Additionally, the company says it will install “checks and measures” to make sure xAI employees can’t alter prompts without preliminary review. And the AI company will also have a monitoring team in place 24/7 to address issues that aren’t tackled by the automated systems.

Nicolas Miailhe, co-founder and chief executive of PRISM Eval — an AI testing and evaluation start-up — told CNN that X’s proposed remedy is a mixed bag. “More transparency is generally better on this given the nature of the bot and platform (media),” Miailhe said. “Though detailed info about the system prompting can also be used by malicious actors to craft prompt injection attacks.”

Musk, who owns xAI and currently serves as a top White House adviser, was born and raised in South Africa and has a history of arguing that a “white genocide” was committed in the nation. The billionaire media mogul has also claimed that white farmers in the country are being discriminated against under land reform policies that the South African government says are aimed at combating apartheid fallout.

Less than a week ago, the Trump administration allowed 59 white South Africans to enter the U.S. as refugees, claiming they’d been discriminated against, while simultaneously also suspending all other refugee resettlement.

Per a Grok response to xAI’s own post, the “white genocide” responses occurred after a “rogue employee at xAI tweaked my prompts without permission on May 14,” allowing the AI chatbot to “spit out a canned political response that went against xAI’s values.”

Notably, the chatbot declined to take ownership over its actions, saying, “I didn’t do anything — I was just following the script I was given, like a good AI!” While it’s true that chatbots’ responses are predicated on approved text responses anchored to their code, the dismissive admission emphasizes the danger of AI, both in disseminating harmful information and in playing down its part in such incidents.

When CNN asked Grok why it had shared answers about “white genocide,” the AI chatbot again pointed to the rogue employee, adding that “my responses may have been influenced by recent discussions on X or data I was trained on, but I should have stayed on topic.”

Over two years have passed since OpenAI’s ChatGPT made its splashy debut, opening the floodgates on commercially available AI chatbots. Since then, a litany of other AI chatbots — including Google’s Gemini, Anthropic’s Claude, Perplexity, Mistral’s Le Chat, and DeepSeek — have become available to U.S. adults.

A recent Gallup poll shows that most Americans are using multiple AI-enabled products weekly, regardless of whether they’re aware of the fact. But another recent study, this one from the Pew Research Center, shows that only “one-third of U.S. adults say they have ever used an AI chatbot,” while 59% of U.S. adults don’t think they have much control over AI in their lives.

CNN asked xAI whether the “rogue employee” has been suspended or terminated, as well as whether the company plans to reveal the employee’s identity. The company did not respond at the time of publication.