Elon Pledges to Retrain Grok Chatbot to Provide More Ideologically Aligned Answers

Yeah, this doesn’t seem great.

Over the weekend, X and xAI owner Elon Musk flagged a coming change to his Grok chatbot, in which the Grok development team will remove “politically incorrect, but nonetheless factually true” info from its data banks, in order to avoid the app essentially providing answers that Musk himself does not agree with.

The change has been coming for some time, due to X’s Grok chatbot repeatedly providing answers that counter Musk’s own opinions on certain topics. For example, Grok has told users that children should be allowed access to gender-affirming care, something Musk has been a vocal opponent of, while it’s also countered claims of political violence perpetrated by left-wing supporters (noting that there’s more evidence of right-wing attacks).

Grok has also named Musk himself as “the biggest spreader of misinformation on X,” among its various claims that have rankled its creator.

And over the weekend, Musk indicated that he’d had enough. After Grok referenced data from Media Matters and Rolling Stone, Musk responded by saying that Grok’s “sourcing is terrible,” and that “only a very dumb AI would believe MM and RS.”

Musk then followed that up with a post calling on X users to provide examples of “divisive facts for Grok training,” which has drawn over 100k responses from X’s audience, flagging information that the team is hoping to weed out of Grok’s information base.

Which will make Grok more aligned with right-wing talking points, and more ignorant of factual reporting and evidence. In other words, it’ll become an echo-chamber AI, and with more and more people relying on AI for answers to all kinds of questions, that seems like a significant concern for broader AI development.

Though on balance, Grok’s usage is fairly limited. ChatGPT reportedly has around 800 million active users, while Meta recently claimed that Meta AI is the most used AI chatbot in the world, with a billion monthly users.

Grok, by comparison, is only used by a portion of X’s 600 million monthly actives, and can only be fully accessed by paying users. So it’s not on the same level of influence as these other AI apps, but even so, the fact that Musk is openly stating that he’s editing its sources to better align with his own ideology is still a concern, particularly given recent issues with Grok’s answers.

Last month, Grok was found to be providing inaccurate answers about the death toll from the Holocaust, while also pushing random responses that included references to “white genocide” in South Africa, both of which are based on debunked conspiracy theories.

X claims that both errors were due to an unauthorized change to Grok’s code by a rogue employee, and that the process has now been updated to ensure more checks and balances are in place. But regardless of the reason, the incident underlined just how much sway xAI’s programmers can have over the chatbot’s responses, if they so choose, and with Elon himself very keen to discredit any sources that don’t agree with his opinions, that seems like a dangerous mix.

Indeed, the entire xAI project was founded based on Elon’s own opposition to other AI models, which he believes are being trained to be “woke.”

As reported by The Washington Post:

“In an April 2023 interview with Fox News, Musk said OpenAI had been ‘training the AI to lie’ by incorporating human feedback that directed the chatbot ‘not to say what the data actually demands that it say.’ He referred to his new project as ‘TruthGPT.’”

So, all along, the xAI project has been as much about political narratives as technological evolution, with Musk looking to angle his own AI projects more towards his own ideology, rather than objective truth based on web-sourced information.

And many of his supporters will agree with him. The COVID pandemic sparked a whole new anti-mainstream media movement, and the fact that AI tools are being trained on what are considered mainstream sources is an affront to this push.

As such, Musk’s anti-factual push will be viewed positively by many. Even if it is flawed, even if it leads to the expanded spread of misinformation.

Because truth, it seems, is what you make it.

It’s another example of the potential negatives of tech advances, which often get overlooked amid the broader hype.

The same can be said of every significant innovation: while there are positives and benefits to be gleaned, we also often overrate the gains that making such technology widely accessible will bring.

The internet, for example, is a revolution in learning, giving billions of people access to almost all of the information in the world, which should in turn make humanity more educated, more informed, and raise our baseline level of intelligence.

Yet, in the year 2025, debates about vaccine efficacy, climate change, and even the very shape of the Earth are more lively than ever.

Social media was supposed to connect the world, by enabling us to chat with anyone, anywhere, facilitating more togetherness, empathy and understanding. Yet it’s arguably done the opposite, by providing a means for intolerant, divisive and hateful groups to coalesce, connecting the worst elements of the world. The very algorithms that fuel social media engagement incentivize this, and it’s hard to argue that social media, as a concept, has been a net positive.

And now we have AI, the next great hope, which will democratize creativity, and facilitate new levels of human productivity, by providing machine-based assistance to improve our everyday process.

How do you think that’s going to work out in reality?

Sure, there will clearly be benefits, as there have been with these other advances, particularly from a business perspective. But the Utopian idea that AI is going to usher in a new golden age of creative, intelligent opportunity is really only the stuff of corporate pitch decks and boardroom discussions.

Go look at the AI content being shared online right now. Videos with racist undertones that you couldn’t create with human actors. AI nudes made in the likeness of people who haven’t given their permission. People passing off AI-generated work as their own, cheating their way into unearned opportunities.

This is not good. These are not good things being facilitated by this new technology, and history shows us that the worst elements of society benefit just as significantly, if not more so, from these advances.

Which brings us back to Elon, and his decision to essentially edit history to “improve” his chatbot. That’s a very bad precedent, and a very concerning shift, especially as more and more people rely on these AI tools for answers.

Do we really think that this will improve society, or make us smarter as a species?

And if the answer, on either front, is no, then why are we pushing AI into every single element of every app?


