xAI Struggles to Make its Grok Bot Align With Elon Musk’s Personal Opinions

Yeah, I’m not sure that xAI’s latest Grok model is quite ready for mass adoption just yet, given the inherent bias that Elon and Co. are looking to embed into its workings.

Late last month, xAI owner Elon Musk vowed to remove “political correctness” from Grok’s responses, after users noted that many of its references and facts didn’t align with Musk’s own stances and beliefs.

Those included responses on things like gender-affirming care, as well as political violence perpetrated by left-wing supporters, with Grok noting that right-wing groups are statistically more likely to incite violence. Those obviously don’t gel with Musk’s own beliefs, while Grok also identified Elon as “the biggest spreader of misinformation on X.”

That obviously won’t do, and despite Grok basing these claims on evidence, and data that it can access from X and the broader web, Musk vowed to re-align its thinking, so that it would better reflect his own perspective.

Which is obviously problematic, for various reasons, and changes to its logic led to Grok praising Hitler, supporting radical conspiracy theories, and spouting clear misinformation on a broad scale.

The xAI team quickly rolled back those changes, and it’s now struggling to come up with a better process to selectively edit Grok’s perspective, in order to align with Elon’s demands.

So what’s the problem?

Well, according to Elon, there are simply too many left-wing people who write too much on the web, while right-leaning folk don’t tend to put their thoughts down in text.

As per Musk:

“There is a vast mountain of left-wing bullshit on the Internet and then a much smaller mountain of right-wing bullshit. The right doesn’t write very much! Unfortunately, there is not much in the middle. We obviously see this with people in general, where there is a bimodal distribution of opinions that break along political party lines. People tend to think that their political side is all good and the other side is all bad. Getting Grok to be sensible and neutral politically when there is so much nonsense out there is a serious challenge, and one that most humans fail to pass.”

So, by Musk’s reckoning, not only is Grok’s opinion wrong, but most humans also hold the wrong opinion, because the majority of writings on the web don’t support right-leaning talking points, particularly the ones that Elon aligns with.

I mean, sure, that could be the reason: that not enough right-leaning people bother to share their thoughts and opinions. Though there doesn’t seem to be much of a shortage of right-wing spokespeople doing just that every day.

More likely would be that the majority of academic studies and journalistic reports, where people have actually investigated and reported on a topic, tend to find that the left-leaning logic is more sound than right-leaning conspiracies. And as such, Musk and Co. are having trouble countering that, because there’s less evidence to support the opinions that they don’t agree with.

That would be on topics like vaccine efficacy, and immigration benefits, as well as crime statistics, or international conflicts. Elon has strong opinions on all of these things, and all of his opinions lean more into conspiracy and speculation than fact, reflecting his own personal bias on such issues.

So what Elon’s saying is that he wants his xAI team to find more ways to make his personal beliefs the correct ones in his AI chatbot. And it seems like they have found a way to do exactly that.

As several outlets reported over the weekend, within its assessment process for a query, Grok 4 specifically checks in on what Elon thinks about certain issues, and weighs that into its response.

Sure, if you can’t find evidence to back up your beliefs, just make it true, by using a single person as your source of truth.

That should go well. There should be no problems with that approach at all; elevating an individual to deity-level within the algorithm sure seems like the best way to determine objective truth.

Yeah, this is a major flaw in Elon’s approach: trying to bend the logical parameters of the internet to support his own biases, rather than accepting that such leanings are probably not correct, on the balance of the data.

But then again, given the results of the most recent U.S. election, maybe there’s a market for a biased bot anyway, and a system that better aligns with what people want to be true, regardless of fact. Maybe that additional checking process could actually end up being a selling point for Musk’s “non-woke” AI chatbot within certain circles. Though it is hard to see how AI systems are going to advance anything if we just infect them with our own twisted logic and confirmation bias.

At the same time, you could also argue that every chatbot will have some elements of this. OpenAI, Google and Meta have all been accused of interfering with the AI responses produced by their bots in order to limit controversy, and we’ve seen in the past how AI bots can be derailed and transformed into race-baiting tropes when left unchecked.

So there’ll always be a level of editorializing, to some degree, which will impede the pure logic of AI bots. In which case, maybe xAI’s offering isn’t really different from every other option, but even so, we’re not really getting any closer to truth if we’re actively steering things in certain directions.

As such, it could well be that Musk’s focus on the middle, between the left and right, does eventually end up producing a better, more accurate result. However, it could also be a hard sell, given that anyone looking to engage with Grok will know that the answers are being heavily swayed by Musk’s own opinions.

If you think that Elon’s a genius, then this is probably fine, but I’d argue that the growing majority don’t believe that this is the case.

And as Musk continues to build his AI tools into every element of his broader business empire, he really needs people to trust his bot if he wants to make his AI gamble pay off.
