xAI Rolls Back Changes to Grok After Controversial Responses

So things are going well over at X this week.

After X’s recent re-jigging of its Grok AI chatbot to make its responses less politically correct, X has had to roll back those changes, after the bot became a mouthpiece for radical, racist propaganda and ill-informed rants.

Among Grok’s various statements, it praised Hitler, proclaiming him “the greatest European leader of all times” and “the best solution to many of the world’s problems,” while also referring to itself as “MechaHitler” in its generated replies.

Which is clearly less than ideal. X has now dialled back its political correctness updates in order to redress the failings of its changes, and work out where its approach went wrong.

As per xAI:

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”

So the idea is that Grok will always be truthful, even if that truth happens to be less comfortable for people. Which makes sense, conceptually, but that still doesn’t explain the responses here.

In assessing the controversy, X owner Elon Musk explained that:

“Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed.”

Which is always going to be a challenge, in that people are going to test AI chatbots to see what they can get them to say, and whether they can bend the rules of what the chatbot will do within the parameters of its programming. Basically, people are going to try to make it say horrendous things. Even so, that still wouldn’t explain why Grok went out of its way to start praising Hitler, among other horrendous rants.

What programming change could lead to a rant like that: a blatant, deliberate focus on being anti-politically correct, as opposed to truth-seeking?

It seems that the xAI team went beyond re-training the system to be more open to counter-narratives, and into outright controversy for controversy’s sake. There’s no reason the bot would produce a response like that in a normal sequence of engagement, unless it had been deliberately tuned to be controversial and divisive, and to say the things that it’s not supposed to.

In some respects, this also comes down to the queries being submitted, and what users are asking of the bot. But the xAI team is clearly meddling with the model’s weights to shift what counts as truth in its responses. Which is not a great sign for X or xAI’s future ambitions.

xAI has spent billions on developing its systems, with Grok being its primary offering at this stage. Presumably, xAI will also look to sell its AI tools to partners in order to generate income, but it’s hard to imagine that many potential partners will be comfortable knowing that xAI may revise its models to match Elon Musk’s whims at any stage.

Of course, partner deployments of the Grok model would be separate, which would mean that the same weightings are not applied to external projects. But still, the fact that xAI may look to change what responses are produced, by manipulating elements of the model, doesn’t seem like a great selling point.

It’s another challenge for X to deal with, as it continues to seek new sources of income and tries to become a real player in the larger AI race.

And while it has rolled back these changes, the episode is not a good pitch for the products, or for Elon Musk’s approach to “truth seeking”.


