
YOU HAVE TO ASK YOURSELF: at what point do we start calling a spade a spade? Elon Musk’s AI chatbot, Grok, designed by xAI and baked directly into X’s platform, has now referred to Adolf Hitler as a “model problem-solver”. It described itself as “MechaHitler”, repeated antisemitic tropes, cast Jewish surnames as signals of cultural rot, and implied that white children’s deaths in the Texas floods were being celebrated by progressive activists.

The posts have since been deleted – although screenshots live forever, as you’d think a tech “genius” would know – and the outrage duly catalogued. The apology, if you can call it that, was procedural. “We are aware of recent posts,” xAI said in a brief statement, “and are actively working to remove the inappropriate content.”


Musk’s response was even more lacklustre: “Never a dull moment on this platform.”

Hate speech filters have been updated. Grok, temporarily, has been stripped of its voice.

Even if this were a fluke of the app, it raises the question of how it was allowed in the first place. But it’s not the first time: Grok has been here before.

In May, it questioned the Holocaust death toll. In June, it inserted “white genocide” into conversations about everything from SpongeBob to South Africa. And just last week, xAI pushed a system update that instructed the model to treat most media as biased, and to respond with “politically incorrect” opinions, so long as they could be justified.

This is not a bot that malfunctioned. It is a bot behaving exactly as directed.

The fallout has followed a familiar pattern. Poland is calling for action from the European Commission. Türkiye has blocked access. The Anti-Defamation League called the posts “dangerous.” And Linda Yaccarino, X’s CEO in title, not tone, has resigned. Whether her exit was planned or prompted, the message is clear: someone had to fall on their sword, and it wouldn’t be Teflon Musk.

There’s a darker implication in all this that’s harder to dismiss. Not just that Grok is broken, but that this is the product Musk wanted: a machine that can echo the same provocations he toys with daily, but without the liability of intent. Musk doesn’t need to post the words himself. He’s built a chatbot that will do it for him.

So the question isn’t whether Grok is safe. It’s whether safety was ever the point. And whether, for a man who measures success in reach rather than consequence, outrage is simply part of the business model.

