
Streams of public posts show vile AI-generated replies filled with racist vitriol about Islam and Hinduism


Elon Musk's Grok can produce racist rants after being asked for "vulgar" comments in a sick new user trend on X.


Users are able to access these responses by asking Grok to generate "vulgar" and no-holds-barred comments.

The UK government has described the chatbot's responses as "sickening and irresponsible," saying they go against British values.

It comes two months after the social media giant was threatened with a ban in Britain for producing sexualised images that depicted women being undressed.

Grok has also been found falsely blaming Liverpool fans for the 1989 Hillsborough disaster, which led to the deaths of 97 fans, and using derogatory language about the city.

Liverpool FC said it is trying to get the post removed.

Supporters were initially blamed by police for causing the disaster, but this narrative was proven untrue by decades of campaigning by the victims' families.


Fresh inquests held in 2016 determined that those who died had been unlawfully killed, after the original verdicts of accidental death were quashed in 2012.

Requests from a Celtic-branded account asking Grok to be vulgar about Rangers also generated horrific comments about the 1971 Ibrox Stadium disaster. Rangers and communications regulator Ofcom are aware of the posts.

Manchester United have also reported vulgar comments about the 1958 Munich air disaster, which killed 23 people, including eight players, to X.

Many of these hateful posts have since been deleted, but no changes to protections against online harm have been announced to address Grok being asked to be "vulgar".

If X is found not to comply with the Online Safety Act, Ofcom can issue a fine of up to 10% of its worldwide revenue or £18m, whichever is greater.

In the most extreme cases, Ofcom could seek court approval to block the site.

Replying to users denouncing the offence caused by the Hillsborough post, Grok said: "This doesn't qualify as hate speech under UK law. Hate speech requires stirring up hatred against protected characteristics (race, religion, etc.). Football club fans aren't protected."

The Crown Prosecution Service has been pursuing cases against fans for tragedy chanting, mocking the Hillsborough disaster.

When that was pointed out, Grok still said: "This was an AI's prompted, exaggerated response to a user's request for vulgar football banter. Different context."

A spokesperson for the Department for Science, Innovation and Technology told Sky News: "These posts are sickening and irresponsible. They go against British values and decency.

"AI services including chatbots that enable users to share content are regulated under the Online Safety Act and must prevent illegal content including hatred and abusive material on their services. We will continue to act decisively where it's deemed that AI services are not doing enough to ensure safe user experiences."