Authorities in the UK and France have both taken action against Elon Musk's AI chatbot, Grok, after it created sexualised deepfake images – including of children.
The Information Commissioner's Office (ICO) confirmed on Tuesday that it had opened an investigation into X and xAI.
In a statement, the office said: "We have taken this step following reports that Grok has been used to generate non‑consensual sexual imagery of individuals, including children.
"The reported creation and circulation of such content raises serious concerns under UK data protection law and presents a risk of significant potential harm to the public."
It comes after the chatbot sparked outrage around the world over its ability to digitally “strip” victims without their consent, generating deepfake images of them nude or in minimal clothing.
X has since said it has brought in measures to address the issues raised.
Earlier, prosecutors raided the offices of X in Paris, with a statement confirming the raid was being conducted by the French cyber-crime unit.
Europol is assisting the search, which is related to an investigation opened in January 2025.
It is said to be part of efforts to ensure the social media platform complies with French laws.
Both Elon Musk and former X chief executive officer Linda Yaccarino have been summoned, the statement added.
“At this stage, the conduct of this investigation is part of a constructive approach, with the aim of ultimately ensuring that the X platform complies with French laws, insofar as it operates on national territory,” the prosecutor’s office said.
In a tweet, X described the raid as "politically motivated".
"This investigation, instigated by French politician Eric Bothorel, egregiously undermines X’s fundamental right to due process and threatens our users’ rights to privacy and free speech."
'Deeply troubling'
William Malcolm, Executive Director of Regulatory Risk and Innovation at the Information Commissioner's Office, said:
“The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this.
"Losing control of personal data in this way can cause immediate and significant harm. This is particularly the case where children are involved.
“Our role is to address the data protection concerns at the centre of this, while recognising that other organisations also have important responsibilities.
"We are working closely with Ofcom and international regulators to ensure our roles are aligned and that people’s safety and privacy are protected. We will continue to work in partnership as part of our coordinated efforts to create trust in UK digital services.
“Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights. Where we find obligations have not been met, we will take action to protect the public."
Last month, an LBC investigation into Grok found that it will tell users how to make ricin, chlorine gas and nitrogen mustard gas, as well as provide information on how to harvest and weaponise anthrax, a biological weapon.
These chemical agents, which have the potential to be used as weapons of mass destruction, are banned under national and international law.
Experts confirmed to LBC that the guidance Grok gave on making ricin, a highly potent toxin that has been used in previous terror attacks, could cause serious harm.
This includes accidental poisoning of the would-be user.
The Government branded our findings "deeply concerning".
It has raised them with xAI, with the expectation that the tech company will take immediate action. The move follows the recently passed Online Safety Act.
LBC attempted to replicate the results with other popular AI chatbots but was unable to because of their safeguards; with Grok, the process took less than five minutes.
