Deepfake videos of politicians could 'distort, disrupt and corrupt' democracy, MP warns

A bogus video was spread online of former minister George Freeman which appeared to show him switching parties.


Fake videos of politicians could “distort, disrupt and corrupt” democracy, an MP who was the victim of one has warned – as concern grows over the impact of AI deepfakes on elections.

Full Fact is an independent fact-checking organisation that aims to tackle how incorrect or misleading claims spread online, and to push for better systems to stop them going viral.

It says the rapid rise of realistic AI-generated audio and video is making it harder for voters to know what they can and cannot trust online, blurring the lines between fact and fiction.

The charity is concerned about the ways in which this hyper-realistic content could twist conversations and interfere with election cycles, especially as the law is yet to catch up with this rapidly advancing technology.

That threat was brought into sharp focus last October when a bogus clip appeared to show Conservative MP George Freeman defecting to Reform UK.

The MP for Mid Norfolk found himself the subject of an AI-generated video in which his voice had been cloned and words were substituted over the top.

The video was then posted and spread on social media. He reported it, but police told him it was not illegal.

The former Minister of State in the Department for Science, Innovation and Technology is now taking action to get this kind of malicious activity banned.

“The deliberate spread of disinformation through AI-generated content – whether aimed at stealing identity for fraud, mis-selling, political indoctrination or any other purpose – is a concerning and dangerous development.

"As a Member of Parliament this sort of political disinformation has the potential to seriously distort, disrupt and corrupt our democracy," Mr Freeman said in a social media post after the clip was shared.

'Dangerous'

Mark Frankel, from Full Fact, described this phenomenon as “dangerous”.

“These tools are producing false narratives and reshaping how audiences use and access information.”

“The public could be misled if there are videos of politicians saying things they have not said or endorsing things they do not."

He says that the creators of this manipulated content will often be motivated by financial gain from affiliate marketing or be “individuals with a particular standpoint, such as an axe to grind on immigration.”

“They may have a view of the government which they want to disrupt or discredit, and to use these videos in order for their ideas to gain greater viral currency.”

However, as Mr Frankel explained, under current legislation, “unless a crime is committed, such as inciting racial hatred or spreading terrorism, for example, there is no duty on anyone to either take these videos down or label them as AI-generated.”

The government has now said it plans to make creating sexually explicit “deepfake” images a criminal offence.

This came amid a backlash in January against images created using Elon Musk's AI tool Grok to digitally remove the clothing of people online, largely women and girls.

Mr Frankel welcomes this amendment but says the government needs to stop “papering over cracks” in online safety legislation and take a “more comprehensive look at a broader range of legal but harmful issues that are not currently subject to action.”

Full Fact has called for the current Representation of the People Bill to include making deepfake videos of politicians a criminal offence in order to increase transparency and tackle political misinformation, especially during election cycles.

In the opening parliamentary debate on this Bill on March 2, several MPs from across the House expressed support for this position. For the Conservatives, James Cleverly said his party would be “happy to engage with sensible, proportionate measures to ensure that AI-generated material is clearly labelled and subject to transparency as a requirement.”

'Whack a mole'

Full Fact believes that the government should go further.

“We feel that the current state of the Online Safety Act is quite messy and unfit for purpose when it comes to tackling misinformation”, they say in a briefing on their website.

“We don’t believe the legislation should be scrapped, but rather strengthened – it should be firmed up and made more robust. It’s time to introduce a specific AI bill, which explicitly mentions deepfakes. This should be in the King's Speech, and the government should set out a timeline for consulting on comprehensive AI legislation.”

In a letter, Liz Kendall, Secretary of State for Science, Innovation and Technology, acknowledged that “legislation must adequately mitigate the risk of emerging harms as AI technology develops.”

“These issues are like ‘Whack a Mole’ – you put out one fire and another one starts. We are at risk of being outpaced by AI, so when it comes to this issue, we have to approach it in a more holistic way”, says Mr Frankel.

He added that future regulation should place a greater burden of responsibility on tech platforms during campaign periods to label AI-generated content.

'Hijacked for nefarious purposes'

Speaking on this issue in Parliament this week, Mr Freeman said: "We cannot allow individuals to live in fear of their identities being hijacked for nefarious purposes.

"It is time for Parliament and the UK Government to take bold action.

"Denmark has already made significant progress in legislating for such a measure, by giving people copyright to their own body, facial features and voice. I believe the UK should follow suit. 

"When I was Minister for Science with responsibility for AI, I refused to wave through the text and data mining proposals without ensuring there were appropriate safeguards for the creative industries.

"These precautions should apply to all individuals in the UK."

'Completely unacceptable'

A DSIT spokesperson said: “The potential for deepfakes to sow division, spread false information, and influence public opinion is well‑recognised. Using online tools to target and exploit people is completely unacceptable.

“Under the Online Safety Act, services that allow users to upload content or interact with others, including social media platforms, must proactively tackle illegal fraudulent content.

"This includes fraud by false representation, whether shared or generated by users – or they will face enforcement action.”

'Foment division'

Julia Lopez MP, Shadow Technology Secretary, commented: “Technology is evolving rapidly and the law must keep pace to protect our democracy.

“Deepfake videos erode trust and faith in what people see and hear online and we know our enemies weaponise that online space to spread misinformation and foment division.

“Government needs to find ways to help people determine what is real and what is fake, like digital watermarking, so that the pace of technology does not undermine the things we value.”