
Online disinformation has sparked a wave of far-right violence in the UK

Riot police officers push back anti-immigration protesters outside the Holiday Inn Express Hotel, which houses asylum seekers, on August 4, 2024 in Rotherham, United Kingdom.

Christopher Furlong | Getty Images News | Getty Images

It didn’t take long for false claims to start circulating on social media after three young girls were killed in the British town of Southport in July.

Within hours, false information about the attacker’s name, religion and immigration status gained significant traction, sparking a wave of misinformation that fueled days of violent rioting across the UK.

“Referencing a LinkedIn post, a post on X falsely named the attacker as ‘Ali al-Shakati,’ who was said to be a Muslim migrant. By 3 p.m. the following day, the false name had more than 30,000 mentions on X alone,” Hannah Rose, a hate and extremism analyst at the Institute for Strategic Dialogue (ISD), told Vscek in an email.

Other false claims shared on social media included that the attacker was on the intelligence watch list, that he had arrived in the UK on a small boat in 2023 and that he was known to local mental health services, according to the ISD analysis.

Police denied the claims the day after they first appeared, saying the suspect was born in Britain, but by then the false version of events had already gained traction.

Misinformation fuels prejudice and distortions

This type of misinformation is closely related to the rhetoric that has fueled the anti-immigration movement in the UK in recent years, said Joe Ondrak, UK head of research and technology at tech firm Logically, which is developing AI tools to combat misinformation.

“It’s catnip to them, really, you know. It’s just the right thing to say to provoke a much angrier response than there probably would have been if the misinformation hadn’t been out there,” he told Vscek via video call.

Riot police officers push back anti-immigration protesters outside the Holiday Inn Express Hotel, which houses asylum seekers, on August 4, 2024 in Rotherham, United Kingdom.

Christopher Furlong | Getty Images

Far-right groups soon began organizing anti-immigrant and anti-Islam protests, including a demonstration at a planned vigil for the girls who had been killed. This escalated into days of rioting across the UK that saw attacks on mosques, immigration centers and hotels housing asylum seekers.

Misinformation spread online has played on pre-existing biases and prejudices, Ondrak said, adding that misreporting often thrives in moments of high emotion.

“It’s not a case of this false claim being put out there and then, you know, being believed by everyone,” he said. Instead, the reports act as “a way to rationalize and reinforce pre-existing biases, prejudices, and speculations before any sort of established truth can come out.”

“It didn’t matter whether it was true or not,” he added.

Many of the right-wing protesters argue that the high number of migrants in the UK fuels crime and violence. Migrant rights groups deny these claims.

The spread of misinformation online

According to ISD’s Rose, social media has been a key vehicle for the spread of misinformation, both through algorithmic amplification and through sharing by large accounts.

Accounts with hundreds of thousands of followers and paid-for blue checkmarks on X shared the false information, which the platform’s algorithms then pushed to other users, she said.

“For example, when you searched for ‘Southport’ on TikTok, the ‘More Searched’ section, which recommends similar content, promoted the attacker’s fake name even eight hours after police confirmed that this information was incorrect,” Rose said.

Shop windows are boarded up to protect them from damage ahead of the demonstration against the far right and racism.

Thabo Jaiyesimi | Sopa Images | Lightrocket | Getty Images

ISD’s analysis showed that the algorithms worked similarly on other platforms such as X, where the attacker’s false name was flagged as a trending topic.

As the riots continued, X owner Elon Musk weighed in on his platform, making controversial comments about the violent demonstrations. His comments prompted a backlash from the UK government, with the country’s courts minister calling on Musk to “behave responsibly.”

TikTok and X did not immediately respond to Vscek’s request for comment.

The false claims have also found their way onto Telegram, a platform that Ondrak said plays a role in entrenching narratives and exposing more people to “harder beliefs.”

“It was all these statements that were funneled into what we call the post-Covid environment of Telegram,” Ondrak added. That includes channels that were initially anti-vaccine but were co-opted by far-right figures pushing anti-migrant arguments, he said.

In response to a request for comment from Vscek, Telegram denied that it was helping spread misinformation. It said its moderators were monitoring the situation and removing channels and posts that incited violence, which is not allowed under its terms of service.

According to Logically’s analysis, at least some of the accounts calling for participation in the protest could be traced back to the far right, including some linked to the banned far-right group National Action, which was declared a terrorist organisation under the UK’s Terrorism Act in 2016.

Ondrak also noted that many groups that had previously spread false information about the attack have begun to retract it, claiming it was a hoax.

Thousands of anti-racism protesters gathered in cities and towns across the UK on Wednesday, far outstripping numbers at recent anti-immigration protests.

Content moderation?

The UK has an Online Safety Act to tackle hate speech, but it won’t come into force until early next year and may not be enough to protect against some forms of misinformation.

On Wednesday, the UK’s media regulator Ofcom sent a letter to social media platforms saying they shouldn’t wait for the new law to come into force. The UK government also said social media companies should do more.

Many platforms already have terms and conditions and community guidelines, which to varying degrees cover harmful content and impose measures against it.

A protester holds a sign reading “Racists are not welcome here” during a counter-demonstration to an anti-immigration protest called by far-right activists in the London suburb of Walthamstow, on August 7, 2024.

Benjamin Cremel | Afp | Getty Images

Companies “have a responsibility to ensure that hate and violence are not promoted on their platform,” ISD’s Rose said, but added that they need to do more to enforce their rules.

She noted that ISD had found numerous pieces of content on several platforms that likely violated their terms of service but had remained online.


Henry Parker, Logically’s vice president of corporate affairs, also noted nuances across different platforms and jurisdictions. Companies invest varying amounts in content moderation, he told Vscek, and they face differing laws and regulations across markets.

“So there’s a dual role here. There’s a role for platforms to take more responsibility, to enforce their own terms and conditions, to work with third parties like fact checkers,” he said.

“And then there’s the government’s responsibility to be really clear about what their expectations are… and then be very clear about what happens if those expectations aren’t met. And we’re not at that stage yet.”

Written by Anika Begay
