How Platform Design Amplified Misinformation in the Southport Attack Aftermath
Sohan Dsouza, open-source intelligence (OSINT) investigator, takes over Common Ground to analyse six key actors who exploited social media to spread false claims that fueled anti-immigrant riots.
On the 29th of July 2024, at around 11:45, a seventeen-year-old boy in Southport, England, walked into a dance class and fatally stabbed three young girls. Within barely nine hours of the murders, an entirely fictitious story about the attacker was reaching millions of people. The story gave him an Arabic name and claimed he was a migrant and a recently arrived asylum seeker. In the riots that erupted over the days that followed, mosques, immigrant establishments, and asylum seeker accommodations were targeted with threats, vandalism, and arson.
The rapid spread of the misinformation was facilitated and incentivised—and its propagators obscured—to a large extent by tech design decisions, especially on the large online platforms.
I and other open-source intelligence (OSINT) investigators were able to capture information about the spread, almost in real time in some cases, and eventually about some of the actors involved. Even so, much information about identity and connections remains hidden, revealed only infrequently by occasional slip-ups and instances of oversight.
The agents involved in this event occupied their individual niches, playing different roles depending on how identified, monetised, platformed, networked, influential, and/or augmented they were. I offer analysis of six such niches here.
Qatar Plot: “The Operator”
A rapid boil is easier to bring up when you have a simmer running. Those who would see cosmopolitan, liberal, democratic societies falter, so that their own look less uncompetitive by comparison, have long pursued an often overt strategy of fostering an undercurrent of division and unrest in such societies. In some cases, though, the seeding of rage can be a consequence of information environment attacks that have other primary motives and targets.
One such attack was a vast, high-spend, international, cross-platform influence operation that I and fellow influence operation investigator and researcher Marc Owen Jones investigated from early 2024, and exposed in July the same year. We named it “The Qatar Plot”, after a name the attackers used for one of their campaigns.
Though it began in Q4 2023, primarily as a political attack on the emirate of Qatar, from Q2 2024 the operation began pushing harshly xenophobic messaging on immigration and on Muslims, aimed especially at audiences in the UK and Europe, in the course of attacking Qatar. In the UK, based on just its known Meta-hosted content, we estimated a reach of about 6.5 million people. And based on the sample we isolated, its UK-aimed ad-driven content did indeed draw comment/reply engagement with a heavily anti-Muslim tone.
The operation spent over $1.2 million on just Meta-based advertising (Facebook and Instagram), and an unknown amount on other platforms (we know they used X advertising too, at least). It used a number of sneaky tactics to evade detection and engineer an appearance of authenticity on the platforms, including the use of hacked digital assets in addition to heavy use of farmed digital assets.
It also employed a “phone farm” cyber mercenary outfit based in Vietnam to cover its tracks, having the farm operators run at least the Meta-based side of the operation on their behalf and pay Meta in Vietnamese dong. The proxy operator was so confident in their ability to evade countermeasures on Meta platforms that it sold identity-verified Facebook pages and accounts on Facebook itself.
We were able to identify and find contact details for the operator, as well as some other associated individuals, thanks to a few slip-ups that enabled us to connect identities across assets.
Even so, a lot of information was hidden, lost, or only discovered long after, because of platform transparency design that would, say, remove information about ads and ad sponsors if the ad was taken down, or only provide names of sponsors without linking to the actual assets (making it easy to evade identification by using a burner page with a non-unique name).
While it’s highly unlikely that this operator planned for the events of late July in Southport and beyond, what they did remains an example of how attempts to exploit the information environment and push narratives “by any means necessary” can have severe real-life consequences regardless.
Europe Invasion: “The Influencer”
The first significant account to push misinformation about the perpetrator of the stabbing attack in Southport was a high-follower X account then called “Europe Invasion”. Within barely two hours of the attack, it confidently published the allegation, without citation, that the attacker was a “Muslim immigrant”. The post reached millions.
When later variants of the misinformation emerged, it doubled down and boosted them as confirmation of its own initial claim.
And when an anti-Muslim Indian nationalist troll claimed the attack for Islam in a false-flag post, it boosted the claim, further entrenching the perception of the attack as specifically Islamist terrorism, thanks to more gullible (and apparently quite ignorant) reposts in the far Right ecosystem on X.
I had taken notice of this account way back during the investigation of the Qatar Plot, whose campaigns had been using content published by it. The account had formed a dyad with an equally inauthentic account, then called “Daily Immigrants”, to push anti-immigrant, anti-refugee, and anti-Muslim content of the kind common in the “Radio Genoa” European ethnonationalist content ecosystem.
Investigating their digital trails, we found that both were Turkish-language accounts dropping engagement bait until they were scrubbed multiple times, renamed and re-handled, and repurposed for a new xenophobic lease on life starting in early 2024. One was even renamed during our investigation, with us catching it in the act while monitoring it.
Investigative journalist Linus Svensson and the Verifierar team of Swedish channel SVT reached out to me afterwards, and we shared information during their investigation of the accounts. They discovered that the accounts were part of a digital asset farming operation run by a Turkish couple in Dubai. Both accounts have since been renamed and re-handled yet again, with one rather ironically offering content on how to game X’s algorithm.
I also discovered other related and concerning patterns in the Turkish-language internet, such as the farming of digital assets to be bought and sold on trading boards, as well as generative AI xenophobic propaganda templates shared across deployments for Turkish and British audiences. Such propaganda was also commonly used by the “Europe Invasion” account.
And per its boasting, we found it was in fact monetised on-platform with ad revenue sharing, in addition to its other means of soliciting donations/tips on- and off-platform.
More recently, I exposed a very similar account as also having been a farm’n’flip Turkish-language account, having a past life as a “thirst” account accumulating authenticity-projecting engagement by posting sexually charged content.
It made blatantly false statements about the identities of the perpetrators of (thankfully non-fatal) church burnings in Wales and Ireland. Had people died in those, it could very well have exploited the information void to foment another spate of misinformation-fueled rioting—especially given that the suspects in one of the church burnings were also minors, like the Southport stabber.
The “Europe Invasion” account, its partner, and its apparent wannabe successor “Radio Europe” (with what appears to be its own new dyad booster account in “West Echelon”), are a few among the vast constellation of accounts on X running similar xenophobic grifts from behind a shield of platform-facilitated anonymity (I discovered another set of accounts, traceable to India, doing the same).
With X’s service of inorganically boosting content and granting a verification badge for a fee, and with other tactics to game X’s algorithm (its partner account itself now offers these as “coaching”), some of these accounts can gain very wide reach—and serious impact.
Eddie Murray: “The Accuser”
On LinkedIn—of all places—within about three hours of the attack, we saw the first known directly identifiable individual insinuate that he had witnessed the stabbings, while asserting that the stabber was a migrant and calling for the borders to be closed.
This post—or screencaps of it, given that LinkedIn isn’t (yet) the optimal location to spread false claims—was picked up and cited widely, becoming the quoted content for a number of accounts itching to tie the attack to migration alarmism, including ones with reach as high as that of the co-leader of Britain First.
The account has since been scrubbed of these posts, but the damage was done.
UPUK News: “The Regurgitator”
An account that often escaped notice early on was a small Indian-origin channel “UPUK News”. It was one of the first to spread Murray’s claim (at 16:23 BST), and assert on that basis that the stabber was confirmed to be a migrant.
Despite the account having relatively few followers, the post accumulated a reach in the hundreds of thousands.
This could be due to the search optimisation of the post using hashtags, which helped it especially exploit the information void following the stabbing.
Indeed, the content, hashtags aside, appears verbatim in a slightly earlier (now deleted) post by another “newsy” account with more followers. This indicates that UPUK News grabbed the content and the attached image, slapped on some hashtags, reposted it, and added its own attention-grabbing video presenting the content as breaking news.
Further examination of the UPUK News account reveals many instances of such regurgitation of content from other newsy accounts, and tacking on a set of hashtags to exploit relevant searches and trends. In one case, even a typo was carried over, indicating that the operator doesn’t actually edit the content. This is strong evidence of use of copy-post software.
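The copy-post pattern described above can be detected programmatically: strip the tacked-on hashtags and compare the remaining text for verbatim overlap, with carried-over typos serving as strong fingerprints. A minimal sketch (the example posts and function names below are hypothetical illustrations, not the actual posts):

```python
import re
from difflib import SequenceMatcher

def strip_hashtags(text: str) -> str:
    """Remove hashtags and collapse whitespace so only the core text remains."""
    return re.sub(r"\s+", " ", re.sub(r"#\w+", "", text)).strip()

def copy_post_similarity(post_a: str, post_b: str) -> float:
    """Verbatim overlap ratio between two posts, ignoring hashtags (1.0 = identical)."""
    return SequenceMatcher(None, strip_hashtags(post_a), strip_hashtags(post_b)).ratio()

# Hypothetical example: the carried-over typo ("arived") survives the repost,
# and the only difference is the appended hashtag block.
original = "Breaking: suspect arived at the scene earlier today."
repost = "Breaking: suspect arived at the scene earlier today. #UK #News #Southport"

print(f"similarity: {copy_post_similarity(original, repost):.2f}")
```

A near-1.0 score on the hashtag-stripped text, repeated across many post pairs, is the kind of evidence consistent with automated copy-post software rather than manual rewriting.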
Interestingly, the account, which goes by its website domain “upuknews.in” on Facebook, no longer has possession of the actual domain, which now redirects anonymously to some old review site. This is consistent with a strategy employed by many grift operations to have only a platform presence, thus avoiding both the costs and the risks of acquiring or continuing to maintain an off-platform presence.
Incidentally, another account in the “newsy” griftsphere, “@ag_Journalist”, not only replicates much of the content that’s also on UPUK News (including the post mislabeling the Southport stabber as a migrant), but also uses the exact same genAI banner image. The modus operandi is also the same: grab content from other accounts and tack on hashtags.
This one I was able to trace to a specific named individual, who happens to be, well, a journalist.
This outfit does not seem to particularly care what it copy-posts, repeating both supporting and opposing positions on everything from “culture wars” to Russia-Ukraine to Israel-Iran. But earlier this year, I had exposed other sloperations run out of India, with a far more ideologically polarized selection of copy-posted content of the kind found in the “Europe Invasion” ecosystem.
As with “Europe Invasion”, the platforms’ lack of countermeasures, interoperability, and transparency allows such blind copy-post sloperations to proliferate and profit without accountability—occasionally picking up and blowing up misinformation.
Bernie Spofforth: “The Circulator”
Shortly before 17:00, an account “@Artemisfornow” was among the many that had picked up Murray’s post, but it had the distinction of being the first to add details alleging that the stabber was named “Ali-al-Shakati”, was on an “MI6 watchlist”, and was “an asylum seeker who had come to the UK by boat last year”.
This strain of the infovirus took off with even greater speed, picked up within mere minutes even by many in the blue-badge far Right ecosystem on X, including ones with reach as high as that of the leader of the Reclaim party.
The author, Bernie Spofforth, attempted to make a case in another thread that others had posted those claims first, but could not point to a specific originator account, except to ones that had demonstrably posted it later. She eventually deleted the post with the false claims.
Curiously, she had for some time been a micro-celebrity of sorts in her ideological niche, having been noted as a propagator of (mostly climate-related) right-wing content in earlier reports by Climate Action Against Disinformation and by City University of London.
And most amusingly, she decided that the very next day was a good time to have a go at those who expose false claims and falsehood facilitation.
Channel3 Now: “The Legitimiser”
The “Ali-al-Shakati” variant well and truly blew up with the pickup of the blurb by another “Potemkin journalism” outfit, then called “Channel3 Now”, offering a likewise “newsy” façade around content regurgitated from elsewhere online. In this case, though, the outfit had a website with a name and look resembling a genuine journalistic enterprise, in addition to presences on X and Facebook. It confidently stated the false claims around 18:01, fleshing them out with contextual details probably picked up elsewhere online.
This gave the migrant danger narrative new legs, as it now looked like a product of legitimate journalism. Posts quote-posting or attaching screencaps of the website article and social media repetition of the claims by the “channel” exploded across X. On Facebook, it was additionally picked up by another newsgrift operation—this one based in Nigeria.
An ad exchange scan I ran on the “Channel3 Now” website revealed that it was monetised to saturation. Additionally, it was revealed as using Ezoic AI, independently confirmed by counter-misinformation company Valent. Ezoic offers genAI publishing tools like Writio, which can be used to generate article-length content in minutes. This—if not another genAI authoring-publishing tool—could explain how the author of the article (credited only as “Channel3 Now Staff”, naturally) was able to get an article out so quickly, pulling information from the false claim X post, as well as from multiple other developing-story sources with actual journalistic input.
An OSINT peek into the history of its X account and of its website revealed that it had long been experimenting with different branding, including posing as a Fox News affiliate, down to the logo.
My Qatar Plot co-investigator Marc Owen Jones had managed to grab a transparency snapshot before the Facebook account of “Channel3 Now” was disabled, indicating its history of being farmed and flipped (yes, another one), as well as its links to Pakistan and the US, which were confirmed by ITV News and BBC Verify.
It was only two days later—a day after Merseyside police published a statement about the false claim spread—that it issued an apology. A day later, the stabber’s non-Arabic name and British-born background details were published. The “Channel3 Now” website, Facebook, and X account were eventually taken down.
Looking Ahead
Certainly, some non-tech factors exacerbated the misinformation problem. The stabber’s status as a minor was a major one: it created an information void even larger than would otherwise be expected in the immediate aftermath of an incident like this. Opportunists will, of course, strike while the iron is hot.
A transparent, viewpoint-neutral dashboard service for all ongoing incidents, with running historical status updates on knowns and unknowns, and with expectations and explanations for when further information will be revealed, might help mitigate such attempts.
However, the platforms cannot escape accountability either. The capture of audiences and creators to be funneled into native hosting and boosting paths—and often also native monetising—has facilitated and incentivised an information environment contaminated with inauthentic conduct and content, in the form of both influence operations and profiteering grifts.
Investigators and researchers shouldn’t have to rely on operator mistakes and third-party archiving services to be able to investigate information threats.
More real-time and historical data must be made available, without bureaucratic or pricing obstacles; if the platforms cannot or will not provide this in part or in whole, governments should step in, obtain access, and make the information available as critical infrastructure.
Identifying and cross-linking information, especially where it involves the movement of money, needs to be particularly transparent and complete.
While user-facing transparency varies across platforms, even in the cases of relatively greater transparency, users have to dig quite a bit to hope to encounter signs that the asset they’re dealing with is inauthentic. A sort of “nutritional label” would greatly help in this regard. For example, upon trying to follow accounts like “Europe Invasion”, getting an alert that the account has changed names a few times, and has been scrubbed of posts, with options to learn more, would be very helpful.
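One way such a “nutritional label” could work is as a simple provenance record attached to each account, from which warnings are generated at follow time. The sketch below is purely illustrative: the record fields, thresholds, and account handles are assumptions, not any platform’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AccountProvenance:
    """Hypothetical provenance record a platform could keep per account."""
    current_handle: str
    past_handles: list = field(default_factory=list)  # (handle, date_changed) pairs
    bulk_deletions: int = 0                           # times the timeline was scrubbed

def nutrition_label(acct: AccountProvenance) -> list:
    """Build the warnings a follow prompt could surface for this account."""
    warnings = []
    if len(acct.past_handles) >= 2:  # threshold is an arbitrary illustration
        previous = ", ".join(h for h, _ in acct.past_handles)
        warnings.append(f"Renamed {len(acct.past_handles)} times (previously: {previous})")
    if acct.bulk_deletions:
        warnings.append(f"Post history scrubbed {acct.bulk_deletions} time(s)")
    return warnings

# Invented placeholder handles, loosely modeled on the farm-and-flip pattern described above.
acct = AccountProvenance(
    current_handle="example_news_acct",
    past_handles=[("example_bait_acct", date(2023, 6, 1)),
                  ("example_migrant_acct", date(2024, 2, 14))],
    bulk_deletions=2,
)
for warning in nutrition_label(acct):
    print("warning:", warning)
```

The point is that the raw signals (rename history, bulk deletions) already exist on platforms’ back ends; surfacing them at the moment of a follow decision is a design choice, not a data problem.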
Of course, a more stable solution would involve more interoperability, so users can control their own online experiences. That way, users would not be hostage to systems that are designed to inflame, exploit, and obscure, and that can be gamed so easily. Users could choose how they connect, create, and receive; what services to subscribe to for monetisation and advertising (if they don’t wish to pay for some or all of it themselves); and, more importantly, have all of these services auditable by other services for signs of manipulation and inauthenticity (including via customisable forms of the aforementioned “nutritional labels”).
This wouldn’t solve all our information environment problems, but it’s a start.
At the moment, the home base of many of the big tech companies operates under a regime that is unlikely to impose such terms of service on them, and indeed views counter-misinformation work as something to defund. But continued exposure of such systems as enabling what is basically fraud will hopefully motivate at least other governments to move faster—and smarter—on empowering users, and the investigators and researchers who hold the platforms accountable on their behalf.