Big Tech's Deadly Grip
Alarmist though it may sound, there’s an increasing risk that algorithm-driven tech platforms pose an existential threat to public debate, order and democracy.
We need to talk about tech.
When humans are faced with a risk that’s potentially existential, such as climate change, it’s part of our sense of indomitability to assume outcomes won’t be as bad as they could be. The worst case scenario seems unlikely.
The largely unregulated power of big tech is in this category. It may well be that, in the UK, where political polarisation is not as extreme as in the US, the risks can be managed. Perhaps the Online Safety Act (OSA), yet to be fully implemented, can be used to smooth some of the roughest edges of tech’s impact.
But if there’s a not inconceivable probability that, without further checks or more action by government, big tech could end up destabilising our politics further and disrupting our democratic system, then should we not act out of precaution?
And yet I find that, while many understand the potential for harm, there’s a high degree of ‘we’ll sort it’ and ‘people in the UK are more resilient to extremism’ and ‘the power of our ideas will win through’ and ‘we just need a better narrative’ complacency. What if we won’t, they aren’t, they won’t and we can’t?
Algorithm-Driven
One analogue is financial markets in 2007. Just as algorithms took over the allocation of capital in the 2000s, so over the past decade they have taken over the information ecosystem and now govern what we see.
Previously, while there was plenty of dreadful content available, both in mainstream and social media, we had to opt in to see it. Now, thanks to the algorithms, we are served harmful, racist, misogynistic, violent and untrue content. We no longer have to find it. It comes to us.
You don’t want to be a child in this ecosystem. Research by Global Action Plan (GAP) for Vodafone revealed that boys aged 11-13 were exposed to harmful, often misogynistic content within an average of 30 minutes of going online, following innocent and unrelated searches. One in ten were served harmful content within 60 seconds.
Similarly, the excellent Centre for Countering Digital Hate (CCDH) recently found that YouTube was recommending eating disorder and self-harm videos to girls.
I recently met Mariano Janin, whose daughter killed herself after sustained cyberbullying; Ellen Roome, whose son Jools’ death has been linked to an online challenge; and Stuart Stephens, whose son Olly was killed after a dispute on social media.
Their stories are unbearably moving. We should need no more motivation to act than the increasingly concerning effects that exposure to algorithm-dominated tech platforms is having on developing brains.
But there’s something underlying the way social media works that goes way beyond the developmental rights of children and concerns us all. It’s not just about harmful content, or its probably profound effects on those under the age of digital consent, or the growing number of bereaved parents. It’s the very business model that is problematic. For all of us.
As soon as you know that 97% of Meta’s 2022 revenue was derived from advertising, then, intellectually at least, the rest falls into place. Whatever mission they were created to fulfil, social media platforms have become complex, global, algorithmic advertising billboards; their primary purpose is to serve us as many adverts as possible. Everything else is subsidiary.
This has three effects.
In direct terms, the algorithms are tuned to keep our attention for as long as possible in order to serve as many ads as possible. Children, as drivers of household consumption and as future consumers, are targeted in particular, regardless of their age. But the same logic applies to everyone.
In keeping our attention, there’s a kind of blind-ish process - a bit like financial market algorithms buying all the junk prior to the crash - in which algorithms tuned to the ad-driven business model compete for our attention. In the attention economy, outrage sells.
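To make those mechanics concrete, here is a deliberately minimal sketch, in Python, of an engagement-optimised feed ranker. The fields, weights and scoring formula are all invented for illustration; no platform publishes its real ranking code. The point is structural: a ranker that optimises a proxy for attention is agnostic about content, so whatever best captures attention, including outrage, rises to the top.

```python
# Toy illustration of an engagement-optimised feed ranker.
# All fields and weights here are hypothetical; this is NOT any platform's real code.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_seconds: float  # model's estimate of how long a user will dwell
    predicted_share_rate: float     # model's estimate of the chance the user re-shares

def engagement_score(post: Post) -> float:
    # The ranker optimises attention (a proxy for ad impressions served);
    # it has no notion of whether the content is true, kind or harmful.
    return post.predicted_watch_seconds * (1 + 10 * post.predicted_share_rate)

candidates = [
    Post("Local council opens new library", 8.0, 0.001),
    Post("You won't BELIEVE what THEY are hiding", 35.0, 0.04),
]

# Rank the feed by predicted engagement, highest first.
for post in sorted(candidates, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):6.1f}  {post.title}")
```

Run it and the outrage-bait scores roughly six times higher than the civic news item, so it is what gets shown. Nothing in the loop asks whether that is good for the user, or for anyone else.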
Bad actors, as they’re laughably known (this isn’t amateur dramatics), know this and have spent the past decade finding out how best to harness the ‘recommender algorithms’ that show us content. These include commercial vested interests, extremist politicians, hostile foreign governments, ‘chaos merchants’ and influencers looking to make a quick buck.
My son not unreasonably assumes that when he’s aimlessly scrolling TikTok, he’s seeing a linear procession of related content. But all social media content is ‘curated’ (though it’s no museum) by recommender algorithms, hence teenage boys are shown violence, misogyny and porn; girls, weight loss and self-harm; baby boomers, anti-vax; white men, Tommy Robinson; and so forth.
If this sounds like a really bad idea, that’s because it is. To quote Chris Morris’s timeless and still-relevant ‘Paedogeddon’ episode of Brass Eye, when a child is accidentally blasted into space on board a rocket designed to rid the earth of a notorious paedophile: ‘this is the one thing we didn’t want to happen.’
I doubt any of the tech giants started out with the aim of creating epistemic chaos and widespread radicalisation, but that’s where we’re heading.
It won’t be stopped by asking platforms to be nicer. Big tech is now as big as big oil or big tobacco, but faces fewer constraints. These are among the largest and least regulated firms that have ever existed. Only nation states can de-fang this beast.
Some are. Australia is legislating a social media ban for under-16s; Brazil’s Supreme Court suspended X over disinformation concerns, lifting the ban only once the platform had complied with its orders; and Albania has banned TikTok after the death of a teenager was linked to social media use.
But much more is going to be needed.
I’ve included some links to further reading about the algorithmic dark heart of social media below.
Bending Policy and Political Truth
In the original Kingsman film, Samuel L. Jackson plays a rich megalomaniac who issues the world’s population with free SIM cards and data, which he then uses to transmit an evil signal that sends users into a violent frenzy. The disinformation circulating on social media following the killing of three girls in Southport in late July seems to have had a similar effect.
The racially-motivated, social media-fuelled riots that followed should have acted as a wake-up call. Only by flexing the criminal justice system to its maximum and removing more than 1,200 rioters from the streets did the government manage to contain what for a few days looked likely to spiral out of control.
The kernel of untruth was that the Southport killer was a Muslim and an illegal immigrant. This was shared many times over and spread via online influencers, including Andrew Tate. The online investigators Valent trace much of the amplification back to an apparently legitimate (and now defunct) online news service called Channel3Now.
What’s fascinating - and chilling - is Valent’s hypothesis that Channel3Now perhaps innocently published a story based on the fake asylum-seeker claim, which was then amplified by software the news service was using to boost its reach. If correct, this should serve as a massive warning gong to all decision makers (and fans of social order and democracy), because it means a handful of fake grit got into the blind mechanisms of online amplification and was thrown all over the place.
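A toy model shows why a single piece of grit can travel so far. Treat each wave of re-sharing as a branching process: if automated boosting pushes the effective reproduction rate above one, reach grows geometrically instead of fizzling out. The numbers below are invented purely to illustrate the dynamic, not drawn from Valent’s investigation.

```python
# Toy branching-process model of online amplification. All numbers are hypothetical.
def cascade_reach(seed_sharers: int, audience: int, share_rate: float,
                  boost: float, generations: int) -> int:
    """Total views accumulated over `generations` waves of re-sharing."""
    reached = 0
    sharers = seed_sharers
    for _ in range(generations):
        views = sharers * audience                  # each sharer reaches `audience` accounts
        reached += views
        sharers = int(views * share_rate * boost)   # boosting inflates the re-share rate
    return reached

print(cascade_reach(1, 500, 0.002, boost=1.0, generations=6))  # organic: ~3,000 views
print(cascade_reach(1, 500, 0.002, boost=3.0, generations=6))  # boosted: ~180,000 views
```

The same story, with the same organic share rate, stays parochial without boosting and explodes with it. That’s the blind mechanism at work: the software neither knows nor cares that the seed is fake.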
Mention ULEZ and world-weary environmentalists will roll their eyes (is THAT ALL you’ve got!). But a similar piece of fake grit - that many more people would be affected by the expansion of London’s clean air zone than was actually the case - got into the machine and sprayed itself across the clean air policy landscape.
Did it matter? The policy went ahead and appears to have had little influence over the London Mayoral elections the following May. Similarly, though a doorstep ULEZ backlash was blamed by Labour central for the party’s failure to win the Uxbridge and South Ruislip by-election in July 2023, the party subsequently won the seat in July’s General Election.
Except that, despite having little impact on the ULEZ implementation itself, or really on the new Mayor’s or new government’s electoral chances, the perceived backlash has all but killed clean air policy. A promised clean air act has been dropped since the election, and any suggestion of further traffic curbs in London or other major UK cities is enough to send political advisers into paroxysms of denial.
Khan’s political opponents may not see their ULEZ-scrapping 2024 campaigns as a success, but whoever helped crank up and leverage the ULEZ backlash (someone was paying for the Twitter bots that boosted anti-ULEZ content) is likely to be happier with the outcome.
Measures to curb nitrogen pollution from farms in the Netherlands and heat pump policy in Germany have suffered a similar or worse fate.
The effects of Covid-19 vaccine conspiracy theories are also met with eye-rolling (after all, vaccine take-up has been high enough to limit the disease). But again this misses the longer-run point. An anti-vaxxer movement existed prior to the pandemic, has been boosted by disinformation surrounding Covid vaccines and, with the re-election of Trump, now threatens the very efficacy of public health in the US.
At this point, it’s probably fair to conclude that single instances of online political disinformation are insufficient to cause widespread instability in and of themselves, but that they form part of a picture in which historically marginal, right-wing, conspiratorial and anti-establishment beliefs are taking hold. Each wave of online mania breaks a little higher on the shoreline of truth.
Weathering the Storm
Tracking disinformation is like chasing a will-o’-the-wisp; it melts away before it’s fully within our grasp. Worse, platforms have increasingly restricted access to their APIs, making research harder.
My current employer, Global Action Plan, one of the organisations that has led the call for more action, argues that focussing on bad content is like a game of whack-a-mole.
GAP’s solution is a ban on surveillance advertising. That has seemed quite an uphill struggle at times, even in the world of big tech accountability campaigns. But it may yet prove to be the right focus.
While platforms and authorities should always be obliged to remove illegal or obviously harmful content, a lot of disinformation doesn’t really fall into either category. Policy makers should focus on disrupting the platforms’ business model, and advertising is at the absolute heart of this.
During the OSA debate, free speech and privacy groups fought hard - rightly, in my view - to ensure regulation didn’t allow government to restrict people’s ability to speak out or go about their online business in private. These principles should be at the heart of all regulation, which is why a focus on the business model, rather than only on content, is key.
Peter Kyle, the Science, Innovation and Technology Secretary into whose portfolio online regulation falls, has recently published a draft Statement of Strategic Priorities for online safety. Though leaning firmly on the existing Online Safety Act, which comes fully into force in 2025, Kyle has left the door open to further legislation if it proves necessary.
A savvy government would put children’s safety at the forefront of any further efforts and would keep Josh MacAlister’s Safer Phones Private Member’s Bill on the books as a potential legislative opening. This won’t be lost on the tech firms, who will do their best to close it down.
Campaigners - and that should really include all sectors of civil society as none of us will gain from unbridled social media - should pursue a three-pronged strategy.
First, thanks to groups like 5Rights and GAP, the Online Safety Act contains measures to force platforms to make their products safe by design. What this means in practice remains ambiguous, but if Ofcom can be persuaded to gold-plate the way it regulates, the OSA may yet prove to be a landmark.
Second, my fear is that, with deep pockets and broad reach, the platforms will believe they can control the debate with Ofcom. So alongside efforts to ensure the OSA is well implemented, there needs to be a big, broad-based campaign for much more profound regulation of social media. This should anchor high to create space for the debate.
Third, in the meantime, MacAlister’s Bill can keep the door open. Civil society groups should look to bring as many MPs as possible to the table; MacAlister and Malthouse have a very good grasp of the debate and can be excellent champions.
While some forces in UK politics stand to gain more from unregulated big tech than others - especially if Musk opens his cheque book - it’s vital that all three prongs remain firmly cross-party. There’s good support - active and residual from the OSA - on all main benches.
Trump’s White House will lean hard on Brussels as it seeks to roll out and extend the Digital Services Act. It won’t be easy for Westminster. So as well as domestic efforts, it will be important - and for all I know already underway - to build a global coalition of the willing. The sheer size and power of tech demands nothing less.
In environmental and medical law, the precautionary principle is key. It states that, given cause for grave concern, policy makers should move to curb harm even if all of the evidence is not yet available. The question is: does big tech now meet this criterion? Plenty think it does.
Further Reading:
Here are a few important reports and books to get your teeth into. I will add more, but please make suggestions in the comments:
Human Rights lawyer Susie Alegre’s Freedom to Think sets the unregulated online world in the wider epistemic context and is important.
On the environmental front, Alegre also wrote GAP’s Big Tech’s Dirty Secret report, which focuses on the risks unregulated big tech poses to the climate.
Again, at the macro, epistemic threat level of the debate, Jamie Susskind’s The Digital Republic argues that the state needs a whole new infrastructure of legislation and institutions to curb big tech’s power.
At the campaigning end of things, The People vs Big Tech coalition is the main forum calling for regulation and, ultimately, the breakup of tech platforms.
For a guide to the US online conspiracy scene, and what may be coming our way, Gabriel Gatehouse and Lucy Proctor’s The Coming Storm podcast is well worth a listen from start to finish. Note the centrality of Peter Thiel, the man behind PayPal and Palantir.
Though Valent has a clear commercial interest, its pro bono investigations into ULEZ and the Southport riots are well worth a read. Also follow Valent founder Amil Khan on LinkedIn.
Hugh Knowles has written a few good pieces on the wider, epistemic threat posed by social media. Here’s his 10 reasons why social media is a problem for Larger Us.
One change that would help tremendously would be for the law to treat platforms such as Meta as publishers. That would bring them within the ambit of existing regulations. They would of course hate it, as they would need to check every post for illegal or defamatory content. There may be a middle ground. I haven't read any commentary on this, though - have you?