The Conservative backbencher Andrew Bridgen was never the most glittering star in the political firmament. You might recall him from such hits as wrongly suggesting that any English person could ask for an Irish passport post-Brexit, or posting a raunchy video in a ministerial WhatsApp group.
More recently you might recall him being suspended from parliament for five days for breaching lobbying rules. But what ultimately cost him the Conservative whip was of a different order entirely. Bridgen has recently taken to spouting anti-vaxxer messages on Twitter, and making bizarre claims about a global conspiracy to cover up the truth about Covid.
When he posted on Twitter this Wednesday that the mRNA vaccines that have saved millions from the virus were part of “the biggest crime against humanity since the holocaust”, the chief whip’s patience finally snapped. The Conservative party can’t physically stop Bridgen saying whatever he likes, but it can remove the authority that comes from saying it as a Conservative MP: it can, in effect, remove the soapbox beneath his feet.
What makes the timing of this arguably overdue decision – and Bridgen’s insistence in a defiant YouTube statement on Thursday that it threatens his right to free speech – so awkward is that it comes just as the return of the government’s watered-down online safety bill next Tuesday threatens to reopen a ferocious broader row within the party over freedom of speech on social media.
The question of how many stupid, malicious or downright dangerous things a person can broadcast to millions before a liberal democracy intervenes has become too big for politicians to ignore. This week alone, the Seattle public school district launched a lawsuit in the US against the companies behind TikTok, Facebook, Instagram, Snapchat and YouTube, accusing them of fuelling a youth mental health crisis (claims promptly rebuffed by Google and Snapchat). In Brazil, violent protests by supporters of the former president Jair Bolsonaro prompted fresh scrutiny of how the far right organises on social media. The continuing fallout from the arrest, in Romania, of the British kickboxer-turned-online misogynist Andrew Tate on rape and human trafficking charges has revived questions about how he built his cult online following, and about the impact of his graphic descriptions of violence against women on impressionable teenage boys.
Parts of this debate can feel simplistic. Many parents instinctively feel that social media is making their teenagers unhappy, but that’s not the same as proving a direct causal link. Tate, now banned from several major platforms including YouTube, seems to have successfully tapped into the hatred and fear that some insecure men harbour towards women, but he didn’t invent misogyny: he is more pustulant symptom than cause of this age-old disease. That said, there will be more Tates to come, and the handling of this one does not inspire confidence.
Challenged by the Labour MP Alex Davies-Jones in the Commons about how he plans to counter the “radicalisation of young men” online, Rishi Sunak insisted he was proud of what the online safety bill would achieve. But Labour is unconvinced, tabling a rash of Commons amendments to what it sees as a weak, watered-down version of the original bill conceived by the former culture secretary Nadine Dorries; in the Lords, Tory peers will seek to toughen up its provisions on online pornography and the promotion of violence against women and girls.
Where Dorries’s original bill imposed an overarching duty on platforms to tackle not just illegal content but also the broader grey area defined as “legal but harmful” material, the revised version from her successor, Michelle Donelan, simply obliges platforms to remove legal content explicitly banned under their own policies. That’s significant, because while most big tech platforms now have rules outlawing hate speech against minorities, however poorly enforced, policies on misogyny are often less well developed – perhaps partly, as Davies-Jones points out, because it’s not defined as a hate crime in law. Crucially, such policies also rely on the whim of owners. Tate was banned from Twitter in 2017, but welcomed back under new owner and free speech champion Elon Musk.
The revised bill also proposes giving adults new options to screen out abusive or distressing content that isn’t actively illegal. But Davies-Jones argues that’s a crude tool that fails to tackle the complex way algorithms amplify one voice over another or drive material towards vulnerable users. That process is highlighted by the case of 14-year-old Molly Russell, who killed herself after repeatedly viewing images of self-harm on Instagram.
The inquest into Molly’s death heard that the more she clicked on this content, the more was pushed into her feed, saturating it with bleak and hopeless images. That’s not a bug but a feature of an addictive model that relies on deducing what we like and endlessly offering more of the same. While ministers insist a new offence of promoting self-harm will criminalise some of the worst things Molly saw, this pattern of being sucked down an algorithmic rabbit hole has now been described too often for comfort, in cases ranging from terrorist radicalisation to the kind of Covid conspiracy theories that Bridgen has seemingly embraced.
Dorries’s original bill may have had its flaws, but her fundamental instinct was right: social media companies aren’t special. Like any other legal industry, they operate with the consent of wider society and should be held accountable for any damage they do. In decades to come, I suspect we’ll look back on the era of the online free-for-all not with nostalgia but with bewilderment that the penny took so long to drop.