The UK’s landmark Online Safety Bill is being seen as a test case for how governments around the world can legislate to regulate content on the internet.
Liz Truss was decidedly unsure about elements of the bill, but Rishi Sunak’s government has vowed to bring it back to the Commons before Christmas after making tweaks to the legislation aimed at appeasing concerns around freedom of speech.
What’s in it?
The bill is a “mammoth piece of legislation” that will rewrite the UK’s rules for policing harmful content online, ranging from issues such as threatening behaviour to racist and sexual abuse, said The Verge.
The i news site said the “sweeping new laws” force search engines and tech platforms that host user-generated content, such as Google, Facebook and Twitter, to tackle harmful content, “making it safer for users with the threat of fines and criminal sanctions, including jail terms, for social media executives”.
Last week, the government announced plans to make the sharing of non-consensual pornographic deepfakes illegal. The move is part of a wider initiative to stamp out revenge porn and other forms of “intimate image abuse”, along with strengthened laws against “downblousing”, the taking of explicit images down a woman’s top without consent.
It also plans to introduce a new provision that would make it an offence to share online content encouraging someone to harm themselves, bringing self-harm material in line with communications that encourage suicide, which are already illegal.
The BBC reported that the move had been influenced by the case of Molly Russell, a 14-year-old girl who ended her life in November 2017 after viewing suicide and self-harm content on Instagram and Pinterest.
The Financial Times said the “groundbreaking draft legislation is being watched closely by regulators around the world and has been vigorously opposed by tech companies that could end up facing huge fines if they breach the new law”.
Who and what does it cover?
Gov.uk said the bill “delivers the government’s manifesto commitment to make the UK the safest place in the world to be online while defending free expression”.
It will cover “the biggest and most popular social media platforms, sites such as forums and messaging apps, some online games, cloud storage and the most popular pornography sites and search engines”, which will have tailored duties focused on minimising the presentation of harmful search results to users.
It requires platforms to introduce age verification checks to make sure children cannot access adult material. However, responding to concerns around journalistic safeguards, news publishers’ content will be exempted from the new online safety duties.
Verdict said it is “a bold attempt to introduce a duty of care to limit the spread of illegal content, such as child sexual abuse images, while giving the communication regulator Ofcom the power to impose hefty fines on companies that breach the new rules”.
One of the most popular elements of the bill is the threat of penalties of up to 10% of global turnover for firms that fail to adequately moderate content or to remove illegal content as soon as it is flagged.
It is a drastic step that campaigners say is needed to force Big Tech to take the legislation seriously. But “regardless of what the bill says on paper”, said TechCrunch, “huge questions remain over how platforms will respond to legal duties being placed on them to regulate all sorts of speech – and whether it will boost safety for web users as claimed”.
Under the new regulatory regime, media regulator Ofcom will also be able to audit algorithms that control users’ experience online.
What’s been tweaked?
The amendments are necessary, the government said, because the original plans “would have meant the biggest platforms would have had to not only remove illegal content, but also any material that had been named as legal but potentially harmful”, Sky News reported.
Michelle Donelan, the culture secretary, told the broadcaster the bill in its current form “had a very, very concerning impact potentially on free speech”.
Some critics “had argued it opened the door for technology companies to censor legal speech”, said the BBC.
Conservative leadership candidate Kemi Badenoch had tweeted that the original bill was “legislating for hurt feelings”. David Davis, another vocal critic of the bill from the Tory backbenches, told the BBC he was glad that the legal but harmful duties had been taken out but he still had other “serious worries” about the threat to privacy and freedom of expression which could “undermine end-to-end encryption”.
Is it as groundbreaking as first thought?
Verdict said the bill “sets the democratic world’s strictest internet safety legislation” but it has proved hugely controversial and prompted a furious debate between child safety groups and freedom of speech advocates.
i news reported that the legislation has been “hit by a series of delays after Culture Secretary Michelle Donelan promised to tweak the crucial section of it that deals with ‘legal but harmful’ content online”, sparking fears that “unless the Bill passes the Commons by Christmas, it will be ditched entirely”.
Toby Young in The Critic hailed plans to amend the “legal but harmful” clause as “a major victory for all the free speech groups” but child safety campaigners, who have been pushing the government for years to pass online safety legislation, have raised concerns about the bill being weakened too much.
Yet despite reservations in government over free speech, a majority of MPs and, crucially, the wider public still support the bill, meaning it will most likely pass.
TechCrunch reported on fears among groups outside Parliament of a “looming mess” that “will apply the biggest penalties to UK web users faced with access restrictions like age verification pop-ups and homegrown startups faced with impossibly fuzzy demands and expensive compliance costs”. There are also many, the site said, who argue that “the bill won’t do what’s claimed and protect kids either”.