Finally, the UK’s online safety bill gets its day in Parliament – here’s what you need to know

Underwhelming and unsurprising. Those were the words Molly Russell’s father used to describe the response of Meta, Pinterest and Snap to a series of recommendations from the coroner who presided over the inquest into his daughter’s death.

The online safety bill, which is debated in the UK parliament today (Tuesday) before moving to the House of Lords, has long been where internet safety campaigners have pinned their hopes, rather than on industry self-reform.

The bill has changed substantially, so here is a breakdown of how it is currently structured after a series of tweaks in recent months.

Protecting children

The bill requires all tech firms within its scope – services that publish user-generated content, from Facebook to TikTok, plus search engines – to protect children from content and activity that causes them harm, such as child sexual abuse material. Firms must also ensure that any content that could be accessed by children but is not illegal, such as content related to self-harm, is age-appropriate.

Age assurance, the technical term for checking someone’s age online, is going to be a challenge for the big platforms because they will also be required to enforce their age limits, typically 13 for the major social media services. They fear that strict vetting, whether by requesting further ID or using face scanning, will put users off and therefore hit advertising revenue. Campaigners say that tech platforms do not do enough either to check ages or to shield teenage users from harmful content. Under one change announced in November, companies will have to set out their age assurance measures in the terms of service that users sign up to.

Extra changes announced in November include requiring tech firms to publish risk assessments of the dangers their sites pose to children. Under the structure of the act, platforms have to carry out risk assessments of the harms their services might cause to children, then explain how they will tackle those risks in their terms of service, in a process that will be vetted by the communications watchdog, Ofcom.

The regulator will also have the power to make companies publish enforcement notices they receive for child safety breaches. Under the terms of the act, Ofcom has the power to fine companies up to £18m or 10% of worldwide turnover (that would be more than $10bn in the case of Mark Zuckerberg’s Meta), or even to block sites in extreme cases.

The biggest change has come from an amendment backed by Conservative rebels that exposes tech executives to criminal liability for serious breaches of child online safety. On Monday night the government reached a compromise deal in which criminal charges could be brought against bosses who commit persistent breaches of their duty of care to children.

Legal but harmful content

The bill has dropped the imposition of a duty of care on major tech platforms – such as Instagram and YouTube – to shield adult users from material that is harmful but falls below the threshold of criminality, such as some forms of racist or sexist abuse, also known as “legal but harmful” content. That provision had been at the centre of critics’ concerns that the legislation was a “censor’s charter”.

Instead, Ofcom will ensure that platforms’ terms of service are upheld. So if a tech platform tells users that it does not allow content that encourages eating disorders (an example of legal but harmful material) then it will be required under the act to deliver on that pledge, or face a fine.

Users will also have the right to appeal content removal or an account ban. “This will protect against companies arbitrarily removing content or banning users, and provide due process if they do,” the government says.

That represents extra work for tech firms, as does another compromise: adults who don’t want to see certain types of legal but potentially upsetting material must be given the option of reducing its appearance on their feeds. This type of content will be listed by the government and includes material that is abusive, or incites hatred on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation.

Setting this up, and enforcing it, will require hard work and investment at the tech firms and at Ofcom. You would imagine that the latter, in particular, is going to be busy.

New criminal offences

People who use social media posts to encourage self-harm face criminal prosecution under a new offence, covering England and Wales, introduced by the bill. It will also criminalise the sharing of pornographic “deepfakes” – images or videos manipulated to resemble a person – as well as the taking and sharing of “downblousing” images, where photos are taken down a woman’s top. Cyberflashing will also be made illegal.

One harmful communications offence has been taken out. This would have targeted people who send a message or post with the intention of causing “serious distress”. To its Tory critics this was legislating for “hurt feelings”.

Generally, the bill places a duty of care on all firms to protect adult users from illegal content such as child sexual abuse images, revenge pornography, threats to kill, selling firearms and terrorist material. Tech platforms have to proactively prevent that material from reaching users.

The bill has critics on both sides. Tech companies say criminal sanctions threaten investment in the UK, and the Samaritans, the mental health charity, says removing the duty of care on legal but harmful content for adults will squander “a vital opportunity to save lives”. But the fact that criminal liability for endangering child safety online is set to go on the statute books shows that MPs agree with Molly’s father, Ian Russell.

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Tuesday.


Dan Milmo