Britain passed a sweeping law on Tuesday to regulate online content, introducing age verification requirements for porn sites and other rules aimed at reducing hate speech, harassment and other illegal content.
The Online Safety Bill, which also applies to terrorist propaganda, online fraud and child safety, is one of the most ambitious attempts by a Western democracy to regulate online expression. Around 300 pages long, the new rules took more than five years to develop, sparking intense debates over how to balance free speech and privacy against the goal of banning harmful content, particularly content aimed at children.
At one point, messaging services including WhatsApp and Signal threatened to abandon the UK market altogether unless provisions of the bill seen as weakening encryption standards were changed.
The UK law goes further than efforts elsewhere to regulate online content, requiring companies to proactively search for objectionable content and judge whether it is illegal, rather than requiring them to act only after being alerted to illicit content, according to Graham Smith, a London lawyer specializing in internet law.
It’s part of a wave of rules in Europe aimed at ending an era of self-regulation in which tech companies set their own policies on what content can stay online or be removed. The Digital Services Act, a European Union law, recently came into force and requires companies to more aggressively police their platforms against illegal content.
“The Online Safety Bill is a game-changing piece of legislation,” Michelle Donelan, Britain’s Technology Secretary, said in a statement. “This Government is taking a huge step forward in our mission to make the UK the safest place in the world for the internet.”
British politicians have been under pressure to adopt the new policy as concerns grew about the mental health effects of internet and social media use among young people. Families who blamed their children’s suicide on social media were among the bill’s most aggressive advocates.
Under the new law, content aimed at children that encourages suicide, self-harm and eating disorders must be restricted. Pornography companies, social media platforms and other services will be required to introduce age verification measures to prevent children from accessing pornography, a change that some groups say would harm the availability of information online and would violate privacy. The Wikimedia Foundation, the operator of Wikipedia, said it would not be able to comply with the law and could be blocked as a result.
TikTok, YouTube, Facebook and Instagram will also have to introduce features allowing users to choose to encounter less of certain kinds of harmful content, such as material relating to eating disorders, self-harm, racism, misogyny or anti-Semitism.
“At its heart, the bill contains a simple idea: providers should consider the foreseeable risks their services create and seek to mitigate them – as many other industries already do,” said Lorna Woods, professor of internet law at the University of Essex, who helped draft the law.
The bill has drawn criticism from tech companies, free speech activists and privacy groups, who say it threatens free speech because it will incentivize companies to remove content.
Questions remain about how the law will be enforced. This responsibility falls to Ofcom, the UK regulator responsible for overseeing television broadcasting and telecommunications, which must now set rules on how it will monitor online safety.
Companies that fail to comply face fines of up to 18 million pounds, or about $22.3 million, or up to 10 percent of global revenue, whichever is higher. Company executives could face criminal prosecution if they fail to provide information during Ofcom investigations or if they fail to comply with rules relating to child safety and the sexual exploitation of children.