The aim? To make the internet safer for British users by protecting adults and children from harmful and illegal content.
The regulator? UK Office of Communications (Ofcom).
The target? Social media and tech companies whose executives could face jail time or steep penalties within months of the Online Safety Bill becoming law.
Generally, there's a need for greater internet regulation globally to minimize the harmful impact of certain online content. For instance, a study published in the Journal of Information Technology found that young people's exposure to the harmful aspects of the internet needs to be reduced.
Similarly, a study published in Nature Communications indicates a correlation between social media use and lower life satisfaction among UK teens and young adults. The UK's Online Safety Bill has been a natural response to these eSafety concerns.
Although there have been complaints that the Bill's structure is too complex, its key focus remains apparent: to provide greater protection to British internet users by regulating technology providers and expanding their duties and responsibilities.
For starters, online platforms have the task of removing illegal content, such as material relating to terrorism or child pornography. In addition, all target platforms must protect users against legal but potentially harmful content – for instance, topics that can trigger self-harm or eating disorders.
Platforms will need strict age verification methods to prevent young children from accessing age-inappropriate content, such as pornography. Another significant issue the Bill addresses is that of online scams. As one study published in the Journal of Criminology notes, many people fall victim to online fraud and cons in the UK.
Fortunately, the Bill will now hold high-risk platforms accountable for implementing systems and processes to protect against fraudulent and harmful adverts. However, the government will likely make some alterations and refinements as the Bill passes through Parliament before becoming law.
Although the draft Bill was introduced in 2021, enforcement is likely to be in place by the end of 2022. That means all companies within the scope of the Bill need to sit up and take notice quickly. The list includes any social media platforms, search engines, pornography sites, and messaging apps that serve the UK market, regardless of where the company is based.
These service providers must evaluate the current content on their sites and take down any illegal content, disinformation, or illegal search terms. Then, they need to rewrite their terms and conditions to specify, and enforce, what appropriate, legal, and non-harmful content can be shared on their platforms.
In addition, platforms with pornographic or violent content must restrict children's access to their sites while giving adults the option to verify their identity and limit which accounts they interact with online. Users will also have an easier time submitting complaints and appealing when their posts are removed from a site.
This Bill might seem like a tall order to tech companies that had previously turned a blind eye when people shared harmful content on their platforms. However, the Bill comes with a "shape up or ship out" demand. For instance, senior executives who fail to comply with the regime could face steep penalties or jail time, and their companies could be blocked from the UK market.
The Bill's enforcer is none other than Ofcom, the UK's national communications regulator, which will be granted a range of powers to enforce the regime.
The intention behind the UK's Online Safety Bill is well-meaning since service providers have a responsibility to ensure the safety of their users. However, despite the many positives, the Online Safety Bill has raised some concerns about whether Ofcom's duties and powers will infringe on the freedom of expression.
The main issue is that the definition of "legal but harmful" is open to interpretation, setting many companies up for failure. For example, how do companies know which content is deemed harmful and should be removed? Anyone could argue that some topics regarding mental health are "legal but harmful."
In addition, many platforms now have the burden of policing themselves, which might lead to a lot of second-guessing about whether the content on their site is illegal or harmful.
It's perhaps overambitious to attach such dire consequences to vague, complex, and subjective criteria. Moreover, censoring legal speech might violate human rights.
Fortunately, the UK Government's draft Online Safety Bill is still undergoing scrutiny, and there's an expectation that the government will address some of these issues. Indeed, a House of Commons Digital, Culture, Media and Sport Committee report has already flagged urgent concerns raised by the Bill that the government needs to address.
The UK's Online Safety Bill is a gamechanger and a huge relief to critics who strongly felt that most internet companies were doing little to moderate harmful content on their sites. However, the Bill's full impact on the UK market is yet to be seen. Still, hopefully, the outcome will be increased accountability for online platforms, better safety for users, and protection of freedom of speech and human rights.
BlueCheck’s industry-leading identity verification infrastructure enables merchants to grow their business faster. As we serve a wide variety of industries, our solutions are custom-tailored to the unique needs of our customers, including PACT Act and eCommerce compliant offerings.