Online Harms: A comparative analysis

A comparative analysis of the regulation of online harms in eight key jurisdictions

Online platforms are under scrutiny like never before, with a wave of regulation sweeping the digital economy and ever more regulators seeking to intervene. Online harms is one of the newest frontiers in the trend towards greater regulation of online content. This stems from the realisation in recent years that content on the internet can cause real harm and that the challenge is global. Posts promoting extremism have been linked to acts of terror; campaigns of disinformation and “fake news” have dogged democratic elections; and charities and governments have drawn attention to the horrifying volume of child sexual abuse imagery circulated online. The risks of these “online harms” came into even sharper relief in 2020, as we spent more time online than ever before.

A consensus has emerged between governments, societies and many of the major platforms that more needs to be done to combat online harms. Many governments across the globe are looking to replace the current patchwork of discrete laws and voluntary initiatives with more holistic regulation. The challenge for regulators is to perform a regulatory balancing act: protecting against harm while upholding fundamental human rights. This challenge was highlighted by recent events in the U.S., where, in the absence of regulation, tech companies have been left to determine where to draw the line between potential harm to their users and freedom of speech.

In our new publication, “Online Harms: A comparative analysis”, we look at eight key jurisdictions in this new frontier: analysing the current position in Australia, France, Germany, Singapore and the United States, as well as bold proposals put forward by the EU, Ireland and the United Kingdom. While there is no consensus across these jurisdictions, there are some common themes. We navigate the complex landscape and analyse, compare and contrast these regimes with both a thematic and a country-by-country review.

"Though there are some similarities in how different countries are tackling online harms, there are major differences too: most notably between those which impose obligations in relation to individual pieces of content and those that focus on the overall systems and processes that online platforms must put in place. Complying with all the regimes while maintaining a consistent user experience across the globe will present major challenges for many platforms. However, platforms are already making great strides in reducing the amount of harmful content online ahead of the most ambitious regulatory proposals coming into force."

Ben Packer

Partner, London

Download your copy of our full report or keep scrolling for a summary of our key insights.

The UK’s Online Safety Bill: At a glance summary

Download your copy of our infographic summarising the key takeaways from the draft UK Online Safety Bill


Key insights

1. Broad range of intermediaries in scope

The question of which companies are in scope varies for each regime. The services in scope in all eight of our jurisdictions include social media platforms, cloud hosting services and video content sharing platforms. This is where Germany draws the line, but the other seven jurisdictions also cover video games with user interaction, online marketplaces and search engines. Australia, Ireland, Singapore, the UK and possibly the EU also cover private user-to-user interactions. Both the UK and the EU proposals seek to impose more stringent obligations on larger platforms.
2. Some consensus on the types of harm that users need to be protected from

Each country tackles the question of what content is harmful in a different way, but illegal content (such as terrorism-related content or child sexual abuse material) is deemed harmful by all. Beyond that, certain regimes also seek to regulate the grey area of content which is “lawful but harmful”, but what is considered to fall within this category varies significantly from jurisdiction to jurisdiction.
3. Certain regimes focus on specific content and others on systems and processes

The German, Australian and Singaporean regimes all impose rules on individual pieces of content in the form of obligations to take down or disable access to content quickly. In contrast, the EU (for online platforms and very large online platforms), Irish and UK obligations impose requirements on the overall systems and processes that a platform must have in place. This requires platforms to actively assess the risks of harmful content and to take steps to mitigate those risks.
4. What compliance involves

While there are fundamental differences in how the overarching obligations are framed, there are some relatively common obligations imposed on platforms across the various regimes, including the obligation to remove or block certain types of content once it has been reported to the platform. Many of the jurisdictions require platforms to have a system in place to allow for user reporting of certain types of content and to report information to authorities about content found on the platform. Transparency reporting and a “designated authorised person” requirement are also common. The UK and EU proposals also require the largest platforms to conduct a holistic risk assessment.
5. The consequences of non-compliance

All of the proposed and current regimes that we considered – except in the U.S. – will allow authorities to levy financial penalties for failure to comply, many of which are significant, including fines calculated as a percentage of global annual turnover. Some go further and allow for corporate criminal liability, and even fines or criminal liability for individual employees. Some jurisdictions also provide for the “nuclear” option of blocking access to the platform in that jurisdiction altogether.
6. Looking ahead

The next few years will see the introduction of a number of new laws regulating online harms. For some of these, governments have given an indication of timing; for others, we are awaiting more concrete details of what happens next. Reforms are expected within the next two years in the UK, EU, Ireland, Germany and Australia, and reform in this area is a topic for debate under the new administration in the U.S. In the meantime, the need for action is so immediate that many platforms are taking their own steps to reduce harmful content ahead of the most ambitious regimes being introduced.