Broad range of intermediaries in scope
Which companies fall within scope varies from regime to regime. All eight of our jurisdictions cover social media platforms, cloud hosting services and video content sharing platforms. Germany draws the line there, but the other seven jurisdictions also cover video games with user interaction, online marketplaces and search engines. Australia, Ireland, Singapore, the UK and possibly the EU also cover private user-to-user interactions. Both the UK and EU proposals seek to impose more stringent obligations on larger platforms.
Some consensus on the types of harm that users need to be protected from
Each jurisdiction tackles the question of what content is harmful in a different way, but illegal content (such as terrorism-related content or child sexual abuse material) is deemed harmful by all. Beyond that, certain regimes also seek to regulate the grey area of content which is 'lawful but harmful', but what falls within this category varies significantly between jurisdictions.
Certain regimes focus on specific content and others on systems and processes
The German, Australian and Singaporean regimes all impose rules on individual pieces of content, in the form of obligations to take down or disable access to specific items quickly. In contrast, the EU (for online platforms and very large online platforms), Irish and UK regimes impose requirements on the overall systems and processes that a platform must have in place, requiring platforms to actively assess the risks of harmful content and to take steps to mitigate those risks.
What compliance involves
While there are fundamental differences in how the overarching obligations are framed, certain requirements are relatively common across the various regimes, including the obligation to remove or block certain types of content once it has been reported to the platform. Many of the jurisdictions require platforms to have a system in place for user reporting of certain types of content and to report information to the authorities about content found on the platform. Transparency reporting and a 'designated authorised person' requirement are also common. The UK and EU proposals also require the largest platforms to conduct a holistic risk assessment.
The consequences of non-compliance
All of the proposed and current regimes that we considered, except in the U.S., will allow authorities to levy financial penalties for failure to comply, and many of these penalties are significant, including fines calculated as a percentage of global annual turnover. Some regimes go further, allowing for corporate criminal liability and even fines or criminal liability for individual employees. Some jurisdictions also provide for the 'nuclear' option of blocking access to the platform in that jurisdiction altogether.
The next few years will see the introduction of a number of new laws regulating online harms. For some of these, governments have given an indication of timing; for others, we await more concrete details of what happens next. Reforms are expected within the next two years in the UK, EU, Ireland, Germany and Australia, and reform in this area is a topic for debate under the new administration in the U.S. In the meantime, the need for action is sufficiently pressing that many platforms are taking their own steps to reduce harmful content ahead of the introduction of the most ambitious regimes.