The UK's proposals to regulate online platforms: An update from the consultation

On 12 February 2020, the UK Government reiterated its commitment to creating one of the world's most ambitious regulatory frameworks for online content. In a joint publication, the Department for Digital, Culture, Media and Sport and the Home Office provided a summary of the feedback from last summer's consultation on the Online Harms White Paper. The White Paper proposed that platforms that allow users to share content or interact with one another would be subject to a statutory duty of care to keep users (and others) safe from "online harms".

Though the publication of the consultation feedback set out few concrete proposals, it did signal the direction of travel in many areas. The UK Government has said it wants to lead the world in regulating online harm and the proposed regulatory regime is far more comprehensive and wide-reaching than any equivalent regulatory regime in other Western countries. It borrows some concepts from financial services, but also others from telecoms and even health and safety legislation. This gives an indication of how the proposals may be applied in practice.

Speak to one of our key contacts to discuss the proposals and how they might impact your business.

Online harm legislation

The 6 key questions to consider


Who will have to comply with the regulatory regime?

The regulatory regime is likely to apply to platforms that allow users to share or discover user-generated content or interact with each other online. This will clearly include many social media and photo/video sharing platforms, but could also cover gaming platforms, Cloud-based storage sites and other online services.

Whether private communication channels will be subject to some degree of regulation (and what is meant by "private channels") remains an open question.

Once the regime is implemented, the proposal is that the regulator (likely Ofcom) will take a risk-based approach: focusing initially on those platforms it perceives to pose the greatest risk of harm, whether due to their size or known issues with their approach to preventing online harms.


What will they have to do to comply?

Regulated platforms will have to comply with a statutory duty of care. This will require them to take reasonable steps to keep users (and other persons) safe from online harms. Firms will be required to take action that is proportionate to the severity and scale of the potential harm.

The focus of the regime will be on the systems and processes firms have in place, rather than on individual pieces of content or individual complaints. Though firms will be required to have effective mechanisms to allow users to report harmful content and challenge takedown decisions, the regulator will not adjudicate individual complaints.

The expectations of how platforms will fulfil their duty of care will differ for illegal content and content that is "not illegal but harmful":

Illegal content
  Examples from the publication: child sexual exploitation and abuse ("CSEA") material; terrorist content.
  Proposed requirements to comply with the duty of care:
  • Minimise the risk of illegal content appearing
  • Remove any illegal content that does appear "expeditiously"
  • Take particularly robust action for CSEA and terrorist material

Legal (but harmful) content
  Examples from the publication: online bullying; intimidation in public life; self-harm; suicide imagery.
  Proposed requirements to comply with the duty of care:
  • No requirement to remove legal content
  • Firms to decide what type of content or behaviour is acceptable on their service and set this out in clear and accessible terms and conditions
  • Firms to enforce this effectively, consistently and transparently

To guide firms on how to comply with the duty of care, the regulator will publish detailed codes of practice on the systems, procedures, technologies and other measures firms should adopt. If firms choose to comply in a different way, they will need to explain to the regulator how their approach will deliver the same (or a better) outcome. To enable the framework to respond to emerging harms, there are unlikely to be codes of practice for every type of harmful content, but there will at least be codes on CSEA and terrorist content.


What is meant by the term "online harms"?

The Online Harms White Paper listed 23 online harms (page 31) that may fall within the scope of the regime: ranging from CSEA and terrorist material to disinformation and cyberbullying/trolling. Though the feedback on the consultation emphasised the need for clarity on which harms will be in scope, this has yet to be clarified.

What happens if firms don't comply?

If firms fail to meet the duty of care, the regulator will have a range of sanctions at its disposal. These may include the power to impose civil fines (tied to metrics such as annual turnover or the volume of illegal material hosted), the power to require firms to rectify failings and possibly even powers to disrupt non-compliant platforms (including the potential for the regulator to require ISPs to block non-compliant websites). The proposal suggests that the regulator will use its powers in a tiered manner, resorting to the most disruptive powers only as a last resort.

The regime will likely include some kind of appeals mechanism against enforcement decisions, but its form is still being considered. Whether the regulator will accept "super-complaints" from certain designated bodies remains an open question, as does the issue of whether firms situated outside the UK will be required to appoint a "nominated representative" in the UK to assist in enforcing compliance with the regime.

One of the most controversial proposals in the Online Harms White Paper was the suggestion that firms may be required to nominate a senior executive with accountability for complying with the duty of care. As under the Senior Managers Regime in financial services, if the nominated executive did not take reasonable steps to ensure their firm complied with this duty, they could face personal liability in the form of a civil fine (or possibly even criminal sanctions). Again, whether this proposal will be introduced remains an open question.


What else is covered by the proposals?

Once enacted, the proposals will also require regulated firms to take several other steps, including potentially being required to contribute to the costs of the regulator and producing annual transparency reports in a form determined by the regulator.

The regulator too will be expected to take other steps, including advising firms on how to comply with the duty of care, promoting innovation and child safety online, developing a framework on "safety by design" and reporting on its activities to the public and Parliament.


What happens next?

Over the coming months, the UK Government plans to continue taking iterative steps to develop the proposals. This will include:

  • engaging further with the industry and other stakeholders;
  • publishing a full response to the Online Harms White Paper consultation in the spring;
  • publishing interim codes of practice on tackling CSEA and terrorist content "in the coming months". Ahead of any legislation, compliance with these will be voluntary but the Government encourages firms to comply as soon as they can;
  • publishing the Government's own transparency report; and
  • continuing to consider wider issues on regulating digital technologies, including electoral integrity.

The Government has emphasised its commitment to maintaining momentum and introducing the regime in the near future. The coming months will therefore be a crucial period for the industry and stakeholders to ensure they provide their input on the proposals.
