
The Online Safety Act: Does this present a difficult balancing act for online service providers?


Introduction

The Online Safety Act (“OSA”) aims to make the internet a safer place, protecting adults and children from illegal and harmful content by making online service providers such as social media companies more accountable for content published on their sites[1]. Despite the positive intentions, the OSA may have unintended consequences. In particular, service providers will face the difficult task of balancing the duty to protect users from illegal and harmful content against the duty to protect freedom of expression.

The OSA became law on 26 October 2023.

What duties will your organisation have under OSA?

The OSA regulates user-to-user services (services that allow content posted by one user to be viewed by another user (s.3(1) OSA)) and search services (e.g. search engines (s.3(4) OSA)). This includes services registered outside of the UK that have links to the UK. There are certain exemptions; for example, email and SMS services will not be regulated by the OSA.

There are general duties for all service providers; however, some service providers will be given additional duties determined by whether they are Category 1, Category 2a or Category 2b services[2]:

  1. Category 1 – These are the largest user-to-user services and are more high risk (for example, major social media companies). Higher duties of care will be placed on these service providers.
  2. Category 2a – These are the largest search services.
  3. Category 2b – These are services that do not fall within Category 1 or 2a but still have risk factors.

The Government estimates that most service providers will not be placed in any of the above categories, and therefore will only need to adhere to the general duties. The regulator will publish a list of services that fall into each category in due course.

Key Aspects of the OSA

The OSA has the following aims[3]:

  1. Protect adults and children from illegal content – All service providers will have to carry out a risk assessment to establish the likelihood of illegal content being published on their platforms. Illegal content is defined in s.59 of the OSA and refers to offences relating to terrorism, child exploitation and abuse amongst other offences. Following the risk assessment, steps will need to be put in place to remove illegal content and prevent (as far as possible) any harm being caused.
  2. Protect children from inappropriate and harmful content – Any service that is likely to be accessed by children will need to undertake a child risk assessment to identify the likelihood of children accessing content that may be harmful to them. Harmful content is defined in s.234 of the OSA and means any content that causes physical or psychological harm. Processes will then need to be put in place to reduce the chances of children seeing such content, for example age verification.
  3. Enable adults to protect themselves from harmful but legal content – Providers must allow adults to control the content they view and be able to block the appearance of content produced by unverified users. The ability to control content should be proactively advertised to users. This duty applies to Category 1 services only (see the categories above).
  4. Require service providers to produce transparency reports – Service providers will need to provide annual reports detailing the information requested by the regulator. This could be information such as any incidents where users have been exposed to illegal and/or harmful content.

Whilst the above may restrict what can be published on online platforms, the OSA is not an attempt to censor speech that is legal[4]. The OSA, under s.22 and s.33, also aims to protect freedom of expression by imposing a duty on service providers to protect this right. This will be more difficult for some service providers than others. For example, a social media company is likely to have defamatory content published on its site by users. Whilst this content could be harmful, it might also be someone’s honest opinion, which they are entitled to express. As such, it may prove increasingly difficult for service providers to find a balance between policing content and allowing users to exercise their freedom of expression.

Compliance

Ofcom has been appointed as the regulator and will publish codes and guidance for service providers in the coming months. Ofcom should receive its powers within two months of Royal Assent (received on 26 October 2023), i.e. before the end of 2023.

Ofcom’s guidance and codes

Ofcom will publish its guidance and codes in three phases, in consultation with service providers, as set out below.

Phase One: Illegal Harms Duties

Ofcom plan to set out measures that service providers can take to mitigate the risk of exposing users to illegal harm such as terrorism and fraud. This will include measures to protect children from child sexual exploitation and abuse. As part of phase one, on 9 November 2023, Ofcom will publish codes and guidance on these duties including:

  1. draft guidance to service providers on how to conduct their own risk assessments;
  2. draft codes on how service providers can mitigate the risk of harm; and
  3. draft enforcement guidelines.[5]

Phase Two: Child Safety, Pornography and the Protection of Women and Girls

Ofcom will detail its child protection duties in two parts. First, online pornography service providers and other interested stakeholders will be able to read and respond to Ofcom’s draft guidance on age assurance from December 2023. Secondly, service providers will be able to read and respond to draft codes of practice relating to the protection of children, around Spring 2024.[6]

Further to this, Ofcom expect to consult with service providers on a register of risks and risk profiles relating to harms to children and draft risk assessment guidance focusing on children’s harms.

Phase Three: Transparency, User Empowerment, and Other Duties on Categorised Services

The third phase of Ofcom’s guidance and codes will focus on the additional duties that will apply to Category 1, Category 2a or Category 2b services, including those relating to transparency reporting, user empowerment and fraudulent advertising.[7]

Ofcom aim to:

  1. publish a register of categorised services by the end of 2024;
  2. publish draft proposals regarding the additional duties applicable to the categorised services in early 2025; and
  3. issue transparency notices in mid-2025.[8]

Failure to comply

In its “Roadmap to Regulation”, Ofcom states that the presence of illegal or harmful content will not lead Ofcom automatically to conclude that a service provider has failed to uphold its duties. Ofcom recognises that it is impossible to completely prevent the dissemination of such content and that the aim should be to minimise the risk of it appearing. Instead, when determining whether a service provider has failed to act in accordance with its duties under the OSA, Ofcom will look at the measures the service provider has in place to protect users.

Under the OSA, a service provider found to be in breach could be fined up to £18 million or 10% of its global revenue, whichever is greater. Additionally, officers or senior executives of the company could face a maximum two-year prison sentence.

Comment

The OSA will hold service providers to a higher standard than ever before in the UK. Service providers will need to actively tackle illegal and harmful content posted on their platforms otherwise serious penalties may be imposed by Ofcom.

Service providers should await further guidance and codes from Ofcom on the OSA. In the meantime, providers should ensure staff are aware of the implementation of the OSA and are suitably trained on the key duties detailed above.

How will the OSA interact with existing law?

Under s.5 of the Defamation Act 2013 (“DA”), website operators/service providers have a defence if an action is brought against them in respect of a defamatory statement posted on their site and they can show that they are not the author. Nevertheless, the OSA may undermine this protection. A service provider could still commit an offence under the OSA if defamatory content published on its platform is classed as illegal and/or harmful and it did not have adequate measures in place to protect users from exposure to such content.

As a result, the OSA may encourage service providers to be more proactive in identifying and stopping users from targeting others with defamatory statements, or sharing defamatory content, which damages the reputation of others. However, service providers will need to be careful not to infringe on users’ right to freedom of expression, which is protected under the Human Rights Act 1998 (“HRA”), and now the OSA, as over-policing content could curtail this right.

In many cases it will be difficult for service providers to determine whether a statement is defamatory. Service providers can attempt to tackle this issue by improving the reporting procedures available on their platforms, giving users who believe they have been targeted with defamatory statements an easy way to report it so that the specific statements can be investigated. In turn, this will give service providers time to consider their responsibilities under the DA, OSA and HRA before deciding whether the content complained of should be removed, reducing the chances of rash decisions which could infringe upon these laws.
