Children’s online safety in the UK is having its seatbelt moment. On Friday social media and other internet platforms will be required to implement safety measures protecting children or face large fines.
It is a significant test for the Online Safety Act, a landmark piece of legislation that covers the likes of Facebook, Instagram, TikTok, YouTube and Google. Here is a guide to the new rules.
What is happening on 25 July?
Companies within the scope of the act must introduce safety measures to protect children from harmful content. This means all pornography sites must have in place rigorous age-checking procedures. Ofcom, the UK communications regulator and the act’s enforcer, found that 8% of children aged eight to 14 had visited an online pornography site or app over a month-long period.
Social media platforms and large search engines must also prevent children from accessing pornography and material that promotes or encourages suicide, self-harm and eating disorders; such content has to be kept off children’s feeds entirely. Hundreds of companies are affected by the rules.
Platforms will also have to suppress the spread of other forms of material potentially harmful to children, including content that promotes dangerous stunts, encourages the use of harmful substances or enables bullying.
What are the recommended safety measures?
Measures under the codes include: algorithms that recommend content to users must filter out harmful material; all sites and apps must have procedures for taking down dangerous content quickly; and children must have a “straightforward” way to report concerns. Adherence is not mandatory if companies believe they have valid alternative measures to meet their child safety obligations.
The “riskiest” services, which include big social media platforms, could be required to use “highly effective” age checks to identify under-18 users. If social media platforms that contain harmful content do not introduce age checks, they will need to ensure there is a “child appropriate” experience on the site.
X has said that if it is unable to determine whether a user is 18 or over, that user will be defaulted into sensitive content settings and will not be able to view adult material. It is also introducing age estimation technology and ID checks to verify whether users are under 18. Meta, the owner of Instagram and Facebook, says it already has a multilayered approach to age checking. This includes its teen account feature – a default setting for anyone under 18 – that it says already provides an “age appropriate” experience for young users.
Mark Jones, a partner at the law firm Payne Hicks Beach, said: “Ultimately it is going to be for Ofcom to decide whether these measures meet the requirements under the OSA [Online Safety Act] and, if not, to hold the companies to account.”
The Molly Rose Foundation, a charity established by the family of the British teenager Molly Russell, who took her own life in 2017 after viewing harmful content online, said the measures did not go far enough. It has called for additional changes such as blocking dangerous online challenges and requiring platforms to proactively search for, and take down, depressive and body image-related content.
How would age verification work?
Age assurance measures for pornography providers supported by Ofcom include: facial age estimation, which assesses a person’s likely age through a live photo or video; checking a person’s age via their credit card provider, bank or mobile phone network operator; photo ID matching, where a passport or similar ID is checked against a selfie; or a “digital identity wallet” that contains proof of age.
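For readers who want a concrete picture, the sketch below shows in simplified Python how a site might route a visitor through one of these routes. The method names and the evidence each check returns are hypothetical stand-ins for illustration, not any provider’s actual API.

```python
# Illustrative sketch only: routing a visitor through one of the age-assurance
# options Ofcom supports. Field names and return values are hypothetical.
from enum import Enum, auto


class AgeCheckMethod(Enum):
    FACIAL_ESTIMATION = auto()   # likely age inferred from a live photo or video
    CARRIER_OR_BANK = auto()     # confirmation from a credit card provider, bank or mobile operator
    PHOTO_ID_MATCH = auto()      # passport or similar ID matched against a selfie
    DIGITAL_ID_WALLET = auto()   # reusable wallet holding a proof-of-age credential


def is_adult(method: AgeCheckMethod, evidence: dict) -> bool:
    """Return True if the supplied evidence indicates the user is 18 or over.

    The 'evidence' dict stands in for whatever the chosen provider reports back;
    a real integration would call that provider rather than trust self-declared
    data, which Ofcom says is not "highly effective" on its own.
    """
    if method is AgeCheckMethod.FACIAL_ESTIMATION:
        return evidence.get("estimated_age", 0) >= 18
    if method is AgeCheckMethod.CARRIER_OR_BANK:
        return evidence.get("provider_confirms_over_18", False)
    if method is AgeCheckMethod.PHOTO_ID_MATCH:
        return evidence.get("id_matches_selfie", False) and evidence.get("id_age", 0) >= 18
    if method is AgeCheckMethod.DIGITAL_ID_WALLET:
        return evidence.get("wallet_proof_of_age", False)
    return False


if __name__ == "__main__":
    print(is_adult(AgeCheckMethod.FACIAL_ESTIMATION, {"estimated_age": 24}))                    # True
    print(is_adult(AgeCheckMethod.PHOTO_ID_MATCH, {"id_matches_selfie": True, "id_age": 16}))   # False
```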
Ria Moody, a lawyer at the law firm Linklaters, said: “Age assurance measures must be very accurate. Ofcom has said that self-declaration of age, or terms of service saying users must be over 18, are not highly effective measures and so platforms should not rely on these alone.”
What does that mean in practice?
Pornhub, the most-visited provider of online pornography to the UK, has said it will introduce “regulator approved age assurance methods” by Friday. It has yet to say what these methods will be. OnlyFans, another site which carries pornography, already uses facial age verification software. It does not store an image of the user’s face but estimates age using data taken from millions of other images. A company called Yoti provides that software and also does so for Instagram.
Reddit started checking ages last week for its forums and threads which include mature content. It is using technology made by a company called Persona, which verifies age through an uploaded selfie or a photo of government ID. Reddit does not have access to the photos but stores the verification status to avoid users having to repeat the process too often.
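As a rough illustration of that design, this sketch (plain Python, with invented field names and an assumed re-check interval, not Reddit’s or Persona’s actual implementation) shows what storing only the verification outcome, rather than the photo, might look like.

```python
# Simplified sketch of the pattern described above: the platform keeps only a
# verification flag and a timestamp, never the selfie or ID photo itself.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RECHECK_AFTER = timedelta(days=365)  # assumed interval; the real policy is not public


@dataclass
class AgeVerificationRecord:
    user_id: str
    verified_adult: bool   # result reported back by the third-party checker
    checked_at: datetime   # when the check happened; no image is retained


def needs_recheck(record: AgeVerificationRecord | None) -> bool:
    """True if the user has never been checked or the stored result is stale."""
    if record is None or not record.verified_adult:
        return True
    return datetime.now(timezone.utc) - record.checked_at > RECHECK_AFTER


if __name__ == "__main__":
    rec = AgeVerificationRecord("u123", True, datetime.now(timezone.utc))
    print(needs_recheck(rec))   # False: a recent pass is on file, no need to re-upload a photo
    print(needs_recheck(None))  # True: an unknown user must go through the check
```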
How accurate is facial age verification?
Software allows a website or app to set a “challenge” age – such as 20 or 25 – to limit the number of underage people who slip through the net. When Yoti set a challenge age of 20, fewer than 1% of 13- to 17-year-olds were incorrectly let through.
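In code terms, the policy amounts to comparing the estimator’s output against a buffer above 18 rather than against 18 itself, so borderline cases are pushed to a stronger check. The toy sketch below illustrates the idea under assumed numbers; it is not how Yoti actually implements its checks.

```python
# Toy illustration of the "challenge age" idea: the estimator's error margin is
# absorbed by a threshold above 18, and borderline users are sent to a stricter
# fallback (for example a photo ID check) instead of being waved through.
CHALLENGE_AGE = 20  # set above 18 to limit the number of under-18s slipping through


def decide(estimated_age: float) -> str:
    """Return the outcome of a facial age estimate under a challenge-age policy."""
    if estimated_age >= CHALLENGE_AGE:
        return "allow"             # comfortably above the buffer: treated as an adult
    return "fallback_id_check"     # close to or under 18: ask for stronger proof of age


if __name__ == "__main__":
    for age in (17.2, 19.5, 23.0):
        print(age, decide(age))
```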
What other methods are there?
Another direct method is to require users to show a piece of formal identification such as a passport or a driving licence. Again, the ID details do not need to be stored and can be used solely to verify access.
Will every site carrying pornography carry out the age checks?
They should, but many smaller sites are expected to try to ignore the rules, fearing the checks will damage demand for their services. Industry insiders say those ignoring the rules may wait to see how Ofcom responds to breaches before deciding how to act.
How will the child protection measures be enforced?
Ofcom can deploy a range of punishments under the act. Companies can be fined up to £18m or 10% of global turnover for breaches, whichever is greater. In the case of Meta, such a fine would equate to $16bn. Sites or apps can also receive formal warnings. For extreme breaches, Ofcom can ask a court to prevent the site or app from being available in the UK.
Senior managers at tech companies will also be criminally liable for repeated breaches of their duty of care to children and could face up to two years in jail if they ignore enforcement notices from Ofcom.