- First codes of practice and guidance published, firing the starting gun on new duties for tech firms
- Providers have three months to complete illegal harms risk assessments
- Ofcom sets out more than 40 safety measures for platforms to introduce from March
People in the UK will be better protected from illegal harms online, as tech firms are now legally required to start taking action to tackle criminal activity on their platforms, and make them safer by design.
Ofcom has today, four months ahead of the statutory deadline[1], published its first-edition codes of practice and guidance on tackling illegal harms – such as terror, hate, fraud, child sexual abuse and assisting or encouraging suicide[2] – under the UK’s Online Safety Act.
The Act places new safety duties on social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites.[3] Before we can enforce these duties, we are required to produce codes of practice and industry guidance to help firms to comply, following a period of public consultation.
Bold, evidence-based regulation
We have consulted carefully and widely to inform our final decisions, listening to civil society, charities and campaigners, parents and children, the tech industry, and expert bodies and law enforcement agencies, with over 200 responses submitted to our consultation.
As an evidence-based regulator, every response has been carefully considered, alongside cutting-edge research and analysis, and we have strengthened some areas of the codes since our initial consultation. The result is a set of measures – many of which are not currently being used by the largest and riskiest platforms – that will meaningfully improve safety for all users, especially children.
What regulation will deliver
Today’s illegal harms codes and guidance mark a major milestone in creating a safer life online, firing the starting gun on the first set of duties for tech companies. Every site and app in scope of the new laws has from today until 16 March 2025 to complete an assessment to understand the risks illegal content poses to children and adults on their platform.
Subject to our codes completing the Parliamentary process by this date, from 17 March 2025, sites and apps will then need to start implementing safety measures to mitigate those risks, and our codes set out measures they can take.[4] Some of these measures apply to all sites and apps, and others to larger or riskier platforms. The most significant changes we expect our codes and guidance to deliver include:
- Senior accountability for safety. To ensure strict accountability, each provider should name a senior person accountable to their most senior governance body for compliance with their illegal content, reporting and complaints duties.
- Better moderation, easier reporting and built-in safety tests. Tech firms will need to make sure their moderation teams are appropriately resourced and trained and are set robust performance targets, so they can remove illegal material quickly when they become aware of it, such as illegal suicide content. Reporting and complaints functions will be easier to find and use, with appropriate action taken in response. Relevant providers will also need to improve the testing of their algorithms to make illegal content harder to disseminate.
- Protecting children from sexual abuse and exploitation online. While developing our codes and guidance, we heard from thousands of children and parents about their online experiences, as well as professionals who work with them. New research, published today, also highlights children’s experiences of sexualised messages online[4], as well as teenage children’s views on our proposed safety measures aimed at preventing adult predators from grooming and sexually abusing children.[5] Many young people we spoke to felt interactions with strangers, including adults or users perceived to be adults, are currently an inevitable part of being online, and they described becoming ‘desensitised’ to receiving sexualised messages.
Taking these unique insights into account, our final measures are explicitly designed to tackle pathways to online grooming. This will mean that, by default, on platforms where users connect with each other, children’s profiles and locations – as well as friends and connections – should not be visible to other users, and non-connected accounts should not be able to send them direct messages. Children should also receive information to help them make informed decisions around the risks of sharing personal information, and they should not appear in lists of people users might want to add to their network.
Our codes also expect high-risk providers to use automated tools called hash-matching and URL detection to identify child sexual abuse material (CSAM). These tools allow platforms to identify large volumes of illegal content more quickly, and are critical in disrupting offenders and preventing the spread of this seriously harmful content. In response to feedback, we have expanded the scope of our CSAM hash-matching measure to capture smaller file hosting and file storage services, which are at particularly high risk of being used to distribute CSAM.
- Protecting women and girls. Women and girls are disproportionately affected by online harms. Under our measures, users will be able to block and mute others who are harassing or stalking them. Sites and apps must also take down non-consensual intimate images (or “revenge porn”) when they become aware of it. Following feedback to our consultation, we have also provided specific guidance on how providers can identify and remove posts by organised criminals who are coercing women into sex work against their will. Similarly, we have strengthened our guidance to make it easier for platforms to identify illegal intimate image abuse and cyberflashing.
- Identifying fraud. Sites and apps are expected to establish a dedicated reporting channel for organisations with fraud expertise, allowing them to flag known scams to platforms in real time so that action can be taken. In response to feedback, we have expanded the list of trusted flaggers.
- Removal of terrorist accounts. It is highly likely that posts created, shared, or uploaded via accounts run on behalf of terrorist organisations proscribed by the UK government will amount to an offence. We expect sites and apps to remove users and accounts that fall into this category to combat the spread of terrorist content.
Ready to use the full extent of our enforcement powers
We have already been speaking to many tech firms – including some of the largest platforms as well as smaller ones – about what they do now and what they will need to do next year.
While we will offer support to providers to help them to comply with these new duties, we are gearing up to take early enforcement action against any platforms that ultimately fall short.
We have the power to fine companies up to £18m or 10% of their qualifying worldwide revenue – whichever is greater – and in very serious cases we can apply for a court order to block a site in the UK.
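As a purely illustrative sketch of how that penalty cap works (the revenue figures below are hypothetical, not drawn from any real case), the maximum fine is simply the greater of the two amounts:

```python
def max_fine(qualifying_worldwide_revenue: float) -> float:
    """Maximum penalty under the Online Safety Act: the greater of
    a £18m fixed amount or 10% of qualifying worldwide revenue (GBP)."""
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue)

# Hypothetical firm with £500m qualifying revenue:
print(max_fine(500_000_000))  # £50m cap, since 10% of revenue exceeds £18m

# Hypothetical firm with £100m qualifying revenue:
print(max_fine(100_000_000))  # £18m cap, since the fixed floor exceeds 10%
```

In other words, the £18m figure acts as a floor: only firms with qualifying worldwide revenue above £180m face a higher cap.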
Dame Melanie Dawes, Ofcom’s Chief Executive, said:
For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today.
The safety spotlight is now firmly on tech firms and it’s time for them to act. We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year.
Those that come up short can expect Ofcom to use the full extent of our enforcement powers against them.
This is just the beginning
This first set of codes and guidance, which establishes the enforceable regime, is a firm foundation on which to build. In light of the constructive responses we received to our consultation, we are already working towards an additional consultation on further codes measures in Spring 2025. This will include proposals in the following areas:
- blocking the accounts of those found to have shared CSAM;
- use of AI to tackle illegal harms, including CSAM;
- use of hash-matching to prevent the sharing of non-consensual intimate imagery and terrorist content; and
- crisis response protocols for emergency events (such as last summer’s riots).
And today’s codes and guidance are part of a much wider package of protections – 2025 will be a year of change, with more consultations and duties coming into force, including:
- January 2025: final age assurance guidance for publishers of pornographic material, and children’s access assessments;
- February 2025: draft guidance on protecting women and girls; and
- April 2025: additional protections for children from harmful content promoting, among other things – suicide, self-harm, eating disorders and cyberbullying.
Technology Notices consultation
The Act also allows Ofcom, where we determine it is necessary and proportionate, to make a provider use (or in some cases develop) a particular technology to tackle child sexual abuse or terrorism content on their sites and apps. We are consulting today on parts of the framework that will underpin this power.
Any technology we require a provider to use will need to be accredited – either by Ofcom or someone appointed by us – against minimum standards of accuracy set by Government, after advice from Ofcom.
We are consulting on what these standards should be, to help inform our advice to Government. We are also consulting on our draft guidance about how we propose to use this power, including the factors we would consider and the process we will follow. The deadline for responses is 10 March 2025.
END
NOTES TO EDITORS
- UK Parliament set Ofcom a deadline of 18 months after the Online Safety Act was passed, which happened on 26 October 2023, to finalise its illegal harms and children’s safety codes of practice and guidance.
- The Online Safety Act lists over 130 ‘priority offences’, and tech firms must assess and mitigate the risk of these occurring on their platforms. The priority offences can be split into the following categories:
- Terrorism
- Harassment, stalking, threats and abuse offences
- Coercive and controlling behaviour
- Hate offences
- Intimate image abuse
- Extreme pornography
- Child sexual exploitation and abuse
- Sexual exploitation of adults
- Unlawful immigration
- Human trafficking
- Fraud and financial offences
- Proceeds of crime
- Assisting or encouraging suicide
- Drugs and psychoactive substances
- Weapons offences (knives, firearms, and other arms)
- Foreign interference
- Animal welfare
- Information on which types of platforms are in scope of the Act can be found here.
- Research was conducted by Ipsos UK between June 2023 and March 2024 and consisted of: 11 in-depth interviews with children and young adults (aged 14-24) with experience of sexualised messages online; 1 interview with parents of a child that had reported online grooming; and 9 in-depth interviews with professionals working with children and young adults who have reported receiving these messages online.
- We commissioned Praesidio Safeguarding to run deliberative workshops in schools with 77 children aged 13-17.