Tackling preventable harm
Weakened proposals to address harmful content are bad news not just for vulnerable adults, but for children too
Last week MPs and peers, in a private session, were shown some of the suicide and self-harm content proactively recommended to Molly Russell on Pinterest and Instagram in the months before her death.
Not a single person in that packed Lords committee room will have left anything other than deeply disturbed by what was shown. I certainly was; and based on the speeches made by a number of peers at Lords Second Reading, I wasn’t alone.
But this was also a highly necessary and important watch.
The volume, type and availability of content that was algorithmically recommended to Molly is the reality of the content to which children and adults with one or more vulnerabilities have long been exposed. Too much of this material continues to appear in feeds and timelines today.
In recent months, the Online Safety Bill’s provisions relating to harmful content have become increasingly politicised, and in response to sustained opposition from some ministers, backbenchers and tech libertarians, late last year were scaled back.
I’ve rarely been more struck by the gap between political rhetoric and the reality of what unregulated systems and processes mean for children. While some political rhetoric dismisses tackling harmful content as ‘legislating for hurt feelings’, anyone who has witnessed the actual content that Molly saw – just how dangerous and disturbing it is – will fully understand that protecting children and adults with vulnerabilities cannot wait.
More than ever, I’m struck by the urgency of moving beyond the Bill’s highly triangulated approach. Children and adults with vulnerabilities need fundamental protections, not further political compromise.
However, there is a distinct risk that the Government’s revised approach to legislating for harmful content will be readily gamed by those posting with malign intent, and then further weakened by platforms that may be incentivised to scale back their community standards, or to operationalise content threshold decisions in a way that minimises the type and amount of content they deem illegal.
Here are four areas where the Government should be prepared to rethink its approach – and where ministers can demonstrate a commitment to tackling preventable harm by delivering a legislative package that’s demonstrably harm-centred and focussed on risk mitigation.
Molly’s Pinterest and Instagram accounts
During the session, MPs and peers were shown some of the content presented as evidence in last year’s inquest into Molly’s death – content that, in accordance with long-standing guidance, is considered too harmful to be published in the media.
Leigh Day partner Merry Varney, who represents Molly’s family, took attendees through videos and images that Molly viewed on Instagram and Pinterest which the coroner concluded contributed to her death in a more than minimal way.
This material includes depictions of self-harm and suicide that the coroner said glamorised self-harm and were almost impossible to watch. Senior Coroner Andrew Walker also cited the risk that this content could normalise harmful behaviour among those watching it and could misdirect users away from legitimate sources of help.
As Merry described it: “To publish in the media the content Molly engaged with on Instagram and Pinterest would contravene long established guidance in place to protect children and adults from the harm caused by viewing this type of content. It is however imperative that those with the power to regulate companies like Instagram are fully aware of the volume, nature and ease of availability of this harmful content and the ongoing risks to children of being exposed to content which could have fatal consequences.”
Molly’s father Ian said: “This harrowing content was reviewed by a child psychiatrist during last year’s inquest into Molly’s death, who described it as ‘very disturbing, distressing’, adding he had been left ‘unable to sleep well’ for weeks afterwards”.
The nature, volume and velocity of the content Molly engaged with, much of which was algorithmically recommended and in some cases she was encouraged to view through ‘push’ notifications and emails, underscores where the policy and regulatory response needs to go further.
The Senior Coroner found that platforms ‘operated in such a way using algorithms as to result, in some circumstances, of binge periods of images, video clips and text, some of which were selected and provided without Molly requesting them.’ This underlines that the problem is driven primarily by systems and design choices, and is not one that can be addressed through content moderation alone.
Ensuring fit-for-purpose protection for vulnerable adults
Perhaps the most troubling aspect of the material shown wasn’t necessarily that it contained graphic and highly disturbing content, but that during the inquest Meta’s Global Head of Health and Wellbeing Policy Elizabeth Lagone repeatedly said that she didn’t consider much of the content to be in breach of the company’s standards.
That means much of this content still wouldn’t be removed today. And while no version of the Online Safety Bill would have required companies to remove harmful content, as a result of the Government’s most recent policy shift companies will now have to do little more than enforce their own terms and conditions.
Put simply, Meta wouldn’t remove some of the content Molly saw if it were posted in 2023, and it still wouldn’t have to once the regulation takes effect.
I’m struck that much of Meta’s approach to moderating harmful content is informed by a position that views posts about suicide and self-harm as a potentially beneficial means of expression and recovery – a position put forward by some third parties – and that the company then translates this into an operationalised set of content moderation guidelines that is at times light touch to the point of redundancy.
Certainly, there are likely to be heightened risks for those consuming suicide and self-harm material when it is served up in online spaces with largely ineffective moderation standards and amplified algorithmically, and when platforms are left to make or interpret their own rules. Although Meta has sought to downplay the research shared by Frances Haugen, her whistleblowing suggests the company was aware of the negative impacts of its products as they were offered at the time, in particular the role Instagram played in contributing to suicide ideation among some UK and US teens.
Before viewing this material, I was already concerned that the Bill contained a moral hazard, with a clear incentive for companies to simplify their community standards on sensitive subjects, knowing that their regulatory requirements would then be lessened.
But now I fear there is a much more pressing problem - that the legislation’s provisions relating to harmful content could be readily gamed by the companies to the extent that precious little actually changes. This clearly isn’t the policy outcome that voters want: recent polling for The Samaritans shows that three quarters of UK adults want tech companies to be legally required to prevent harmful suicide and self-harm content being shown to them.
There is an overwhelming case for the legislation to require companies to follow a clear set of minimum standards, in the form of a code of practice, that at a minimum captures the categories of harmful content otherwise covered by the Bill’s user empowerment duties.
That approach would enable a much better balance that protects children and vulnerable adults, while responding to legitimate concerns about the potential for negative effects on free expression.
Risk assessment duties
Perhaps the most surprising part of the Government’s decision to scale back the provisions on legal but harmful content was the dropping of the requirements on category 1 platforms to perform risk assessments relating to harmful content for adults.
This decision substantially undermines the systemic nature of the regime, and it means that companies are expected to address harmful content in a markedly different way from how they discharge their illegal content and child safety duties.
What’s abundantly clear from viewing the types of content that Molly saw is that much of it is slick and professionally produced, and that some of those posting it seemingly possess a sophisticated understanding of how material can be produced and edited to ensure it isn’t taken down or algorithmically downranked.
There are obvious parallels here to CSA offending – where content is posted with an often well-developed understanding of content moderation rules. Much of the self-harm and suicide material that doesn’t breach Meta’s community standards features tell-tale content markers, which for obvious reasons I won’t set out here, but which in the eyes of companies may tip content from being considered prohibited to legitimate.
It’s highly plausible that such information is being shared between malign actors in order to game content moderation rules and subvert the effectiveness of moderation processes.
Although the Bill proposes to make it a criminal offence to promote or glorify suicide and self-harm, which in turn means companies will have to risk assess and use proportionate systems and processes to remove what becomes illegal content, in the first instance it will clearly be for companies to determine whether content meets the threshold at which it can be considered illegal.
New provisions introduced at Commons Report Stage set a relatively high bar at which companies must make this distinction.
It’s in this situation that the lack of a risk assessment duty for harmful content kicks in. A company could be presented with material posted for the purposes of facilitating and encouraging suicide and self-harm but, because of the way it applies and interprets its operationalised harmful content policies, conclude that the material is somehow appropriate, should remain on the platform and, by extension, should be treated as legal content.
As always, the devil is in the (operationalised) detail.
It becomes clear that having two divergent approaches to addressing the same type of content risks becoming increasingly fraught: a regime that is at once open to being gamed by malign actors, and open to loose interpretation by less scrupulous platforms – platforms that may worry more about the legal implications of removing content and the reputational implications of being seen to host large amounts of illegal material than about the costs of failing to strictly interpret and then follow their regulatory requirements.
User empowerment duties
When the Government announced its decision to remove the adult safety duties, it instead looked to bolster the Bill’s user empowerment functions – account settings that allow users to choose not to be presented with certain types of harmful content.
The argument goes that this can achieve broadly similar policy objectives to the adult safety duties, with companies still being expected to identify harmful content but with it ultimately being left to the user, not the platform, to choose what they see in their feed.
Although this approach has some merit, the reality is that it shifts the onus away from tech companies and onto adults, some of whom may be vulnerable. This will sit uncomfortably with many victims and rights groups.
But perhaps more significantly, it also raises serious questions about the efficacy of these proposals: can we realistically expect vulnerable adults to opt out of content that may be causing them harm? Is it realistic that substantive numbers of vulnerable users will use these account settings, when consumers generally demonstrate high levels of inertia when presented with opt-in settings?
Can we necessarily assume every vulnerable person understands what an algorithm is and how it works, rather than thinking that what they see in their feed is indistinguishable from the content presented to everyone else?
We should also not forget that there is a commercial incentive for companies to make it as hard as possible for users to exercise these functions. As we’ve seen from the use of ‘dark patterns’ and other user experience design features, companies may choose to make the user empowerment option as difficult to find and inaccessible to use as they can.
It’s highly likely that peers will look to amend these proposals, pushing for protection by default and for assurances that companies do not use design features or commercial incentives to encourage users to change their settings in ways that might be harmful to them.
These are inherently sensible proposals that peers should support, and that the Government should be prepared to accept.
Where do these proposals leave children?
Although the Government has been clear that it does not intend to change or water down the Bill’s protections for children, the reality is that the adult and child safety duties were previously inextricably linked. Removing one substantively weakens the other.
Specifically, it was the adult safety duties that required platforms to roll up their sleeves and address the problems of harmful content on their sites, whereas the child safety duties are geared fundamentally towards age assurance measures – identifying whether a user is a child so that they can be prevented from viewing harmful content.
The removal of the adult safety duty therefore removes a first tier of protection for children, and it means that the Bill’s heavy lifting must now be performed by age assurance measures alone.
It’s difficult to conclude this won’t leave more children exposed to harmful content than would otherwise have been the case. During Lords passage, the Government should be prepared to explain how it intends to bridge this gap.
For example, will there be a set of minimum standards covering age assurance measures, and will these be enforceable?
Will companies now be expected to achieve a higher overall level of age assurance to compensate, recognising that this presents potential trade-offs and may raise questions about which combination of technical solutions may need to be deployed?
Is there a risk that this might result in differential protections for some groups over others, for example given that some forms of age assurance technology produce more accurate results for children with white skin tones than for children from Black and Asian backgrounds?
Today’s children are tomorrow’s vulnerable adults
Finally, we should recognise that vulnerability doesn’t simply stop when a young person turns 18.
There is a real risk that some vulnerable children could face a cliff edge moment when they reach adulthood, particularly if the Government doesn’t accept the argument that harmful content should at the very least be screened out by default for all users.
We also need to view this issue through a generational lens. Children and young people growing up today are experiencing the very worst exposure to harmful and dangerous content, and as they move into adulthood will carry with them the vulnerability and mental health impacts of being exposed to this type of content throughout their childhoods.
This is a generation who has been poorly served by tech companies but also by the time taken for legislation to be developed and to come into force. Having left them exposed for too long as children, we shouldn’t repeat these failures as they grow into adults.
It’s time to ensure that it’s vulnerability, not simply age, that affords today’s young people and tomorrow’s adults the systemic and risk-based protections they deserve. The current proposals risk leaving young adults to pay the price, for a second time, of a decade of legislative and commercial inertia.

