Fit for the future?
Amid rapid technological change, the triangulated approach to the Online Safety Bill means it might not be fully future-proofed, nor adequately able to tackle high-risk design choices.
In recent weeks we’ve seen the first parliamentary rebellion on the Online Safety Bill.
With the legislation heading into the Lords, this won’t be the only time the Government risks a bloody nose. After months in which ministers stubbornly refused to strengthen the Bill, we appear to have reached a tipping point, with growing opposition to an approach that is unnecessarily triangulated on difficult issues, and to ministers’ marked reluctance to accept sensible amendments.
Overlay this legislative strategy onto a Bill six years in the making and we increasingly have a problem. In that time, entirely new technologies have sprung up, bringing with them challenging new threat vectors.
Although the Bill was initially introduced to tackle record levels of online child sexual abuse, there are growing signs that, without further amendment, the legislation could fall short of its objective to disrupt online CSA.
Here are four high-risk design choices that could propel the child abuse threat in the years ahead – and where the Government must act to deliver a stronger, more future-proofed Bill that’s demonstrably capable of responding to the intensifying and increasingly complex CSA threat.
Private messaging
Few parts of the Online Safety Bill have been more controversial than its provisions relating to private messaging.
Private messaging is at the forefront of the child sexual abuse threat, whether this is the production and distribution of child abuse images or grooming. But it is also where the greatest privacy sensitivities lie.
The risks of private messaging are increasingly clear: according to the ONS, 11% of children aged 13 to 15 had received a sexual message in the previous 12 months, rising to 16% among girls. More than four in five of those who’d received sexual images or videos had done so through direct messages.
And with online grooming offences reaching record levels, increasing by almost 80% in the last four years, private messaging is an integral part of well-established grooming pathways.
While we shouldn’t underestimate the importance of private messaging being in scope - at the start of this process, that was by no means something that could be taken as read - the reality is that the Government has adopted an overly cautious approach to tackling the risks it presents.
And these Government policy choices now risk delays in tackling the problem, and could result in potentially significant unintended outcomes.
In response to concerns expressed by the privacy lobby, the Government introduced amendments at Commons Report Stage that significantly constrain Ofcom’s ability to tackle grooming in private messages and groups.
Specifically, these amendments introduce restrictions on Ofcom’s ability to require companies to use proactive technology to identify or disrupt child abuse in private messages. In practice, they would likely prevent Ofcom from including in its codes of practice widely used tools such as PhotoDNA ‘hash’ matching to detect known child abuse images, or AI classifiers used to detect self-generated images and grooming.
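For readers unfamiliar with how this works in practice, here is a minimal sketch of the hash-matching workflow, in Python. To be clear, PhotoDNA itself is a proprietary perceptual hash designed to survive resizing and re-compression; the stand-in below uses an ordinary cryptographic digest, and the hash list and function names are invented purely for illustration.

```python
# Minimal, illustrative sketch of hash matching against known abuse imagery.
# PhotoDNA is a proprietary *perceptual* hash; this stand-in uses an exact
# SHA-256 digest purely to show the shape of the workflow.
import hashlib

# Hypothetical hash list - in practice supplied by bodies such as the IWF or NCMEC.
KNOWN_ABUSE_HASHES = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

def file_digest(path: str) -> str:
    """Compute a hex digest of an uploaded file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def should_block(path: str) -> bool:
    """Return True if the uploaded file matches an entry on the known-material list."""
    return file_digest(path) in KNOWN_ABUSE_HASHES
```

The point is not the implementation detail but the simplicity of the workflow: a lookup against a list of known material, performed at the point of upload - exactly the kind of measure the regulator would now struggle to mandate up front.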
That leaves us with a highly perverse scenario in which the regulator won’t be able to require upfront that companies use industry-standard techniques, already in use for many years, to detect and remove child abuse material.
Instead, Ofcom will be left to rely on powers to compel tech companies to use proactive technology only after harm has already taken place, or once it can be reasonably concluded that platforms have failed to do enough to tackle the problem, proportionate to their risk profile and available resources.
In the absence of clear upstream requirements, set out in codes, there is also a perverse incentive for companies to delay the rollout of proactive scanning technologies, or even discontinue them altogether, preferring to receive CSA Warning Notices from the regulator that provide them with explicit legal instruction (or, to put it more crudely, cover in the face of reputational risk). Meta and Apple, I’m looking at you.
This could have significant short to medium-term implications: when Meta stopped proactive child abuse scanning under analogous circumstances in the European Union, child abuse reports dropped by 76% year on year.
Unless the Government shifts from its overly cautious approach, the result could be codes of practice that are wholly insufficient to respond to the nature and extent of the child abuse threat. Remember, draft versions of these codes are likely to be issued later this year and, without legislative tightening, may well disappoint.
If Ofcom does subsequently move to issue CSA Warning Notices, these are unlikely to come immediately after regulation takes effect.
As a result of the Government’s policy choices, we could therefore be looking at online CSA continuing unchecked in the private messaging of some of the largest and most problematic platforms, with action being delayed until further widespread harm has taken place - potentially until at least 2026.
End-to-end encryption
As I’ve set out many times before, end-to-end encryption is neither intrinsically good nor bad, but in the case of integrated social network and messaging functions, can significantly increase the risk profile for child abuse and other forms of illegal content.
In what’s become an increasingly familiar set of arguments, privacy activists are reflexively dismissing emerging technical solutions that could help to unlock the benefits of end-to-end encryption while simultaneously mitigating many of the risks around child safety.
In a concerted attempt to maintain the canard that user safety and privacy are a fixed, unresolvable binary, this has already spilled over into briefings to supportive MPs that are increasingly removed from what is now technically possible.
For example, during Commons Report Stage the tech libertarian MP Adam Afriyie warned that the Online Safety Bill risked introducing legislation that effectively ‘bans mathematics.’ Making a memorable comparison to King Canute, he suggested the Bill was legislation aimed at stopping the tide from coming in.
While I don’t doubt the legitimate and sincerely held concerns that he and other parliamentarians may hold, the privacy groups providing such briefings could do with checking in on what technology now allows.
While the online safety legislation right now sets out a largely proportionate and intelligently designed approach to the risks of end-to-end encryption, there can be little doubt that there will be a concerted push in the Lords to undermine these provisions - and in doing so, to gut the effectiveness of the Bill overall.
To put it simply, you can’t legislate for online safety while ignoring the surfaces on which online child sexual abuse can flourish. Around two thirds of the 18.4 million child sexual abuse reports made by Meta in 2019 related to content on private messaging.
The legislative debate over encryption foreshadows a clash between the absolutist rhetoric that Meta can deliver a pain-free rollout of its E2E plans later this year, without having effective child safety mitigations in place, and the crushing reality that its ideologically driven approach is likely to result in a child protection crisis, with online safety referrals falling off a cliff.
I should also point out that the very technologies Meta has categorically ruled out using to detect child abuse, on increasingly ideological grounds, are already deployed for other use cases. For example, WhatsApp already uses client-side scanning to detect malware (despite, almost unbelievably, WhatsApp boss Will Cathcart threatening to pull the product from the UK if it were also used to detect child abuse). Meanwhile, Meta is investing significantly in another potential technical solution, homomorphic encryption, to enable it to continue user profiling for commercial and targeted advertising purposes once end-to-end encryption is turned on.
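For readers who haven’t come across the term, homomorphic encryption allows certain computations to be carried out on encrypted data without ever decrypting it - which is precisely what makes it attractive for profiling users behind end-to-end encryption. The toy Python example below, using deliberately insecure textbook RSA and made-up numbers, is intended only to illustrate that principle; the schemes Meta is reported to be exploring are far more sophisticated.

```python
# Toy demonstration of a homomorphic property: with (insecure, unpadded)
# textbook RSA, multiplying two ciphertexts yields a valid ciphertext of the
# product of the plaintexts - a computation performed on encrypted data
# without decrypting it. Real schemes (Paillier, BGV, CKKS) are far richer;
# all numbers here are tiny and purely illustrative.

p, q = 61, 53                     # small demo primes
n = p * q                         # public modulus
phi = (p - 1) * (q - 1)
e = 17                            # public exponent, coprime with phi
d = pow(e, -1, phi)               # private exponent (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 6, 7
c_product = (encrypt(a) * encrypt(b)) % n    # multiply the ciphertexts only
assert decrypt(c_product) == (a * b) % n     # decrypts to a * b, i.e. 42
print(decrypt(c_product))                    # -> 42
```

The same mathematical property that lets a company learn something useful from data it cannot read is what makes approaches like this worth taking seriously as potential child safety mitigations too.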
Readers in the UK will have already heard a number of absolutist arguments in recent years that everything will be fine, only for the reality to turn out somewhat differently.
Parliamentarians inclined to believe there is an easy or consequence-free answer might want to reflect on whether they really want to give Meta the benefit of the doubt here, or would rather seize the opportunity to deliver legislation that incentivises technically possible solutions that promote both the privacy and the safety of all users, including vulnerable children.
Parliamentary actions today might look very different in the context of the reality being delivered by Meta in just a few months’ time.
Interoperability
Interoperability is an increasing focus for regulators and the tech sector. European regulators have pushed technical and usage interoperability as a way to promote consumer choice and boost competition in messaging services, while there is an emerging consensus that the push towards immersive products and services must be underpinned by fully interoperable standards.
Let’s take each of these in turn. There’s little doubt that the provisions in the Digital Markets Act to require large messaging services to become fully interoperable – that’s to say the ability to message from platform a to platform b – will deliver much needed competition and give consumers greater choice.
But while there’s been a battle raging among privacy activists about the implications for privacy and end-to-end encryption (stop me if you’ve heard this one before), there’s been precious little said about the potentially significant unintended consequences for user safety.
There are pressing questions about what mandated interoperability might mean for safety features, including the ability to detect and disrupt child abuse. Without appropriate safety mitigations in place, this could provide new opportunities for abusers to contact children across multiple platforms; significantly increase the overall profile of cross-platform risks; and actively frustrate a number of current online safety responses.
Most pressingly, this includes the ability to use metadata to detect suspicious patterns of grooming.
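To make that concrete, a metadata-only signal can be as simple as counting how often an account cold-contacts children with whom it shares no connections - no message content is read at all. The Python sketch below is hypothetical: the field names and threshold are invented for illustration and aren’t drawn from any real platform’s detection systems.

```python
# Hypothetical sketch of a metadata-only grooming signal: flag accounts that
# repeatedly initiate first contact with minors with whom they share no
# mutual connections. Field names and the threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class ContactEvent:
    sender_is_adult: bool
    recipient_is_minor: bool
    prior_mutual_contacts: int    # connections the two accounts share
    first_contact: bool           # no previous conversation history

def looks_suspicious(events: list[ContactEvent]) -> bool:
    """Flag for human review when cold approaches to minors pass a threshold."""
    cold_approaches = sum(
        1 for e in events
        if e.sender_is_adult
        and e.recipient_is_minor
        and e.first_contact
        and e.prior_mutual_contacts == 0
    )
    return cold_approaches >= 5   # illustrative threshold
```

Signals like this depend on one provider seeing the whole pattern of contact. Split the approach across interoperable services and no single platform may see enough of it to raise a flag.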
Without clear regulatory guardrails in place, it becomes increasingly plausible for companies to claim that it would be a disproportionate burden first to have to introduce fully interoperable services, and then to have to significantly re-work their current threat detection capabilities so that they continue to work across them.
Something has to give, and that’s likely to be the efficacy of CSA detection.
Now let’s consider the metaverse. Interoperability is intended to be a cornerstone of the development of the forthcoming range of immersive VR, MR and AR spaces. A number of high-risk immersive products are already designed to be platform agnostic, with in-platform communication able to take place between users across multiple products and environments.
It seems increasingly inevitable that the metaverse will be rolled out along such lines, with work already underway to agree a corresponding set of technical and governance standards. There is an obvious commercial incentive for companies to invest in an interoperable model, with eye-watering estimates of the potential commercial return; and for the largest players, an additional incentive that this might blunt, or at the very least complicate, future legislative or regulatory efforts on user safety and antitrust.
But what does an interoperable metaverse mean for child safety?
In discussions with tech companies and regulators, there seems to be no agreed position on who owns the risk in interoperable immersive spaces – where does accountability for platform a stop, and platform b begin?
So far, there is limited transparency about the approach that is being taken, even though work to develop these products is already well underway. If there is no clear risk owner, and an incentive for this to remain the case, how do we prevent a race to the bottom simply because no one is immediately accountable?
If there’s no clear risk owner, surely the risks won’t be effectively mitigated at all.
In debates on the Online Safety Bill, there’s been a tendency to say that these are problems that don’t need to be resolved today. Worse still, we’ve had blanket assurances that the metaverse will be in scope – a position that, whilst welcome, tends to suggest a reductionist assumption that this means ‘problem solved’ (or even that the technical parameters of the metaverse haven’t been properly understood).
Yet the suggestion that this is a problem that can be sorted later is already running up against reality: Apple is expected to launch its own immersive product later this year, and Roblox is expected to launch on Oculus Quest within months.
An interoperable metaverse is coming down the track, quickly.
It’s always been frustrating that the Government has resisted expert calls for a duty to co-operate – clear requirements on tech companies to co-operate effectively in response to the inherently cross-platform nature of the threat.
Similarly, the Government has consistently refused calls for companies to consider how their products might contribute to cross-platform risks as part of their risk assessment duties, and to demonstrate reasonable efforts to reduce them when discharging their illegal content safety duties.
This already looked like a clear mistake, a decision that failed to grasp the dynamics of online sexual abuse and that would in turn place an upper limit on the effectiveness of the regulatory regime.
But to continue to ignore the importance of cross-platform provisions, despite the obvious implications of the technology coming down the track, would amount to a problematic and entirely avoidable failure to future-proof the legislation. It’s time for the Government to change tack.
Generative AI
It seems impossible to think about new and emerging technologies in 2023 and not discuss the potential impacts of generative AI.
The fact it now seems necessary to discuss technology that many of us hadn’t conceived of a year ago not only underscores the rapid pace of continued technological change, but also the importance of ensuring that legislative and regulatory responses can be future-proofed as much as possible.
In outline terms, it appears that much of the generative AI coming to market in the medium term would fall outside the scope of the Online Safety Bill (although it’s possible that some search-based applications may be covered).
Nonetheless it’s clear that the impact of generative AI is likely to be considerable. Generative AI is already having significant impacts on the market and on trust and safety, with Google having announced it is prepared to ‘recalibrate’ the level of risk it is willing to take when releasing future AI technology. While it certainly isn’t the first time we’ve seen companies rush out products or adjust safety tolerances in the face of competitive pressure, it is difficult to disagree with Prof Noah Giansiracusa that ‘this is the kind of market-driven arms race we do not need when it comes to tech that could drastically reshape society, for better or worse.’
At a minimum, the push towards generative AI will inevitably add to the ‘imaginative obsolescence’ described by Elizabeth Renieris: the push by tech companies to focus on new and emerging tech developments, failing to heed the lessons of the current generation of tech products while shifting their attention and resources away from fixing what quickly come to be viewed as yesterday’s problems. (Problems that are no less acute just because of a strategic shift towards the shiny and new.)
While the risks of generative AI may seem far away, make no mistake: potential harms will emerge quickly. Any loosening of risk tolerances increases the likelihood of products being rolled out with considerable potential to cause societal harm.
As it stands, Google is reported to have lagged OpenAI’s self-reported metrics on hateful, toxic, sexual and violent content. In a presentation leaked to the New York Times, the shift in Google’s risk tolerances translates into an attempt to curb issues relating to hate, toxicity, danger and misinformation, rather than an outright objective to prevent them. The Washington Post reports the company has also proposed a ‘green lane’ to shorten the process of assessing and mitigating potential harms.
And over in Menlo Park, the view within some parts of Meta appears to be ‘move even faster’ (and presumably break more things). In comments made earlier this month, Meta’s chief AI scientist Yann LeCun blamed the tepid reception of its recent BlenderBot product on the company paying too much attention to user safety. He described BlenderBot as ‘boring [..] because it was made safe’, and went on to claim that Meta had been ‘overly careful about content moderation’ to the detriment of technological advancements.
The tech libertarian worldview that drives so much of Silicon Valley is, perhaps unsurprisingly, on full display among those at the forefront of developing generative AI, as shown in comments made by OpenAI CEO Sam Altman on AI bias: he envisages a future in which ‘people should be allowed very different things that they want their AI to do. If you want the super, never offend, safe for work model, you should get that, and if you want an edgier one that is creative and exploratory but says some stuff you might not be comfortable with, or some people might not be comfortable with, you should get that.’
In comments made to The Verge, Mr Altman went on: ‘really what I think is that you as a user should be able to write up a few pages of ‘here’s what I want: here are my values; here’s how I want the AI to behave’ and it reads it and thinks about it and acts exactly how you want because it should be your AI.’
Beyond the novelty of ChatGPT writing politicians’ speeches, there is a clear risk that the unregulated growth of generative AI could fuel increasingly serious and illegal harms, including child sexual abuse.
ChatGPT and deepfake audio present opportunities for grooming and manipulation; generative search functions may provide new and readily accessible routes for sophisticated offenders to discover and access child abuse material; and the rapid development of text-to-image tools such as DALL-E 2 and Stable Diffusion, which create works of their own by drawing on patterns identified in existing content, points to the rapid growth of visual generative AI – and to potential new routes to produce child abuse and pseudo child abuse images.
In the UK and internationally, legislators don’t have the luxury of solving one problem at a time. If new technologies are emerging that could become major vectors for producing or discovering illegal content, but which fall outside the illegal content safety duties as currently defined, we need to start discussing what the fix may be.
Perhaps it is too early to rework the scope of this Online Safety Bill - but if that’s the case then neither can the alternative be indefinite delay. After a legislative process that’s already taken six years, it’s probably not a popular notion to suggest that an Online Safety Bill 2 might be needed in relatively short order.
But perhaps that’s exactly where the debate might be headed.

