I recently spent a few days in the United Kingdom, which, if you’re to believe some online influencers, has become the new home of a widespread censorship regime. In July, the country implemented new rules under the Online Safety Act it passed in 2023 to restrict minors from accessing certain harmful, abusive, and explicit content, requiring websites and platforms to use age verification systems to comply with the law. Progressive digital rights activists slammed the country for threatening the internet, further legitimizing the political right’s “free speech” discourse in the process.

There are legitimate concerns about the law, which I’ll get into a little later, but a lot of the attacks leveled at it are politically and economically motivated, and they effectively take advantage of how social platforms reward sensationalism. I was curious to see how the system worked in practice, and whether it merited the scale of the response it received. Reader, I remain skeptical.

I didn’t run into the age gate on my phone — I assume because I have a Canadian SIM card — but when I logged into Bluesky on my computer, I was hit with a little window informing me that certain features like adult content and direct messaging would be off-limits until I verified my age. The DMs are included because they can serve as a more concealed avenue for abusive comments and explicit material.

I could have easily just activated my VPN to get around it — as many people in the UK have done in recent weeks — but I wanted to see how these systems worked, so I went ahead and verified myself. The system Bluesky is using gave me two options: use a service called Yoti to scan my face, or provide a credit card that would be authorized via Stripe. Notably, handing over a photo of my ID — the scary scenario that has been spreading like wildfire online — was not even an option. I chose the face scan, since I figured the credit card check would be more straightforward and I wanted to test the less familiar option.

After being transferred to Yoti, I positioned my face as directed and the verification commenced. It told me it was estimating my age, then that my face scan was being deleted, and finally a screen popped up telling me I was approved. Yoti transferred me back to Bluesky, where I was met with another message letting me know I was effectively free to do what I wanted now that I’d passed the check. All told, the whole process might have taken a minute and a half.
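It’s worth pausing on what that flow implies architecturally: the platform never touches the biometric data. It hands the user off to the verification provider, which runs the age estimate on its own servers and sends back nothing but a pass/fail attestation. Here’s a minimal sketch of that redirect pattern; to be clear, the endpoints, field names, and signature scheme are my own invention for illustration, not Yoti’s or Bluesky’s actual integration.

```typescript
// Hypothetical sketch of a data-minimizing age-check handoff.
// None of these endpoints or field names are Yoti's or Bluesky's
// real API; they only illustrate the general redirect pattern.
import crypto from "node:crypto";

// Pending checks: single-use nonce -> the user session awaiting verification.
const pendingChecks = new Map<string, string>();

// 1. The platform redirects the user to the verifier with a one-time nonce.
function buildVerifierRedirect(userSession: string): string {
  const nonce = crypto.randomBytes(16).toString("hex");
  pendingChecks.set(nonce, userSession);
  return `https://verifier.example/check?nonce=${nonce}`;
}

// 2. The verifier estimates age on its own servers, deletes the scan,
//    and calls back with nothing but a signed pass/fail result.
interface AgeAttestation {
  nonce: string;
  overEighteen: boolean; // the only personal signal the platform ever sees
  signature: string;     // verifier's Ed25519 signature over "nonce:result"
}

// 3. The platform validates the signature and unlocks gated features.
function handleCallback(
  att: AgeAttestation,
  verifierPublicKey: crypto.KeyObject,
): string | null {
  const payload = Buffer.from(`${att.nonce}:${att.overEighteen}`);
  const valid = crypto.verify(
    null, payload, verifierPublicKey, Buffer.from(att.signature, "base64"),
  );
  if (!valid || !pendingChecks.has(att.nonce)) return null;

  const session = pendingChecks.get(att.nonce)!;
  pendingChecks.delete(att.nonce); // nonces are single-use
  return att.overEighteen ? session : null; // session to unlock, or nothing
}
```

The key property under this design is data minimization: even if the platform were breached, there would be no face scan or ID on its servers to leak, only a record that a given session passed a check.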

In explaining this, I’m not trying to dismiss the problems with face scans for age verification. Even the companies behind them admit their estimates can be off by several years, if not more, and I wouldn’t be surprised if they prove less reliable for minorities’ faces too. This is why there need to be multiple options and appeal mechanisms available to people. But ultimately, was I worried or affronted? Not really. I’ve had to verify my identity many times in recent years on platforms like Twitter, Google, LinkedIn, and the other big players — even by providing a copy of my ID at times. I’d imagine many of the influencers running wild with sensational takes on the UK’s new rules have done the same. To me, it’s just a cost of being online.

The nuance behind digital restrictions

The UK’s new rules under the Online Safety Act are part of a wave of recent restrictions being implemented to control what certain groups — particularly minors — can access when they browse the web. There can be different motivations behind those initiatives, which is where I feel some of the misunderstandings come from. Part of it, though, is also fueled by intentional dishonesty and the exaggeration I’ve come to expect from digital rights groups whenever new rules and obligations are introduced for online platforms.

The Online Safety Act is actually a very comprehensive piece of legislation that gives the British government extensive powers over how the internet works in its jurisdiction — not all of which are actively being used. As I mentioned, the newest set of rules targets what minors can see online — specifically things like pornography, extremist content, and the promotion of eating disorders and self-harm. This is similar to, but not the same as, other initiatives rolling out in other parts of the world.

Understandably, a lot of the focus in the digital rights world has been on what is happening in the United States, where a growing number of Republican state governments are rolling out initiatives to limit access to certain online content under the guise of protecting kids. In reality, those initiatives are fueled by a socially conservative impulse to make information about issues and causes they politically disagree with harder to access. That includes, for example, information on same-sex relationships and gender transition, as part of the broader right-wing effort not just to dehumanize trans people, but to try to erase them from public life.


In that context, a certain degree of overreaction to other initiatives is understandable, but we need to resist collapsing the context of these different bills and ascribing political motivations unique to the United States to policies around the world that quite clearly emerge from distinct social and political situations. For example, in my view the new UK rules are overbroad, but they are not motivated by the same socially conservative principles as in the United States, much as the UK political class has been infected by transphobia. The UK’s measures are much more about trying to address the very real consequences we’ve seen from some people’s engagement with the platforms, with a specific focus on young people.

That is even more the case when you look at what is happening in Australia. Once again, the goal is to address the harms that minors have experienced on online platforms, this time by raising the age limit from the self-imposed 13 that most platforms adopted around the world to a mandatory 16, backed by stricter enforcement mechanisms. The policy is a response to a growing movement of parents and families who have seen their kids harmed by algorithmic amplification of content that affects their wellbeing, or by direct interactions those platforms facilitated. In some cases, their children have even taken their own lives.

There is certainly a reactionary element to the campaign, but again, it’s not driven by social conservatism. It’s driven by the obligation to protect young people, in line with the longstanding expectation that society does just that. And that’s another issue with the discourses spreading online: a lot of the digital rights community explicitly argues that such protection should not happen; that minors should effectively be treated as adults and have no limits on what they can access without parental knowledge or permission. When you think about it for a moment, it’s a very extreme position out of line with social norms, but informed by a desire to put an unregulated internet before all other concerns.

Social media harms require action

In the past, I might have been more hesitant about these efforts to ramp up enforcement on social media platforms, and even about putting age gates on the content people can access online. But seeing how tech companies have seemingly thrown off any concern for the consequences of their businesses to cash in on generative AI and appease the Trump administration, and seeing how chatbots are speedrunning the social media harm cycle, many of my reservations have evaporated. Action must be taken, and in a situation like this, the perfect is the enemy of the good.

I don’t support the US measures that are effectively the imposition of socially conservative norms veiled in the language of protecting kids online. But I am much more open to what is happening in other parts of the world, where those motivations are not driving the policy. Personally, I think the Australian approach is closest to one I’d support.

They’re specifically targeting social media platforms, rather than the wider web as is occurring in the UK, and their enforcement mechanism centers on creating accounts. Now that YouTube will be included in the scheme, for instance, users under 16 years of age cannot create accounts on the platform — accounts that would enable collecting data on them and targeting them with algorithmic recommendations — but they can still watch without one. There are still concerns around the use of things like face scanning to determine age, but in my view, it’s time to experiment and adjust as we go along.
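To make that enforcement point concrete, here’s a toy sketch of where the gate sits under a model like Australia’s. The names and the check itself are hypothetical, not any platform’s real code; the point is simply that verification attaches to account creation, while logged-out viewing stays open.

```typescript
// Toy illustration of the Australian model: the age gate sits at
// account creation, not at viewing. Invented names, not real platform code.

const MINIMUM_ACCOUNT_AGE = 16; // Australia's mandatory floor

interface SignupRequest {
  email: string;
  verifiedAge: number | null; // result of an age-assurance check, if any
}

// Accounts (and with them data collection and algorithmic targeting)
// are only available to verified 16+ users...
function canCreateAccount(req: SignupRequest): boolean {
  return req.verifiedAge !== null && req.verifiedAge >= MINIMUM_ACCOUNT_AGE;
}

// ...while logged-out, anonymous viewing requires no check at all.
function canWatchAnonymously(): boolean {
  return true;
}
```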

Even with that said, if I were crafting the policy, I would take a very different approach. It’s not just minors who are harmed by the way social media platforms are designed today — virtually everybody is, to one degree or another. While I support experimenting with age gates, my preferred approach would focus less on age and more on design: severe restrictions on algorithmic targeting and amplification, limits on data collection that make it easier for users to prohibit it altogether, and strict rules on the design of the platforms themselves, since we know they use techniques inspired by gambling to keep people engaged.

To be clear, the Australians and the Brits are looking into those measures too — if not already rolling some out along those lines. These are actions we need to take regardless of the politics behind the platforms, but given how Donald Trump and many of these executives are explicitly trying to use their power to stop the regulation and taxation of US tech companies, now is the time to be even more aggressive, not to cower in the face of pressure and criticism.

How Silicon Valley stops regulation

Watching the debate around social media restrictions has been another important moment in recognizing the deception of many digital rights groups and activists, and how often they serve major US tech companies while pretending to do the opposite. This isn’t new. From Canada, I’ve been watching it develop domestically, where over time the companies have sent out their own representatives to argue against these policies less and less. Instead, you have a series of experts who present themselves as independent voices, but just so happen to say things that sound exactly like tech company talking points.

Honestly, it can be fascinating to see how clever their deceptions are. In some cases, they’ll acknowledge the problem at hand and say we need regulation, but then argue that the actual regulation being proposed always has some fatal flaw. Digital rights groups famously positioned legislation to make companies like Google and Meta pay some of their profits to news companies as “link taxes” that were supposed to destroy the internet. The Australian, Canadian, and European legislative initiatives have done no such thing.


Another pernicious one is to argue regulation is simply unworkable because it’s impossible for smaller companies to properly comply, meaning it would only cement the dominance of the big players — even as those major companies are clearly trying to stop the regulations from moving ahead. We’ve even seen this with the new age restrictions, where some commentators argue that because Peter Thiel’s Founders Fund has a stake in an age verification company, the whole initiative must serve dominant tech firms, as if these terrible venture capitalists don’t have their fingers in corporate pies all over the place. Meanwhile, the actual major tech companies are fighting amongst themselves to ensure they’re not the ones that have to implement age restrictions.

I remember when the Canadian government proposed new rules on streaming services a few years ago to make them invest in and prominently display more Canadian content, in line with longstanding radio and broadcast regulations. Despite the government being clear this was targeted at streamers, the industry seeded a deceptive narrative that it was going after creators and YouTube channels, riling up a bunch of influencers to oppose the legislation. It was pure deception, aimed at defeating regulations the industry did not want to have to deal with.

We’re even getting some reporting that confirms the more obscured influence operations these companies are engaging in. When California proposed a privacy bill that would have affected the Chrome browser, Google reached out to small business owners to oppose the bill on its behalf, without ever taking a public stance on whether it supported the legislation. That made it look like a much more sympathetic group would be harmed if the bill went forward. And it’s not the only time Google has used that tactic.

Reassessing the tech industry

These companies are some of the most powerful in the world. They have a lot of money to throw at trying to make laws they don’t like go away, and they know it sounds a lot better for activists, experts, and other more sympathetic groups to launder their talking points than for the companies to come out and say them themselves. That’s not to say all opponents are bought off by tech companies — but if the companies can seed the narrative and get credible voices to spread it, more people will instinctively echo it.

Time and again, many digital rights arguments have proven to be significantly exaggerated with the aim of defeating regulations on tech companies. The digital rights playbook was created at a time when internet companies were nascent and competing with much more powerful traditional industries. Today, those roles have reversed, but the playbook has largely stayed the same and thus continues to serve some of the most powerful companies in the world. At a time when those companies are flexing their muscles, we need to be more aware of how they use their power.

So, all in all, do I love the age restrictions? Not really. But at this point, I’m open to measures to restrict the power of these companies, even if there are some drawbacks. Social media is a net negative: sure, it allows us to connect, share information, and have some laughs, but it’s also enabling widespread social harm and amplifying increasingly extreme right-wing political positions, and that negates its positive aspects. Hate speech is not free speech, and even then, no one’s rights are impeded if they can’t post as much on a social media platform. Yelling “censorship” at every opportunity only plays into the extreme right’s deceptive framing of free speech.

It’s time to rein in these platforms and all the harm they’ve wrought.