It’s hard to be shocked by anything Elon Musk does these days, and even less so by how easily he gets away with it. But even with that said, the saga of the past month has been something to behold.
At the end of December, X rolled back the guardrails on Grok, making it easy for users to request that the chatbot take images posted to the platform and generate new versions in which the people in them — often women, but also children — were scantily clad or wearing no clothes at all. Over a period of just 11 days between late December and early January, the chatbot is estimated to have created about 3 million sexualized images of men, women, and children. Even once the backlash escalated, the company did not fully reverse course.
Musk reportedly requested the change directly, and was happy to see the boost in engagement it generated from his fellow perverts and misogynists. Instead of backing down, the company upsold users, placing part of the ability to undress people without their consent behind a paywall. X later claimed it had fully restricted the feature, but when journalists checked, they found it was still quite easy to get the Grok chatbot to generate images of users in bikinis. Beyond X, Grok could be used to generate even more violent and graphic sexual images.
The brazen and shameless actions of Musk and his platform forced governments to respond. Indonesia and Malaysia were most forceful, with both temporarily blocking the chatbot and the latter moving forward with legal action against X and xAI. The United Kingdom threatened an X ban, but ultimately allowed the harm to continue as it waited for investigations and promised a ban on non-consensual deepfakes that would not extend to Grok.
The European Union, various European states, and Australia launched investigations of their own, as did Canada’s privacy commissioner after the country’s AI minister shamefully declared the government was not planning to ban X, following reports of conversations between the UK, Canada, and Australia on the subject. In short, the countries were willing to make strong statements while allowing Musk to continue enabling the victimization of their citizens. The entire debacle should serve as a wake-up call not just on the nature of X, but for a new approach to social media regulation more broadly.
Socially harmful media
Over the past year, there has been ample discussion of banning under-16s from social media, following Australia’s example, as a means to protect young people from its drawbacks. It’s not a policy I’m fundamentally opposed to, though I think it’s been poorly messaged as a ban rather than a slight raising of the existing age limit of 13 paired with stricter enforcement. Even with that said, X’s deepfake crisis should be the scandal that forces regulators and governments to wake up to the broader problems with social media — issues that extend far beyond those under 16 years of age.
Social media has become a corrosive force on society, in large part because companies driven to maximize profits and engagement have shaped dominant platforms to serve their interests regardless of the effect on their users or the broader society. The mental health consequences of what’s shared on social media, and of how the platforms are designed to keep people hooked on an algorithmic loop of content, affect adults and seniors just as much as young people, if not more in some cases. Platforms have also destroyed the information environment, pushing unreliable and often extreme content at people to keep them engaged, driving them toward conspiracy and right-wing extremism. That has only been amplified by the proliferation of AI-generated content.

If platforms did ever care about the social and individual consequences of their products, that time has long passed. Musk’s decision to roll back the safeguards on Grok to boost engagement is one example of that, while Meta CEO Mark Zuckerberg has constrained trustworthy news content while pushing AI-generated slop to keep people on the platform and drive advertising profits. Allowing these platforms to continue operating unhindered is actively tearing societies apart, driving a wedge into the democratic consensus and making a segment of the population lose their minds, if we’re being frank about it.
There’s a debate worth having about whether these platforms can be saved at all — and whether the effort should be made. I’m skeptical, but I also acknowledge that fully withdrawing all dominant, US-based social media platforms (and TikTok) from international markets overnight would likely be a shock too great for many users to bear, especially with the extreme right poised to opportunistically take advantage of any such decision. Instead, it feels like there is a three-pronged approach that must be taken to wean society off these platforms before it’s too late.
Remaking social platforms
The first step in taking on social media companies is to develop much more comprehensive rules that platforms must conform to. Ideally, that process would be done collaboratively, with likeminded countries cooperating on a set of enforceable standards that apply across markets like Europe, Canada, Australia, Japan, Brazil, and beyond, making it harder for tech companies to wield their leverage against an individual country going it alone. Those rules should be comprehensive, looking at all aspects of platform design and operation. They could include reining in dark patterns that manipulate users, limiting if not outlawing aspects of algorithmic amplification of content, implementing high standards of moderation with larger penalties for breaches, and establishing enforcement mechanisms that allow swift action when rules are violated — leaving the door open to blocking platforms that do not comply, as Brazil did to X in 2024.
The goal of these standards would be to limit the harmful effects of social media platforms, but that will naturally force them to change how they look and operate to conform with those restrictions. Social media companies have long argued that they are platforms, not publishers, to try to avoid accountability and their obligations to their users’ wellbeing. But when they develop complex algorithmic systems that elevate particular content in front of the eyes of their users — including, but not limited to, news content — and even incentivize the creation of certain kinds of content, that is clearly not the case. The standards they abide by must be raised, regardless of the protests that will come from tech billionaires and their allies in the US government.

Reining them in is just the first step. Paired with that must be a serious, well-financed effort to develop alternative means of digital communication that rethink the social media model from the ground up, built on entirely different foundations than the private, profit-hungry platforms that now shape online discourse. This should be approached from many different angles, with the recognition that some of the projects may fail; that’s acceptable, because rapid experimentation is necessary.
Any such plan must include government funding to explore what public interest social media could look like, inspired by the model of public broadcasting, and could even include public broadcasters internationally working in concert to develop a system they collectively govern. It could also include greater funding for non-profit, open-source experiments to ensure they have the support and resources necessary to scale if that becomes necessary. Surely, some private companies will be interested in competing in that space too, which will open up if the existing juggernauts begin to be constrained.

Those are the government approaches: to address existing problems while putting resources into an alternative. But there is also a role for users. They can advocate with their governments to take these actions, like developing stricter social media regulation and even banning platforms like X that operate so far beyond any acceptable norm. In the meantime, they can also change their habits by dropping US tech services where possible, considering leaving platforms like X completely, pressuring political leaders to do the same, and being open to trying alternatives and encouraging their friends and family to do the same.
Taking power from billionaires
While I may not think a total ban of US-based social media platforms (and TikTok) makes sense in the short term, that doesn’t mean I feel that way about X. We need to remember that the influence of X has long been exaggerated because it was where politicians and the media hung out, and that position has been further eroded since Musk’s takeover. Given that it is actively enabling the creation of non-consensual deepfakes and child sexual abuse material, not to mention its broader amplification of right-wing narratives, the only real option in my mind is for people to leave, if not for governments to ban it, so that its influence erodes even further. Replacing X is much less of an ordeal than finding an alternative to Facebook or Instagram.
But, at its core, this is not just an X problem. The scandal around Musk’s platform places a spotlight on the broader ways major social media platforms are out of step with healthy discourse and communication in modern societies — democratic or otherwise. These platforms, and the decisions of the billionaires who control them, are amplifying existing tensions and accelerating the descent of societies into chaos, making it harder for them to step back from the ledge. This has nothing to do with free speech. It’s about changing the incentives driving how we communicate with one another and create, share, and interact with information — and about reining in the power of tech oligarchs to sow discord for their own benefit.