For the past few years, the AI industry and executives like OpenAI CEO Sam Altman have very cleverly shepherded the conversation about generative AI to work in their own interests. In their telling, generative AI is a transformative technology that is improving many different aspects of our society, but particularly access to healthcare and education.
Yet they knew there would be critics, so they sought to head off the problems those critics would identify from the jump. Instead of taking real criticism seriously, the industry recognized it needed to set its own narrative for the potential drawbacks of AI. That’s precisely why, as Altman was hyping up the many amazing outcomes that his AI revolution would bring, he presented a warning: AI could also pose an existential threat to humanity.
Notably, it was not a present-day threat. AI in the present was all sunshine and rainbows with no reason not to charge full speed ahead with adoption. But there was a scenario where AI companies developed an artificial general intelligence (AGI) or AI superintelligence — essentially, a scenario where the computer exceeds the capabilities of the human mind and can exert power over us. You might think of it as the Matrix scenario: the machines take over and we’re all subject to their will.
This science fiction scenario is not at all realistic. There are very real questions as to whether a computer can ever replicate human thought and cognition in the way tech CEOs want to believe — in part because they hope to one day merge their minds with machines to live forever. Presenting that scenario was a calculated move: there were already a bunch of people in the industry primed to believe it and it sidelined real concerns about what generative AI was doing in the here and now.
This past week, I was struck by two stories that contrasted the real threat with the fabricated one, and particularly how that fabricated threat enables the real social harms to be perpetuated.
On the one hand, another statement against AI superintelligence was signed by a bunch of people who like to believe they’re very intelligent but have been taken in by some very effective grifters, if they’re not just bad actors themselves. The signatories included a varied cast of idiots, including “godfather of AI” Geoffrey Hinton, outcast royal Prince Harry, Virgin billionaire Richard Branson, far-right agitator Steve Bannon, and right-wing media figure Glenn Beck.

The statement, quite simply, calls for a ban on the development of AI superintelligence until there is a scientific consensus it can be done safely and the public has been brought on board. I’m sure some people signed on because an all-powerful machine seems like an important thing to avoid, without considering how they’re helping to justify the fantasies of some tech enthusiasts. While the true believers play at their game of appearing serious, the real harms of generative AI continue to grow.
At the same time as the statement was getting a load of press attention, a deepfake video was in the process of rocking the Irish presidential election. Irish voters go to the polls on October 24 to elect a new president for a seven-year term, and polling shows left-wing candidate Catherine Connolly is the clear frontrunner. Even though the presidency is more of a ceremonial role, there are clearly right-wing forces not happy that the country is poised to have another left-wing president, after fourteen years of Michael Higgins holding the post. It’s still not clear who is behind the deepfake video and what their motivations might be.
The deepfake video was framed as a news report from the national broadcaster RTÉ and showed Connolly withdrawing from the race, followed by reporters commenting on what appeared to be a shock announcement just days before the vote was set to take place. It’s hard not to see it as an attempt to suppress the vote for Connolly by making the public believe the presidential election was already over, with no votes needing to be cast. Social media companies belatedly acted to take down versions of it on Meta platforms and YouTube, but not before it had spread far and wide and been seen by many thousands of voters.
It’s impossible to say what effect the video will have at this stage. There will surely be research into it in the months ahead, and it will be interesting to see what it shows. But for someone like Hinton, who pushes the notion that the existential risk is the main thing we should be focused on, to the extent of dismissing the more grounded concerns of his colleagues, issues like the proliferation of deepfake videos are distractions from the real threat.
The deepfake video of Connolly is the kind of thing we should actually be focused on when it comes to AI, but it’s not what people like Altman want us to see or what figures like Hinton encourage us to think about. It’s those present-day harms that show us the true nature of how generative AI interacts with our society and the type of social and political outcomes it enables. Generative AI is polluting the information environment such that many people can no longer tell whether they’re looking at AI slop or something real, leaving them open to manipulation by bad actors. And when they turn to generative AI for news, the information they receive misrepresents the story 45% of the time, according to the largest study so far on the subject.
The generative AI wave has proven that companies like Google and Meta have never had much regard for the society they endlessly exploit to generate more profit. They’ve degraded their users’ ability to get accurate information, whether through the introduction of AI Overviews or by placing less emphasis on spreading actual news content. Instead, they’ve been more focused on foisting AI tools on the public, regardless of their reliability, and even flooding their platforms with AI slop and chatbots meant to boost engagement, meaning more time on the platform, more eyeballs on ads, and more advertising profits at the end of the day.

The problems with generative AI are endless. The environmental costs of the technology have been well litigated these past couple of years, as the data centers that power it demand vast quantities of water and obscene amounts of electricity that create pressure to build out even more fossil fuel power generation at a time when we should be doing the very opposite. But that’s just the tip of the iceberg.
Long before Connolly was targeted by a political deepfake, a far wider swath of people — particularly women and girls — were the victims of nudify apps and explicit deepfakes made possible by the image generators powered by generative AI models. More recently, a wave of stories has been published about the mental health risks that can come with forming a dependence on chatbots, covering everything from breakdowns and institutionalization to the worst possible outcome of young people taking their own lives, sometimes even with coaching from the chatbot on how to do it.
Governments are belatedly waking up to the harms of social media, particularly as the companies prioritize profits and shareholder value above any other possible metric, with no care for the individual harm their products can cause or the political and societal disruptions they contribute to. Political leaders’ policy responses are open to criticism, such as why so many are focusing on age limits rather than much wider regulation that recognizes it’s not just teenagers being harmed by how companies govern their platforms. But it’s quite clear action must be taken to rein in these sources of social disruption.

Social media regulation took far too long to arrive, and even then, it came in an imperfect form. But governments don’t appear ready to grapple with the reality that chatbots and image and video generators are speedrunning the harms caused by social media. The deceptive framing of superintelligence as the real danger has sent governments chasing that red herring, even as they try to present themselves as friendly to tech investment in hopes of attracting a small slice of the trillions of dollars being shelled out on generative AI and data centers. In short, they’re sacrificing the wellbeing of their citizens, and arguably the foundations of a democratic society, for a chance at short-term investment.
Generative AI is nothing more than a form of social suicide that must be reined in before it’s too late. It cannot be allowed to reach the level of proliferation that social media has achieved, not only because it may then be impossible to properly roll it back, but also because the social harms from this stage of the digital revolution (if we still want to call it that) will be amplified to an unimaginable degree.
How many people need to be disconnected from reality, siphoned into dependence on chatbots, and put at risk of losing their minds before governments take action against these agents of chaos? Time is running out to wake up to the real threats posed by generative AI and to recognize that no hyperscale data center or OpenAI office is worth the cost to the public of allowing this technology to gain a foothold in our societies. We need to throw off not just US tech, but the entire model of digital technology that Silicon Valley has pushed on the world.