The list of reasons to want OpenAI banished from the face of this planet and to see Sam Altman behind bars for the rest of his life is longer than I could possibly cover in a single article. The generative AI hype wave he kicked off in late 2022 has not delivered on the promises he made, but has delivered a slew of social harms he hadn’t anticipated.
Over the past few years, we’ve seen it become far easier for misogynists and pedophiles to churn out sexually explicit images of women and children. We’ve seen teenagers and adults alike get hooked on chatbots, leading them to have mental breakdowns, be institutionalized, and even commit suicide. We’ve seen those who want the public to question reality and not know what’s real or fake — whether simply for profit-seeking engagement or more nefarious reasons — use chatbots and image generators to push things into overdrive.
In short, it’s been a social disaster. But even with all that said, I was hardly prepared for the contrast between two stories that emerged in recent days; stories that show how deeply irresponsible and anti-human of a world Altman is trying to create, all while his company ignores its social responsibility in cases where acting could ultimately have saved people’s lives.
Downgrading humanity
Altman is used to being challenged on the resource demands of his AI ambitions, and occasionally he lets slip some pretty revealing responses. In early 2024, he declared we would need to geoengineer the planet to mitigate the climate impacts of all the energy needed to allow him and the tech industry to realize their ultimate plans for the widespread adoption of generative AI — and to pursue the holy grail of artificial general intelligence, or AGI. Yet, on stage at an event hosted by The Indian Express on February 20, he gave a much more worrying answer.

The OpenAI CEO dismissed claims that AI presented a threat to access to fresh water, jumping on a bandwagon that industry boosters have been riding for some time now. Their argument seeks to obscure the local impacts of data center water use by focusing on figures at the regional and national levels or making comparisons to other water-intensive industries. They certainly don’t want you to recognize that data centers were already among the top 10 water-consuming industries in the United States well before the AI boom put things into overdrive.
But the real problem with Altman’s response was how he reframed the question: it wasn’t about how much water and energy AI used, but how much it used in comparison to humans. Altman did not have those comparative figures at the ready; he admitted as much in his answer. He was constructing a theoretical argument that justified his desire to ignore the impacts of his company and the wider industry. In truth, the figures don’t even matter, because he’s engaged in something much more pernicious as he seeks to distract from the impacts of his corporate efforts.
“It takes like 20 years of life and all of the food you eat during that time before you get smart,” Altman asserted, talking about a typical human. “And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.” In short, he’s saying that creating humanity and humans as they now exist required a lot of energy through human history and for each person living today — which means we cannot blame companies like OpenAI for the impacts associated with generative AI and the data centers it requires.
Let’s be clear: this is an absurd line of argument. Altman is seeking to equate AI with humans once again. He’s already tried to sell the public on seeing his chatbots as companions, therapists, and assistants rapidly on their way to human-equivalent levels of cognition, if not there already — assertions that are pure fantasy — and after making those claims, he now wants the resources needed to create AGI to be judged on a human scale too.
There is an undercurrent to the argument that effectively suggests humanity itself needs to be managed if there’s a resource crunch. Human life is downgraded to be equivalent to a machine, and thus has none of the inherent value we tend to associate with it or the qualities that make us uniquely human. There is ample reason to justify the energy use needed to ensure humans live and even thrive — beginning with the fact that we’re actually alive but going as far as to recognize the inherent value of human life that should be preserved and be allowed to flourish. Those qualities do not apply to Altman’s slop-generating machines.

He does not seem to share that same reverence for humanity; his reverence is reserved for the fantastical AGI gods he seems determined to bring into being. This shouldn’t be a surprise. Many of the billionaires at the height of Silicon Valley adhere to an anti-human worldview that not only sees humans merging with machines, but being consumed by them. Altman has paid to have his brain frozen when he dies, in the hope that it can be uploaded to a computer sometime in the future, and has argued that “the merge” — where humans and machines become one — is essential for the future of humanity.
This is all in line with the longtermist worldview, which argues the value of people alive today and people who might live a million years from now is equivalent. If an action today might help ensure billions of people will live in the far future, even if it means harming millions in the present, that is justified under their anti-human calculus. It’s a philosophy that seems to exist purely to justify the science-fictional pursuits of tech billionaires while their actions magnify the suffering of billions of actual people. In fact, those future people they envision are not people at all, but “post-humans” who live in vast computer simulations, not as flesh and blood.
Neglecting responsibility
Altman’s statements on stage in India would have been bad enough, but they appeared even more heartless and anti-human after a report from the Wall Street Journal the following day. Mass shootings are sadly far too common in the United States these days, but they’re still quite rare in many other countries.
On February 10, Canada suffered one of the worst mass shootings in its history when eight people were killed in Tumbler Ridge, British Columbia, including five students and a teacher at a secondary school. After the shooting, OpenAI reached out to authorities to provide information about the shooter’s use of ChatGPT and announced it had banned the shooter’s account months earlier.
However, what OpenAI didn’t say, but the Wall Street Journal discovered, was that employees pushed for the company to reach out to Canadian authorities to alert them to what the person who would later take eight people’s lives was inputting to ChatGPT. The user was flagged through an automated system for suggesting scenarios to the chatbot involving gun violence. “Internally, about a dozen staffers debated whether to take action on [the user’s] posts,” wrote the Journal. “Some employees interpreted [the user’s] writings as an indication of potential real-world violence.”
I’ve seen suggestions online that this presents serious privacy concerns, but I think those people need to check their cyberlibertarian leanings. The companies have been quite open about the fact that chatbot conversations are not fully private, just as people’s search history isn’t. These companies have a duty to the public to identify users trying to use their tools to do harm, just as would be the case in other industries. Simply because something happens online does not mean it exists beyond accountability, and if people don’t want their chatbot conversations flagged, they can simply not use chatbots — or avoid talking to them about committing gun violence or harming people.

There’s no question this was negligence on the part of OpenAI. For a company that has talked so much about AI safety, its leaders are clearly not taking their responsibility for present-day impacts on their users and the wider society seriously — in part because safety, to them, is again associated with fantasy rather than reality. AI safety means aligning AI with humanity so a future AGI doesn’t seek to annihilate us (or some sci-fi foolishness like that). It doesn’t mean stopping real harm, as OpenAI could have helped to do in Tumbler Ridge had its leadership listened to employees pushing them to inform police.
We have already seen all the reports about the negative mental health impacts of chatbot dependence, and ChatGPT even coaching teenagers on how to commit suicide. OpenAI only announced changes to ChatGPT on that front after it was sued over a teenager’s death. But the story about the company’s decision not to report a potential shooter to Canadian law enforcement, coming right on the heels of Altman denigrating humanity to the level of machine, was a bit too much for me to handle.
Believing hype over reality
As far as I’m concerned, there are two big takeaways here. The first is that OpenAI, Altman, and the generative AI industry more widely need to start feeling the pressure. They’ve had a pretty easy ride these past three years, as they made big promises, caused hundreds of billions of dollars to flow in their direction, and generated a slew of social harms they haven’t had to properly account for. This technology is being pushed by people who not only disregard human life, but seek to subsume it to computers, and it’s time they’re not only reined in but seriously questioned and held to account for what they’re doing.
The second is to question what our governments are doing by not just welcoming the industry, but often actively pushing generative AI throughout the public sector and into the private sector too. In response to the Journal’s revelations, Canada’s AI minister said he was “deeply disturbed” and reached out to OpenAI for answers. But he’s more of an AI evangelist than someone seeking to really understand the impacts of these technologies and take action to rein them in. His response to the recent Grok deepfake scandal was little more than a secular version of “thoughts and prayers.”
Our governments are actively selling us out to companies that do not have our best interests in mind, based on promises of increased productivity and a flood of investment that rest far more on hype than reality. There are already signals that companies in other parts of the economy are pulling back from AI investment after not seeing the returns, and that even workers in tech who think they’re becoming more productive thanks to these tools are deluding themselves.
While chasing the hype, governments are leaving their citizens open to abuse and harms that few other industries would so easily get away with. As Altman and his colleagues make it clearer than ever that they care very little for most of the humans on our planet and in our societies, their word should stop being taken as gospel and the impacts of their companies should be assessed on what they’re doing in the here and now, not what they might do sometime in the far future.
Note: I chose not to use the shooter’s name in this piece, hence the references to the person as the shooter or a ChatGPT user.