As hype around ChatGPT begins to wane, Sam Altman is changing his message for 2024. Last year, artificial intelligence (AI) was going to change all our lives for the better, even as it supposedly presented an existential threat to all of humanity. Despite the clear contradiction between those messages, Altman ran with it, both media and government ate it up, and it spawned a cottage industry of doomers to amp up people’s fear of the technology.
By directing our gaze to the future, Altman and the AI industry effectively distracted a lot of people from the very real threats their technologies pose in the present. Instead of thinking about the harmful and discriminatory ways AI might be deployed in policing and social services, we were told to fear a fantasy scenario where computers exceed human-level intelligence, then decide they want to enslave or kill us. That allowed Altman’s company OpenAI to influence regulatory efforts in the United States and Europe to ensure they wouldn’t hamper the company’s goal of rolling out its generative AI models however it pleased.
With the regulatory battle posing less of a threat and having just cemented his grip on OpenAI after a failed coup, Altman has new goals in mind and is shifting his rhetoric to reflect them. In the process, he’s also giving us a glimpse of what his broader future vision looks like — and it presents a much greater risk to the vast majority of humanity than any fantasy AI god.
Sam Altman’s shifting narrative
Altman’s new story still has contradictions, but not nearly as many as the one he was telling last year. Instead of casting AI as a major threat and/or transformative moment, Altman recently told attendees at the World Economic Forum in Davos, Switzerland that even though artificial general intelligence (AGI) — the theoretical point where computers match or surpass the intelligence of humans — would arrive in the “reasonably close-ish future,” it wouldn’t have the sizeable impact he once believed. “It will change the world much less than we all think and it will change jobs much less than we all think,” he said in a conversation hosted by Bloomberg.
In another panel hosted by CNN’s Fareed Zakaria, he expanded further on that thought, saying chatbots like ChatGPT or image generators like DALL-E were “extremely limited” tools and that AI had been “somewhat demystified” over the preceding year. These statements could be seen as Altman turning over a new leaf, but in the latter half of his Bloomberg interview, he fell back into his boosterism, saying that AI was on an exponential curve while imagining fantasy scenarios where “the cost of cognition falls by a factor of a thousand or a million” and where anyone is able to build their own company of “10,000 great virtual employees” who are at the top of their field and never sleep. “This is going to happen,” he declared.
To be clear, the concept of AGI, let alone the notion it’s on the cusp of being developed by private AI companies, is more of a faith-based statement than anything firmly grounded in reality. The idea that computers can ever replicate human intelligence is hotly debated, but it fits into the view of many in the tech industry that everything should be understood through the lens of computation and code. The brain is not a computer, and many of the problems we face — from climate change to the cost of living — are not technological problems. But that’s not the prevailing view of powerful people in tech.
Altman’s language displays a mix of faith, delusion, and deception that can be hard to untangle in Silicon Valley. The belief that AGI is not only achievable, but on the horizon, is a clear act of faith. The delusion comes in with the belief that the technological pursuits of OpenAI are sure to create a better world. Meanwhile, the deception shows up in how people like Altman deploy narratives like the AGI threat to shape regulation in his company’s favor. Not all of his statements are solely about advancing his personal or commercial interests, but the narrative he’s laying out — even if it contains elements of faith or delusion — is aimed at realizing a particular future.
The AI future’s climate risk
As the AI hype started taking off, some people avoided getting distracted by the industry’s line on future AI threats and pointed to a much more immediate one: all those AI tools require a lot more computing power. By extension, that means more energy, more water, and more mineral resources to produce all the computer parts that go into the server racks powering the hyperscale data centers increasingly dotting the surface of the planet. The climate risk of the generative AI rollout was put to Altman by the Bloomberg interviewer, and he gave a troubling answer.
Altman believes the two most important “currencies” of the future are “compute/intelligence” — again, conflating computation with intelligence — and energy. Acknowledging the higher demands of AI tools, Altman told attendees, “we still don’t appreciate the energy needs of this technology.” But instead of suggesting some moderation of his expansive vision of AI deployment, he said an energy breakthrough is necessary. In short, we need to place our faith in technology to deliver a development that will allow his vision for the future to be realized without making it even harder to achieve our climate goals.
In Altman’s future, the AI tools made by OpenAI will not only become even more resource-intensive as the company seeks to make them more capable, but they will be built into virtually every aspect of our lives. That will require an even greater buildout of hyperscale data centers around the world as our demand for computation grows, requiring a lot more energy to power them — not to mention water and computer parts like graphics processing units (GPUs). Altman believes that energy should come from nuclear fission reactors and that a breakthrough in the technology will usher in a future of abundant and radically cheaper energy.
However, while we wait for a breakthrough that may never materialize, he told Bloomberg the planet is “going to have to do something dramatic” and use “geoengineering as a stopgap” as emissions and temperatures continue to increase. That should set off some serious alarm bells.
Tech’s power is a threat
The future Altman is laying out should come as no surprise because it reflects the general consensus of the self-proclaimed geniuses at the top of Silicon Valley, ensuring they can pursue their commercial interests with little hindrance and without a challenge to their power. Tech billionaires accept that sacrifices will have to be made — but they’ll be shouldered by the poorest and most marginalized people on our planet, not the wealthy and powerful.
Elon Musk is the same way, arguing that everything should be electrified so nothing else needs to change — including his access to his private jet — while Bill Gates has spent years arguing that market-based tech solutions will address the climate crisis, reframing it as a tech problem instead of a political one. Gates has been pushing geoengineering for years, despite its immense risks, and has recently begun to tell everyone that 2ºC of warming is basically locked in — not that the consequences will mean much to him.
For Altman, his future is not only one of expected sacrifice by those far below him, but the expansion of his own power and influence. He’s already the head of OpenAI, where he’ll have a freer hand after replacing the board in November. He’s also seeking billions of dollars to become a titan in semiconductor manufacturing and has positioned himself to be a key player in nuclear energy with his chairman positions at Helion Energy, a nuclear fusion research company, and Oklo, a company that seeks to make “advanced” nuclear fission “microreactors.” That’s on top of all the other companies he personally invested in and gained influence with as head of Y Combinator.
After a year of generative AI hype, Altman’s recalibration is a recognition of how much his influence has expanded. Now that OpenAI has survived this long, it’s much harder for regulation or legal threats to halt its ambitions, so the scaremongering is less necessary. Instead, Altman is looking up from his chatbot and ahead to the empire he wants to build and the future he hopes it will usher in. But there’s no guarantee that future is as emancipatory as he wants us to believe, and it contains immense risks that could cause severe harm to millions of people who don’t have nearly the same degree of power to shape their own destinies.