Geoffrey Hinton’s misguided views on AI

He may have made important contributions to artificial intelligence, but that doesn’t mean he knows where it’s going

Photo: Flickr/Collision Conf

It will probably come as no surprise to you that I’m no big fan of the so-called “godfather of AI” Geoffrey Hinton, and it’s fair to say I was stunned when he was given a Nobel Prize in Physics — as he seems to have been as well. Not long after that announcement was made, I was asked to write a quick piece about it for the Toronto Star, and they’ve allowed me to share it with you.

I think the perspective on AI that Hinton shares — which is often charitably termed an “AI safety” perspective (or, less charitably, he’s a doomer) — is very unhelpful in actually dealing with the realities and potential near futures of AI — the harms to workers and the wider society that have nothing to do with the sci-fi dream of superintelligence. But I do want to say something positive about him.

Hinton joined the University of Toronto in 1987 and is a Canadian citizen. He’s seen a lot that’s happened in Canada over the past several decades. Earlier this week, he revealed that he donated half of his share of the $1.45 million CAD in prize money from the Nobel Committee to Water First, an organization in Ontario training Indigenous peoples to develop safe water systems.

In recent years, Canada has been facing a reckoning for the cultural genocide it inflicted on Indigenous peoples within its borders, from the lack of clean drinking water to the horrors of the residential schools. At a news conference, Hinton said, “I think it’s great that they’re recognizing (who lived on the land first), but it doesn’t stop Indigenous kids getting diarrhea.” He may be misguided on AI, but good on him for that.

Now, here’s my piece on Hinton’s Nobel Prize, first published by the Toronto Star.


In the mid-1960s, MIT computer scientist Joseph Weizenbaum developed a program called ELIZA. It was a rudimentary precursor to chatbots like ChatGPT, designed to simulate a psychotherapist. Upon seeing how people engaged with it, however, Weizenbaum’s optimism toward the technology soured.

The program had no understanding of what users were inputting. Even so, Weizenbaum found that people wanted to believe it did. His secretary even asked him to leave the room as she responded to the system’s questions. Today, researchers call this the ELIZA effect: projecting human traits onto computer programs and overestimating their capabilities as a result.

That phenomenon came to mind recently when I heard the news that Geoffrey Hinton was being honoured with the 2024 Nobel Prize in Physics alongside John Hopfield. While Hinton certainly helped move his field forward, his assertions about the risks of artificial intelligence could distract us from the technology’s real consequences.

You’ve likely heard Hinton referred to as the “godfather of AI.” His work has been key to the development of neural networks and the algorithms that form the basis of chatbots like ChatGPT. Hinton is a professor emeritus at the University of Toronto and split his time between the university and Google until he resigned from the company in May 2023.


There’s no doubt that Hinton has made important contributions to his field. But since the rise of generative AI at the end of 2022, Hinton has become known in tech circles for another reason: he promotes the idea that AI systems are nearing human levels of intelligence, and that they therefore pose a threat to human survival. He is not alone.

But a large number of researchers push back on that idea and charge that Hinton himself has fallen prey to the ELIZA effect.

Hinton asserts that since artificial neural networks were modelled on biological brains, they must work similarly to them. By that logic, a tool like ChatGPT isn’t just using complex algorithms to churn out believable results; it has actually developed a level of understanding that will continue to grow until it exceeds the intelligence of human beings. He says this would mark an “existential threat” to humanity, despite acknowledging to the BBC that as recently as a few years ago most experts “thought it was just science fiction.”

But that’s still the case today.

After theories like Hinton’s gained traction as the hype around ChatGPT grew, science fiction author Ted Chiang criticized the excitement, calling the technology “autocomplete on steroids.” Emily M. Bender, a computational linguist at the University of Washington, has similarly called out people like Hinton for conflating a chatbot’s ability to churn out text with the notion that there’s any meaning behind it.

Put more plainly: things like ChatGPT only appear to be intelligent because they’ve been designed to mimic human language to a plausible enough degree. Their creators want to believe they’re creating intelligent machines, so that’s what they choose to see.

When I spoke to Bender last year, she told me that people like Hinton “would rather think about this imaginary sci-fi villain that they can be fighting against, rather than looking at their own role in what’s going on in harms right now.” AI models present plenty of concerns beyond the supposedly existential and science-fictional ones Hinton is most preoccupied with, including everything from their environmental costs to how they’re already being deployed against marginalized populations today. But when CNN asked Hinton about those concerns in May 2023, he said they “weren’t as existentially serious” and thus not as worthy of his time.

For his contributions to his field, Hinton deserves recognition, and he’s received plenty of it. But just because he’s excelled at advancing AI models doesn’t mean we need to turn to him for answers about their broader societal consequences. Hinton may be an intelligent man, but we shouldn’t assume the same about the technology he helped create.