Dismissing critics has “real and dangerous” consequences

Casey Newton’s diatribe against AI skeptics illustrates a broader issue in tech journalism

Photo: Unsplash/Sincerely Media

There are few things tech critics know better than being dishonestly denigrated for pushing back on the narratives of industry and its boosters.

We’re used to being cast as Luddites as if it were a term of shame, rather than a badge of honor whose history the industry tried to rewrite to turn people off a righteous struggle. We’re also frequently excluded from the conversation until a technology’s harms become too obvious to ignore, or too entrenched to reverse. Some skeptics even face severe career consequences for pushing back on the hype their colleagues are all too eager to embrace, while the industry cultivates its own critics who obscure the root of the problem.

Last week, we were treated to yet another example of this when Platformer owner and establishment tech journalist Casey Newton launched a broadside against AI skeptics, and cognitive scientist Gary Marcus in particular. Two years into this AI hype cycle, and fresh off a conference with a strong “AI safety” focus, Newton seemed to feel it was time to show he’s a true AI understander by echoing industry talking points to slander critics with strawman arguments.

In Newton’s framing, you either fall into one of two camps: those who think AI is “real and dangerous” — a group for serious people like himself — or those who feel it’s “fake and sucks” — the more childish position of Marcus and the skeptics he doesn’t care for. Of course, his framing is a dishonest one that misrepresents the position of many skeptics, as Gary Marcus and journalist Edward Ongweso Jr. have described in their own responses. AI can both suck and be dangerous, and can be real or fake depending on the implementations we’re talking about.

To illustrate that AI is “real,” Newton asserts it’s important to “describe how it is already being used today.” He follows that with a list of stories that supposedly prove AI is changing the world in previously unimagined ways, even though many of the stories he cites do not actually say what he suggests. Newton says he “collect[s] stories like these ones in a file, and the file usually grows by one or two items a week,” but the stories he keeps are overwhelmingly positive. He wants to believe AI is changing the world in particular ways, and those stories, regardless of how accurate they are, confirm that bias.

Source: Bluesky/@crulge.urinal.club

Tech journalism bias is “real and dangerous”

A bias in favor of industry assertions is one we’ve seen over and over again, not just from Newton but from tech journalism more widely. In January of this year, in a Hard Fork podcast interview with crypto investor Chris Dixon, Newton admitted that he “deeply regret[ted]” trying to “keep an open mind” about crypto because, looking back, almost everything he wrote was “at best irrelevant or at worst was stuff that people lost a whole lot of money on.” The skeptics were right about crypto, as he admitted in December 2022, a year after the bubble burst. But just like the industry folks he frequently talks to, Newton wants to assure his readers that this time the skeptics are wrong.

In his paywalled response to the pushback he received, Newton asserts he’s not ignorant of the drawbacks of AI, pointing to some reporting he’s done on subjects like deepfakes, reporting that hasn’t made him rethink using AI-generated images trained on stolen work to illustrate some of his stories. But in calling AI “real and dangerous,” Newton is largely echoing the AI safety position, which effectively holds that AI will match and exceed human intelligence, and that we need to worry about the consequences of such a development.

To Newton’s credit, he doesn’t say that means we should ignore the problems created by AI deployment in the present, as some AI safety proponents like Geoffrey Hinton have been prone to do. But he also doesn’t acknowledge how accepting the myth of superintelligence shapes approaches to regulation in ways favorable to industry; it’s exactly why Sam Altman pushed the notion so hard when regulatory conversations took off after ChatGPT’s release. This is one place where I disagree with Marcus as well, since he too believes superintelligence will be achieved.

Artificial general intelligence, or AGI, is little more than a sci-fi dream of tech executives and engineers who grew up on those stories and desperately want to see them come to life. The acceptance of it, in my view, reveals a desire to place fantasy, if not faith, above reality, and to engage in the kind of wishful thinking that ultimately turned AI pioneer Joseph Weizenbaum against the technology for the rest of his life. As Weizenbaum once wrote, “Computers enable fantasies, many of them wonderful, but also those of people whose compulsion to play God overwhelms their ability to fathom the consequences of their attempt to turn their nightmares into reality.”

Reading an argument like Newton’s, you’d swear AI is a novel thing whose expansion we’ve only been watching for the past few years. Yet the hype cycle launched by ChatGPT is just the latest in a long line of peaks and troughs in the development of the technology over the many decades since John McCarthy invented the term “artificial intelligence” so he could “get money for a summer study in 1956.” There’s little reason to believe generative AI is going to exponentially improve indefinitely, and ample reason to believe its actual capabilities are being exaggerated for ideological and commercial reasons.

But AI can also be dangerous for reasons beyond the prospect of superintelligence or certain present-day implementations that are already harming people. As Ali Alkhatib has written very effectively, understanding that history allows us to see AI and the project behind it in a very different light.

AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to “democratize” AI routinely conflate “democratization” with “commodification”. … This way of thinking about AI (as a political project that happens to be implemented technologically in myriad ways that are inconsequential to identifying the overarching project as “AI”) brings the discipline - reaching at least as far back as the 1950s and 60s, drenched in blood from military funding - into focus as part of the same continuous tradition.

I don’t think you’re going to hear that kind of framing at an AI safety conference, or from an acolyte of Kara Swisher.

The essential role of skeptics

In 1946, Lewis Mumford wrote of being a critic that “For those of us who have been awake to realities, this has been a lonely role.” Mumford was often called a “prophet of doom,” yet librarian Zachary Loeb has shown how his twentieth-century critiques remain incredibly relevant in our digital present. Critics in any age, pushing back against immensely profitable and powerful industries, know they will face a skeptical public primed by narratives circulated by well-resourced public relations teams and, sadly, a journalistic class often far too open to repeating them.

Journalist Sam Harnett dug into the role his colleagues played in defending and maintaining the power of the tech industry, allowing the growing gig economy to escape effective scrutiny in the 2010s. He laid out the many ways that happened: journalists adopted industry language like “startup” and “sharing economy” that obscured details from readers, framed stories in ways friendly to those companies, and allowed firms to describe themselves as “tech” to escape the actual context of the industries they were operating in. In practice, Harnett explained, that meant industry

set the terms of debate and the language with which to engage in that debate; they managed to influence academics, politicians, and regulatory actors in ways that most of these individuals would not themselves recognize; and they didn’t even have to do most of the work: they just had to rely on swooning tech journalists whose careers depended on finding and pumping up whatever gadget or app that seemed like it could become the next big thing.

The credulous coverage given to the gig economy and companies like Uber in those early years helped enable the erosion of workers’ rights and the decimation of taxi regulations that gave them some protection from the whims of corporate executives. When I put that point to Newton in 2021 as he questioned whether crypto skeptics were simply “people hitting the age where they never want to have to understand a new technology ever again,” he repeated Uber talking points about “the taxi model” and asserted “business journalism has historically not started from the standpoint of ‘why is this bad and how can I prove it?’” — not exactly what people are asking for.

Regular readers of Disconnect will know I have a certain degree of contempt for tech journalism as a field, particularly for how cozy it tends to be with the figures it covers and how that ends up shaping coverage in ways that benefit industry at readers’ and the public’s expense. That’s not to say there aren’t plenty of journalists doing great work; even Newton has produced work worthy of praise, like his investigation into the conditions facing Facebook content moderators in the United States.

But those good stories are far too often the exception to a rule of otherwise credulous and boosterish coverage. That coverage is part of the reason companies branded with the tech label have been able to get away with so much over the past few decades, and part of why we’re now trying so hard to clean up the mess they were allowed to create.

Skeptics are not wrong to call out the problems with these companies, the flaws in the misleading narratives they spread about the technologies they deploy on the public, and the frequent failure of those who claim to hold power to account to call things what they are. The problem lies instead with a field of tech journalism that’s far too close to the people and companies it’s supposed to cover to keep the rest of the public properly informed.