Scarlett Johansson can’t escape tech exploitation

OpenAI grabbing her voice is just the latest scandal that demands a regulatory response

Scarlett Johansson at Comic Con in 2019. Photo: Flickr/Gage Skidmore

Less than a week after OpenAI showed off a demo of GPT-4o with a voice that sounded so strikingly similar to Scarlett Johansson’s that Sam Altman went so far as to tweet “her” (a reference to the 2013 Spike Jonze movie of the same name), we found out it wasn’t a coincidence at all. According to a statement released by Johansson, Altman had been trying to get the actress to be the voice of the product as far back as last September. He was seemingly desperate to make his favorite movie a reality, or at least a version of it that provided the aesthetic he needed to keep pushing the fantasy that his large language models will soon develop consciousness.

According to Johansson, Altman pitched her on the idea that putting her voice behind the chatbot would “bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI.” Probably more important to Altman, hearing Johansson would be “comforting to people,” or at least to the tech bros trying to recreate Her. Ultimately, Johansson turned down the offer. But Altman wasn’t done.

Two days before OpenAI’s Spring Update focused on GPT-4o, he contacted Johansson’s agent once again to ask her to reconsider. But he didn’t even wait for a response. The company went ahead with the demo, prompting questions about how much the voice sounded like Johansson, jokes on Saturday Night Live, and messages from the actress’s close friends asking whether she was involved. OpenAI has since pulled the “Sky” voice that imitates Johansson, but the whole debacle raises a number of important issues.

The internet’s dark side

Putting OpenAI aside for a moment, it’s hard to ignore how often Johansson gets caught up in these cycles of tech exploitation and ends up having to become the face of campaigns against it. Back in 2021, she sued Disney after it decided, without consulting her, to put Black Widow on Disney+ the same day it hit theaters. Part of Johansson’s compensation was tied to box office performance, so she could credibly argue she’d be paid less as a result of the decision, but it also happened to play into bigger concerns in the industry.

WarnerMedia had sent its 2021 slate of movies to streaming too, without bothering to talk to the actors and directors involved with the projects first. Those moves heightened the growing dissatisfaction in Hollywood with the streaming model and the choices being made by major studios. Johansson ultimately settled, but her decision to sue was seen as an important move in a growing fight between talent and studios, regardless of her A-list status. That fight came to a head last year when actors and writers went on a months-long strike demanding a better deal. But Johansson may have been primed for that legal fight due to the longer battle she’s been forced to wage against digital exploitation.

“The digital age is cannibalizing us”
Hollywood actors join writers in strike against companies using tech to degrade their profession

When explicit AI-generated photos of Taylor Swift spread across Twitter/X earlier this year, a lot more people woke up to the growing problem of non-consensual deepfake and AI-generated sexual images, which generative AI tools have made much easier to create. But the problem didn’t begin with generative AI. In 2018, Johansson spoke out about deepfakes after fake videos that put her face on graphic sex scenes started spreading online and racking up millions of views. Her statement was scathing, calling the internet “a vast wormhole of darkness that eats itself.”

“The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause, for the most part,” she wrote at the time. “Obviously, if a person has more resources, they may employ various forces to build a bigger wall around their digital identity. But nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired.” For all the benefits that came of online connection, the internet continues to have a dark underbelly that often gets minimized by its biggest defenders.

By that point, Johansson was already well-acquainted with the ways the internet could be used to more easily victimize women, whether celebrities like her or just regular everyday people. Years earlier, she’d seen men build a life-sized robot using her face without her permission. Her email account had also been hacked, and nude photos taken from it were later published online. (The hacker eventually got ten years in prison.) As the generative AI boom has taken off, she’s been dealing with it once again, having already sued Lisa AI for using her likeness to promote its product. Now she’s being forced to turn her attention to OpenAI.

Time for action

Since generative AI tools started taking off in November 2022, companies like OpenAI have been playing fast and loose with the rules that govern the use of copyrighted works and people’s personal data. We’ve seen countless examples of AI image generators churning out visuals that look remarkably similar to major franchise films or the graphic styles of specific artists, and voice actors have reported having their voices hijacked by AI voice generators. A proposed class action was launched last week against an AI company called LOVO, which is accused of doing just that. Johansson, once again, is among those whose voices were stolen.

AI hype is over. AI exhaustion is setting in.
Google and OpenAI’s latest showcases suggest the AI bubble’s days are numbered

In the same way that Taylor Swift’s victimization at the hands of people generating sexual images of her put a spotlight on the issue, Altman’s ham-fisted attempt to build an AI assistant with Johansson’s voice will now do the same for a whole range of other abuses by companies like his own. The scandal has prompted other artists to speak out in support of Johansson and has made lawmakers take note of the problem. The reality is that the issue of Johansson’s voice is just the tip of the iceberg, and as the hype dissipates, this could be an important moment in the deflation of the AI bubble.

In her statement, Johansson connects what OpenAI did to her to those broader issues. “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” she writes. Ultimately, she demands transparency from OpenAI and for lawmakers to move forward with regulation to protect individuals’ rights in the face of AI companies trampling all over them.

This scandal presents an opportunity for voices that have already been trying to raise the issue of theft and exploitation by AI companies to seize the spotlight Johansson has placed on generative AI and demand lawmakers take action. Protecting people’s voices and likenesses is important, but that effort can go much further to take on AI companies training on people’s work without their permission and the broader victimization their tools enable. It’s time to stop falling for Altman’s con and clean up the mess generative AI tools are making.