1.5x Speed

Who? Weekly Makes a Fake Sydney Sweeney


Eventually, we’re being told, artificial intelligence will come for all we know and love. The only question is when — and how. Who? Weekly’s Lindsey Weber and Bobby Finger are keenly aware of this, and rather than anxiously wait for something to happen to their livelihoods — that is, covering the ever-expanding world of B-to-Z-list celebs — they decided to meet the technology where it is. “We’re being told how scary and how bad for our jobs these things are,” Weber told me. “I want to know what it can do and what it absolutely cannot.”

In recent months, the duo tackled that question through two very different uses of easily accessible AI tools. The first emerged from their live tour last year: a bit called “RitaGPT,” in which Weber and Finger prompt ChatGPT to simulate their long-running segment “What’s Rita Up To?” and perform the generated script in real time. A sample from their Los Angeles show in October: “We’ve got quite the episode for you today, folks. It’s all about the whirlwind adventures of pop superstar Rita Ora in the City of Angels, Los Angeles. But before we dive into that, let’s take a moment to appreciate the fact that Rita Ora is still technically a ‘Who.’” (You can view the whole ChatGPT-generated script here.) I caught this routine when I saw the show in Vancouver last fall, and three things struck me then. First, it’s a good bit. Second, it’s a good bit because the generated text doesn’t work as genuinely interesting material in its own right. Finally, the text is nevertheless uncanny in how it broadly simulates a rough theoretical understanding of human performance.
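For readers curious about the mechanics, the bit amounts to typing a prompt into ChatGPT onstage and performing whatever comes back. A minimal sketch of the equivalent step with the OpenAI Python client might look like the following; the prompt wording and model name are illustrative assumptions, not the duo’s actual prompt.

```python
# A minimal sketch of the RitaGPT idea using the OpenAI Python client.
# The prompt wording and model name are illustrative assumptions,
# not the prompt Weber and Finger actually type onstage.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "Write a short comedy-podcast segment in the style of Who? Weekly's "
    "'What's Rita Up To?', in which two co-hosts recap what Rita Ora has "
    "supposedly been doing in Los Angeles this week."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

# The hosts read whatever comes back, verbatim, as the live segment.
print(response.choices[0].message.content)
```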

Weber and Finger’s second AI performance-art project was another bit, layered into the podcast’s year-end special: the annual Who, Me? awards show. Using publicly accessible tools made by the company ElevenLabs, Weber and Finger created fake AI voice models of Glen Powell and Sydney Sweeney, fed them lines of interstitial awards-show banter (which they wrote themselves), and layered the results into the episode as a recurring, prolonged gag. Powell and Sweeney weren’t the only artificially generated voices deployed in the episode. There were also a fake Timothée Chalamet, a fake Lady Gaga, and, as it happens, fake versions of Lindsey Weber and Bobby Finger themselves. The end result is, once again, a very funny bit, but it’s also profoundly weird in a way that gets at not only the strangeness of our supposedly artificially intelligent future but also the artificial nature of celebrity performance itself.
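Finger gets into the production details below, but the basic ElevenLabs workflow is to clone a voice from clean source audio and then send it written lines to synthesize. Here is a rough sketch of that second step against ElevenLabs’ public v1 text-to-speech endpoint; the voice ID, model ID, and sample line of banter are placeholders, and the request shape is an assumption based on the public API documentation rather than anything the hosts describe.

```python
# Rough sketch: send one written line of banter to an already-cloned
# ElevenLabs voice and save the synthesized audio. The voice ID, model ID,
# and sample text are placeholders; the endpoint and headers follow
# ElevenLabs' public v1 text-to-speech API and are an assumption here.
import os
import requests

VOICE_ID = "FAKE_SYDNEY_VOICE_ID"  # placeholder for a voice cloned in the dashboard

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={
        "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "text": "Welcome back to the Who, Me?s. Glen, so lovely to see you.",
        "model_id": "eleven_multilingual_v2",
    },
)
resp.raise_for_status()

with open("fake_sydney_line.mp3", "wb") as f:
    f.write(resp.content)  # the endpoint returns raw audio bytes
```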

Of course, I just had to talk to them about all of this.

Let’s start with RitaGPT. What was the idea behind doing that for the live shows?
Lindsey Weber: We’ve been doing this Rita Ora segment for seven years, and we’re being told AI is gonna take our jobs, so the thinking was, Let them try. It’s a pretty simple, effective joke that creates a kind of funny performance we can then do by reading these newly generated segments that happen at every show.

Bobby Finger: It’s also a visual gag, which we don’t get to do very much on the podcast, obviously. I think people are familiar with AI and ChatGPT, but they may not use it very much themselves, and so to see the prompt get written in real time and then actually spew something out onscreen always elicits at least a few gasps where people are like, This is genuinely sorta alarming. Because it is alarming, right? It’s always different but funny in the same way. And it’s a surprising way to end the show because AI is just novel enough now. I don’t know we could do this again next year, you know?

L.W.: Also, the segment is funny because it’s bad. Like, ChatGPT is not good at doing what we do. It churns out this generic, kooky, almost uncanny-valley radio-show version of our show. What we’ve always thought was great about the RitaGPT segment is that it proves it can’t actually do our jobs. Actually, it sucks. Actually, they don’t know what Rita Ora is up to because they’re scraping the internet and they’re not getting everything.

What was striking to me was how, yes, the generated text was uncanny and surreal, but there’s also a rough level of legible quality to the thing. It still sounds a little human — really bad stand-up at best.
B.F.: It speaks in clichés. It loves doing bits.

L.W.: You calling it stand-up is funny because it has a “voice” that is its own, but it’s not the right voice.

B.F.: What’s disarming is that it learned comedy from some sources it scraped, but it learned us from other sources. It knows our last names. Sometimes it will know what a Wholigan is without being specifically prompted. It learned about us from a New Yorker feature on us, but it doesn’t listen to our podcast because it’s not scraping audio. The voice it speaks in is, like, random radio transcripts where people have banter that’s decidedly not our own. Maybe, at some point, it will start scraping transcripts of us when they’re more widely available, and that could be worrying, but as it stands right now, it’s just hilarious. That was our point, and I think most people got it.

The thing I always try to remind myself about the conversation on AI is that what’s true now probably won’t be true in a year in terms of what it is and what it can do.
L.W.: Oh, for sure. We’re not in denial that this thing is going to adapt and grow, but it’s silly to be scared of the possibility of something and not deal with the actuality of what it exists as. I also wanna be clear we’re just talking about our specific industry, and obviously it’s going to have different effects on different places. I know of industries now where people’s jobs are already being reduced. But we’re being told how scary and how bad for our jobs these things are. I want to know what it can do and what it absolutely cannot. That’s important.

B.F.: I’m not really scared about it at all. No part of me is afraid of it replacing chat conversations or banter between two humans. But I’m wary of all the other insidious ways it’s gonna seep into podcasting beyond hosting the actual main show, right? Like, getting AI to do my ad reads for me. That’s the stuff that it’s actually, maybe, sort of almost capable of doing right now.

Break down the fake Glen Powell and Sydney Sweeney bit for me.
L.W.: We wrote the entire thing. I think some people misunderstood the bit, which is fair. This might also speak to the fact that people don’t quite understand what it’s capable of doing. Also: That took us twice as long to do as normal. AI is supposed to save you time — this did not save us time at all! We had to create the voices and write bits for all of them and then thread it all together and export the files. Even exporting took forever. So much extra work for the humans.

I kept laughing because people were like, “Oh, you’re using AI — I can’t believe that.” But listen to it! It does not sound good. We’re so far from the form being perfected in any way. You can immediately tell these are uncanny-valley computer voices, right? It’s so distracting how bad they are. It was my idea to do ourselves as fake, too, because I wanted you to hear how bad it sounds and how obvious it is that it’s not us.
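For a sense of what the “thread it all together and export the files” step Weber describes could look like in practice, here is an entirely hypothetical sketch using pydub; the clip filenames and pause lengths are placeholders, and this is one plausible way to script the assembly, not how the hosts actually produced the episode.

```python
# Hypothetical illustration of assembling generated voice clips into one
# segment and exporting it. Filenames and pause lengths are placeholders.
from pydub import AudioSegment  # requires ffmpeg for MP3 handling

clips = [
    "fake_sydney_line.mp3",
    "fake_glen_line.mp3",
    "fake_gaga_line.mp3",
]

segment = AudioSegment.silent(duration=500)  # half a second of lead-in
for path in clips:
    segment += AudioSegment.from_file(path)
    segment += AudioSegment.silent(duration=300)  # brief pause between lines

segment.export("who_me_awards_banter.mp3", format="mp3")
```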

And then there were the moments in which AI Sydney Sweeney’s voice melted into an Australian accent.
B.F.: That’s totally random. I don’t know how that happened.

L.W.: I think maybe somebody with an Australian accent spoke up in the clip we took to make her voice.

B.F.: It was very easy to make AI versions of myself and Lindsey because we record our podcast in two separate tracks. So all our audio is extremely clean — it’s just our respective voices. But I don’t have a video of just Glen Powell talking, so we had to dig through to find clean audio of them talking and not having a conversation. The best I found was Powell’s Actors on Actors last year because it was him, with good audio, talking to Kate Hudson where he’s monologuing for long bursts. Sydney Sweeney didn’t have that. In videos of her talking by herself at length, she’s always reacting or responding to someone else, but that’s maybe also why her audio had more character, too.

L.W.: And the more character it has, the more it’s able to sound funny or sarcastic or whatever. Although it can barely tell whether the text is sarcastic or funny anyway. I think the Sydney Sweeney audio we used was an interview from a GQ feature or something. She was being funny, and it was filmed in a weird place where there’s feedback, and that feedback exists in her audio.

B.F.: So ElevenLabs lets you create these voices for a small fee, and they say if you’re going to use this, the terms and conditions are that you don’t do any fraud or something. That’s why we kept saying, “These are fake, these are fake, these are fake.” We also wanted to make sure people knew they were fake on top of them sounding absolutely fake.

So the bit isn’t just Glen Powell and Sydney Sweeney. There’s a bit of fake Lady Gaga and Timothée Chalamet in there too.
L.W.: Lady Gaga sounds great. We wanted to find distinct voices. It’s almost as if Glen Powell’s voice isn’t even distinctive enough …

B.F.: Sydney’s is.

L.W.: … to do this gag, but we wanted to riff off their movie. With Gaga and Timmy, these are voices that celebrity impersonators would love to get their hands on. How many amazing drag queens have done the Gaga voice? Timmy also has a good voice to imitate. Then I made them and they just cracked me up.

We love awards shows, and so we were trying to mimic the form of an awards show where a random celebrity shows up and interacts with another random celebrity. If you could have any combination of any celebrities in the world, what would that sound like?

What’s fun is that the fake AI awards-show banter does sometimes feel kinda indistinguishable from, like, actual Golden Globes banter.
L.W.: So, in a way, AI is really capturing the essence of what it is like for celebrities to interact with each other in real life.

B.F.: There was a worry that I saw a couple of times online, like, “Don’t do this again. Why are you using AI?” And it’s like, Why would we ever use this again? It’s awful. It’s only worth using within the framework of the Who, Me?s.

Anyway, I don’t have any aspirations of using AI again in the foreseeable future. We got it out of our system. It was funny, but if we keep doing it, then it just loses its value.

L.W.: I don’t want people to listen to our show and think things are not gonna be real. If we do some sort of parody, or if Bobby makes a song mash-up, it should sound like it is that. We are not very good audio engineers. The stuff we create has an element of amateur hour, which I think is what we want.

B.F.: I want the precedent for this to be “Oh, maybe once a year, or perhaps never again, they will make a joke about AI using the voice of fake Australian Sydney.”

L.W.: She’s her own character now.

What has the response to the episode been like?
B.F.: The response was overwhelmingly positive. We got maybe two people who were annoyed with us using AI at all — which is totally fair because the worry about AI, especially in the entertainment space, is totally warranted. I think that’s why one of the main AI wins in the writers strike was so great: it was an acknowledgment of the fact that people are gonna start using AI in terms of ideation, but you always have to have a human attached to it. You have to credit the human because, right now, AI only exists because it’s basically stealing stuff from creative people. The human element is always essential. We were using it as a tool. We were the humans who were plugging garbage in to get garbage out.

And our show is good! It’s two of us talking. That’s fine. I don’t need AI Sydney Sweeney in there every week.

L.W.: That’s so true. She’s annoying.

B.F.: She’s annoying!
