In 2021, five musicians from Hastings, England, noticed a hole in the market. “There were no massive rock bands making huge, catchy, stadium-worthy anthems,” says guitarist Chris Woodgates. So they started a group called Breezer and hit the recording studio. “We shared our songs with friends, and everybody told us, ‘This could be the new Oasis,’” says drummer Jon Claire. But while tracks like “Alive” and “Forever” bore the obvious influence of Noel Gallagher’s early songwriting, and front man Bobby Geraghty sang through his nose like Liam Gallagher, Oasis-size success never materialized. “Breezer didn’t quite get the momentum we’d hoped for,” says Claire. They played their final live show last summer, or so they thought.
Then something weird happened. A few weeks ago, Geraghty was surfing YouTube and came across a series of videos in which someone had used brand-new generative-AI software to mimic Liam’s voice and swap it into Oasis songs that had originally been sung by Noel. The results — on tracks such as “Don’t Look Back in Anger” and “Half the World Away” — were uncanny. “I thought, Oh my God. I didn’t even know this was possible,” says Geraghty. “But it sparked something in my imagination, and I wondered what it would be like to hear Liam sing our songs.”
Geraghty watched a tutorial on the software and went to work replacing his own voice in eight Breezer tracks with an AI-generated model of Liam’s. He uploaded the new versions to YouTube under the name AISIS, billing them as an “alternate-reality concept album” by Oasis’s classic mid-’90s lineup.
AISIS immediately went viral, amassing 300,000 streams in a week. To many listeners, it sounded like the record they’ve wanted the real Oasis to make for years. (The band has been obstinately broken up since 2009.) Even Liam himself approved. “It’s better than all the other snizzle out there,” he tweeted. “I sound mega.”
The project was so novel it upstaged the new single by Noel’s current band, the High Flying Birds, which premiered days later. “I had a look on Twitter, and Noel’s song isn’t getting many likes and retweets,” says Geraghty. “I’m not saying it’s because the music’s bad. We went viral because people were interested in AI. This is what it takes for a guitar band to get noticed now.”
Two months ago, AI voice-cloning technology barely existed. Now it’s forcing the music industry to consider such tricky questions as whether pop stars own the sounds produced by their own larynges and if we even need flesh-and-blood pop stars at all anymore. There may not be much time to decide because Breezer’s story is already becoming a familiar one.
The first sign of trouble came in February, when DJ David Guetta announced that the sample of Eminem’s voice he’d played during a recent live set had been created with AI. In March, the electronic hip-hop duo AllttA shared the track “Savages,” in which a human rapper trades verses with an AI Jay-Z. And then, most famously, in early April, an anonymous producer released an original song called “Heart on My Sleeve” featuring AI vocals modeled on those of Drake and the Weeknd. “Heart on My Sleeve” was streamed by tens of millions of people, some of whom noted that they liked it better than recent singles by the actual Drake and Weeknd. The producer, who goes by Ghostwriter977 on TikTok, might’ve been motivated by revenge; they claim to have worked as an uncredited songwriter for pop artists and “got paid close to nothing just for major labels to profit.”
Universal Music Group, the major label that usually profits from Drake and Weeknd songs, was predictably upset. The company had “Heart on My Sleeve” pulled from streaming services and issued a statement directed at would-be copycats asking them “which side of history” they wanted to be on: “the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”
Naturally, the guilt trip has failed. TikTok and YouTube are flooded with music by AI clones, including covers of “Get Lucky,” by AI Michael Jackson, “Party in the U.S.A.,” by AI Ariana Grande, “Song 2,” by AI Kurt Cobain, and “Kill Bill,” by AI Rihanna. Most prolific of all is AI Kanye West, who’s already cut a path through much of the Great American Songbook with versions of “Poker Face,” “Fly Me to the Moon,” “Y.M.C.A.,” “Man in the Mirror,” “Eye of the Tiger,” “Wicked Game,” “Losing My Religion,” “Ms. Jackson,” “Mr. Brightside,” “Two Princes,” “Like a Rolling Stone,” “Black Hole Sun,” “You’ve Got a Friend in Me,” “Sweet Caroline,” “American Pie,” and “All Too Well (10 Minute Version),” among others.
The software that makes this possible is called SoftVC VITS Singing Voice Conversion, or So-Vits-SVC. It’s free, open source, and can run locally on any computer with a decent GPU. When it launched in March, it was buggy and required coding ability to use, but it’s been getting easier as updated versions arrive almost daily. If you just want to create a simple cover song, there are now websites that automate most of the process.
To train an AI model on the singer of your choice, feed the app 20 to 30 minutes of high-quality a cappella audio and wait a few hours while it works its magic. If you’re in a hurry, you can use a model made by someone else. (Pop stars such as Bad Bunny and Taylor Swift are available, as you’d expect, but so are metalheads like James Hetfield and Pantera’s Phil Anselmo.) A good model can perform any song you want as long as you have its isolated vocal and instrumental tracks (and if you don’t, there are other programs you can use to separate them). “I’m an average consumer with no coding experience whatsoever,” says Geraghty, “but I figured out how to use So-Vits-SVC from YouTube, and it trained my Liam model in about 12 hours.”
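For the technically curious, the workflow Geraghty describes boils down to a handful of commands. The Python sketch below strings them together; the script names, flags, and file paths are assumptions based on publicly documented spring-2023 versions of the so-vits-svc repository and the Demucs stem-separation tool, both of which change quickly, so treat it as an outline of the steps rather than a copy-paste recipe.

```python
# A rough, illustrative sketch of the cover-song workflow, assumed to be run
# from inside a local checkout of the so-vits-svc repository. Script names,
# flags, and paths are assumptions based on documented versions of the tools
# and may differ in the version you download.
import subprocess

# 1. Split the target song into an isolated vocal stem and an instrumental,
#    using a source-separation tool such as Demucs.
subprocess.run(["demucs", "--two-stems=vocals", "song.mp3"], check=True)

# 2. Train a voice model on 20 to 30 minutes of clean a cappella audio of the
#    singer being cloned (placed in the repo's dataset folder beforehand).
#    This is the step that takes a few hours on a decent GPU; skip it if you
#    are using a model someone else has already trained and shared.
subprocess.run(
    ["python", "train.py", "-c", "configs/config.json", "-m", "44k"],
    check=True,
)

# 3. Convert the separated vocal stem into the cloned voice. The model keeps
#    the original phrasing and melody but re-renders the timbre.
subprocess.run(
    [
        "python", "inference_main.py",
        "-m", "logs/44k/G_30400.pth",  # trained generator checkpoint (example path)
        "-c", "configs/config.json",
        "-n", "vocals.wav",            # the vocal stem from step 1
        "-s", "liam",                  # speaker label used during training (hypothetical)
    ],
    check=True,
)

# 4. Mix the converted vocal back over the untouched instrumental in any
#    audio editor or DAW.
```

The websites that now automate most of the process presumably wrap steps much like these behind an upload form.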
There are limitations. For best results, it helps if your AI clone has the same vocal range as the singer they’re filling in for, which is why Kanye’s cover of “Hello” lacks some of the majesty of Adele’s version. So-Vits-SVC can handle only one voice at a time, so you can forget about having Kanye do the five-part harmonies in “Bohemian Rhapsody.” Also, your AI model will follow the original vocalist’s phrasing and inflections, so singers with accents or other distinctive tics may be harder to replace; see, for instance, Kanye’s disastrous rendition of Nena’s “99 Luftballons.” Most important, neither So-Vits-SVC nor any other software can reliably write good music and lyrics on its own yet, so the best AI-generated songs still require creative input from humans.
Of course, at the rate generative AI is advancing, all of those obstacles could be overcome by tomorrow, which is why many who derive their incomes from recorded or published music are panicking right now. There are unresolved questions about what legal protections rights holders have. The courts haven’t decided if original content created by AI tools that have been trained on copyrighted material counts as infringement. Also, in the past, lawsuits over soundalike vocals have hinged not on copyright but on artists’ rights of publicity. For example, in 1990, Tom Waits won $2.5 million after convincing a jury that a Doritos ad in which another singer imitated his voice could trick people into believing he’d endorsed the chips. But it’s unclear whether that case has any bearing on works like “Heart on My Sleeve” or AISIS, which were both clearly labeled as AI creations, or even how such a precedent could be enforced now that anybody with a laptop can roll their own Freddie Mercury, Nina Simone, or Lil Uzi Vert.
“We could talk all day about copyright and rights of publicity, but AI is here, and it’s here to stay,” says entertainment lawyer Jason Boyarski, whose clients have included Prince’s estate and Meek Mill. “Unless the music industry wants another Whac-A-Mole chase like they had with Napster, they’ll need to find a way to embrace this technology and monetize it so that the artists whose vocals are being used can participate.”
To that end, Grimes has announced she’ll share half the royalties on any hit song featuring her AI-generated voice and suggested she’s training her own vocal model to release to the public. That idea could appeal to pop artists, many of whom outsource their songwriting anyway and stand to become much richer by lending their voices to infinite amounts of new material. It might especially appeal to the estates of dead artists who can’t make their own albums anymore (or living ones like Frank Ocean, Rihanna, and Radiohead, who just can’t be bothered). It would also finally allow talented unknowns like Ghostwriter977 and the members of Breezer to make money from their work. And it will almost certainly create a glut: more music than any fan could ever want or listen to.
Incidentally, AI-generated vocals may have already solved one of pop music’s oldest problems, which is that human voices change over time. Liam Gallagher just turned 50, and his vocals are deeper and coarser on recent albums than they were in Oasis’s heyday, the result of a thyroid condition and three decades of wear. When I asked him about it in 2017, he said, “If I don’t have a couple of days in between shows, my voice just fucking just dies, man … I should be looking after it a lot more.” Now he’ll always have a backup.