
Across Taylor Swift’s decade-plus career, her stature as one of the most powerful artists in the world has not fully protected her from nonconsensual sexual advances and grotesque violations of privacy. In 2017, the singer won a lawsuit against a DJ whom she accused of groping her at a meet-and-greet, securing a symbolic $1 in damages. In 2019, she accused Kanye West of “revenge porn” after the rapper’s “Famous” music video featured a naked wax figure of her sleeping in bed with him. Almost seven years after the singer-songwriter Father John Misty raised eyebrows with a lyric about “bedding Taylor Swift every night inside the Oculus Rift,” forecasting a future in which advanced technology is deployed to humiliate and expose celebrities, graphic deepfake images of the pop star have gone viral. Here’s everything you need to know.
Deepfake Images of Taylor Swift Went Viral Online
This week, fake, sexually explicit photos of Swift began spreading online, once again raising concerns about the ethics of deepfake technology and its role in the online harassment of women. Some sources suggest the images originated in a Telegram group; others have traced them to the website Celeb Jihad, which Swift reportedly considered suing after it claimed to have “leaked” nude photos of her in 2011. The images gained further traction as they circulated on social-media platforms including X, Reddit, and Instagram. One of the most viral posts was viewed more than 45 million times and stayed up for 17 hours before X suspended the account behind it, according to The Verge.
Swifties Responded by Flooding X
The deepfakes are a violation of X’s content policy, which reads: “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” But since Elon Musk took control of the site in 2022, it has struggled with content moderation amid dwindling staff and looser rules. Last year, X’s head of trust and safety, Ella Irwin, resigned, later citing a disagreement over principles: “It was important to me that there was an understanding that hate speech, for example, violent graphic content, things like that, were not promoted, advertised, amplified,” she said.
Though X suspended several accounts, the images continued to proliferate, prompting Swift’s fans to take matters into their own hands. They flooded X with photos and videos of her concert performances to bury the explicit images, and they coordinated to report the accounts posting them.
What’s Happening Next?
Rumors have surfaced that Swift is considering legal action. A source described as close to the singer told the Daily Mail: “Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor’s consent and/or knowledge.” Swift’s publicist, Tree Paine, did not respond to the Cut’s request for comment.
As deepfake technology becomes more widely accessible, the risk of abuse grows, lending new urgency to calls for reform. Last May, Democratic congressman Joe Morelle introduced the Preventing Deepfakes of Intimate Images Act, which would make sharing nonconsensual, digitally altered intimate images a federal crime. On Thursday, he wrote on X: “The spread of AI-generated explicit images of Taylor Swift is appalling — and sadly, it’s happening to women everywhere, every day.” Another Democratic representative, Yvette Clarke, a co-author of the pending DEEPFAKES Accountability Act, said: “What’s happened to Taylor Swift is nothing new … This is an issue both sides of the aisle and even Swifties should be able to come together to solve.”