As sexually graphic AI-generated images of Taylor Swift flooded X (formerly Twitter) yesterday, causing the phrase “Taylor Swift AI” to trend on the platform, the singer's fans went into an uproar. They worried that the site (whose content moderation team was largely disbanded by owner Elon Musk) wasn't acting quickly enough to remove the posts, which violated X's community guidelines, or to ban the responsible accounts. One of the most viral posts stayed up for 17 hours and garnered around 45 million views.
In the end, the Swifties were able to get much of the AI-generated content taken down through mass reporting and drowned out the rest with an avalanche of condemnation. But for this fan army, it wasn't enough, and some expressed hope that the singer's misfortune would set the stage for a wider crackdown on nonconsensual and invasive AI porn. “The only 'silver lining' to this happening to Taylor Swift,” one influencer wrote, “is that she likely has enough power to get legislation passed to eliminate it.” Many agreed that such images should be illegal. (A representative for Swift did not respond to a request for comment.)
The story also caught the attention of a few lawmakers: Sen. Martin Heinrich of New Mexico tweeted that it was an example of “exactly the danger we face with unchecked artificial intelligence,” adding that Congress must act on the issue. Rep. Tom Kean of New Jersey touted the proposed AI Labeling Act, a bill that would require clear labeling and disclosures for AI-generated material, as part of a regulatory solution.
Swift's superstardom, signs of congressional support, and a highly motivated fan army would seem to promise strong momentum for any effort to root out these nonconsensual AI nudes. But according to civil liberties experts, that crusade will face a thorny and prohibitive set of complications, no matter how fired up the Swifties are.
“It's a huge force, and they stood up,” says Katharine Trendacosta, director of policy and advocacy at the Electronic Frontier Foundation, a nonprofit focused on internet users' privacy and free expression. “But they did that after Ticketmaster, and in a way, we still have Ticketmaster,” she adds, referring to the Swifties denouncing the company as a price-gouging monopoly (and in some cases even suing it) for mishandling ticket sales for Swift's Eras tour. In the AI battle, too, Trendacosta says, we'll see “the unstoppable force of the Swifties against the immovable object that is the legislature,” a Congress that's slow to respond to “basically anything.”
“The problems with the internet are always problems of scale and exposure,” says Trendacosta, noting that explicit depictions of celebrities are nothing new: from nude paintings to Photoshopping their faces onto naked bodies to more sophisticated deepfake videos, celebrities have long been vulnerable to our darkest fantasies. The difference today, she explains, is a matter of “how fast and how much” we see, with AI software enabling relatively few people to produce a staggering amount of content for a massive, hyper-connected audience. A 404 Media investigation found that the Swift images appear to have originated in a Telegram group that used the Microsoft app Designer to create abusive images of real women, which were then spread across social media and celebrity nude websites.
But reform and government oversight are difficult, Trendacosta says, largely because lawmakers' ideas for how to combat deceptive AI have so far been misguided. The EFF, for example, opposes the No AI Fraud (No Artificial Intelligence Fake Replicas and Unauthorized Duplications) Act, introduced by Reps. María Elvira Salazar of Florida and Madeleine Dean of Pennsylvania earlier this month. Why? Because by seeking to guarantee “individual property rights in likeness and voice,” the proposed law would extend rights of publicity — that is, your right not to have a company falsely claim that you endorse its product — to any kind of digital representation, “from photos of your kid, to recordings of political events, to documentaries, parodies, political cartoons, and more,” as the EFF notes in a statement on the bill. Other critics have likewise warned of the chilling effect this would have on digital free speech. Under its expansive language, something as common as sharing a Saturday Night Live clip of a Swift impression could potentially become a criminal offense.
“I know a number of lawmakers are trying to either write new bills or tweak existing laws around revenge porn to crack down on it, but a lot of this is incredibly new,” says Mike Stabile of the Free Speech Coalition, the trade association of the U.S. adult entertainment industry. “However despicable [nonconsensual AI porn] may be, it's still technically speech, and efforts to restrict or ban it may run into obstacles in the courts.”
“In the short term, platforms are the best tool for blocking widespread distribution,” Stabile says, adding that adult sites including Pornhub and Clips4sale “were ahead of the curve and banned deepfakes and revenge porn years ago.” Of course, these rules depend on enforcement — and that, according to Trendacosta, can be an insurmountable task in itself.
“The problem we often see with the bigger companies, like Facebook or Google or even Twitter, which isn't even that big, is that enforcement is really selective because they have so much content,” she says. “It's really impossible.” Incidents like the sudden proliferation of AI-generated sexual images of Swift draw intense focus and garner a relatively quick response, while “the already victimized or marginalized” get little, if any, help, Trendacosta says. The outcry over Swift's admittedly dire situation has far exceeded, for example, concern about children whose photos are being fed into AI models to create child sexual abuse material.
Plus, Trendacosta points out, there are practical limits to the engineering side of the equation. People want to believe that “if the problem is the technology, then the engineer should be able to fix it by building a new technology,” she says, but that doesn't get at the systemic roots of the problem. The Microsoft software used to create the pornographic images of Swift has safeguards meant to prevent exactly this kind of misuse, yet bad actors found ways around them. Nor can we fully rely on filtering technology to detect platform violations. “Machines don't understand context,” says Trendacosta. “If I draw a politician half-naked to make fun of them, that's protected political speech. Machines don't know that.”
So while it's easy to reach a general consensus that it's wrong to spread AI porn that victimizes a pop star, the question of how we might prevent it while guaranteeing the same protections for ordinary citizens — and preserving First Amendment rights — is far murkier. On the one hand, our technologies and the human teams behind them are not up to the task. On the other, government overcorrection could leave us with severely restricted social networks that shut down legitimate forms of commentary.
None of which negates the tenacity of the Swifties who want to tackle the scourge of AI nudes on her behalf. It just means we're unlikely to see a seismic shift anytime soon, and that for all of Swift's influence on culture, some things remain beyond her control.
From our partners at Rolling Stone: https://www.rollingstone.com/culture/culture-features/taylor-swift-ai-generated-nudes-swifties-1234954487/