Drake's track with an AI 2Pac verse didn't last long. A day after the Tupac Shakur estate threatened to sue Drake for using an AI imitation of the late rapper's voice on "Taylor Made Freestyle," he took down the recording. By using 2Pac's voice, however, Drake opened up yet another important conversation about generative artificial intelligence — one that reveals just how risky the business is, and how rights holders may have more power to shape it than they realize.
So let's talk law. In the cease-and-desist letter sent on behalf of the Shakur estate, attorney Howard King addressed both Shakur's personality rights — which include rights of publicity, or what some states call rights of likeness — and the copyrights in the rapper's recordings and songs. Most coverage of this matter has focused on the former issue, since personality rights are relatively clear: Shakur's estate controls the rights to the rapper's distinctive style. The second issue gets complicated, because the copyright in the recordings — and possibly the copyright in the songs — has less to do with Drake's use of 2Pac-style vocals than with how he was able to create them in the first place.
To create such a convincing imitation of 2Pac, an AI model would almost certainly have to ingest — and, in the process, copy — a significant number of Shakur's recordings. So King, in his letter, asked Drake "for a detailed explanation of how the sound-alike was created and the people or company that created it, including all recordings and other data that were 'scraped' or used." Any answer Drake gave would have moved the issue into legal terra incognita: ingesting copyrighted recordings and songs into an AI model involves copying them, and it is unclear whether that can be done without a license under fair use. The stakes would be high, though. Unlike a California right-of-publicity violation, which would be relatively easy to prove but would incur limited damages, copyright infringement is a federal matter that comes with statutory damages of up to $150,000 per infringed work. That means a company that ingests 20 works to create one could be liable for up to $3 million.
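The exposure math above scales linearly with the number of works ingested, which is what makes training on large catalogs so risky. A minimal sketch of that arithmetic (the $150,000 figure is the statutory cap cited above; the catalog sizes are illustrative assumptions, not figures from any actual case):

```python
# Statutory-damages exposure: cap per infringed work times number of works.
# MAX_PER_WORK is the cap cited in the article; the work counts are examples.
MAX_PER_WORK = 150_000

def max_exposure(works_ingested: int) -> int:
    """Upper bound on statutory damages for a given number of infringed works."""
    return MAX_PER_WORK * works_ingested

for n in (20, 1_000, 100_000):  # hypothetical catalog sizes
    print(f"{n:>7,} works -> up to ${max_exposure(n):,}")
```

At 20 works the ceiling is the $3 million mentioned above; at catalog scale the same formula reaches into the billions, which is why the fair-use question matters so much to AI companies.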
For the past year, music creators and rights holders have been talking about generative AI as something that's coming — the deals they'll negotiate, the terms they'll set, the business they'll do. But tech companies tend to ask forgiveness rather than permission, and it seems that some of them have already ingested a significant amount of unlicensed music. Think about it: none of the major labels have announced deals allowing AI companies to ingest their recording catalogs, yet enough recordings have been ingested to enable AI vocal imitations of Drake, 2Pac, Snoop — even Frank Sinatra performing "Skeet skeet" by Lil Jon. That means a company or companies could be in big trouble. Or that they have a first-mover advantage over their competitors. Or both.
Part of the reason tech companies are moving ahead anyway is that deals involving new technology are complicated to strike. In this case, how do you value a license you're not sure you need? If you believe that companies need a license to ingest music in order to let users create AI voice imitations — as seems likely — the price for that license will be relatively high, with complicated terms, because rights holders will obviously want to be compensated on an ongoing basis. (It is extremely difficult to structure a fair one-time license to ingest a catalog of music: first, since copyright law controls copying, the licensor would lose any control not specified in the contract; second, it would be hard for a potential buyer to raise the kind of money a seller might want, so the economics of ongoing payments make more sense.) If you think the ingestion would fall under fair use — which is very likely in some edge cases but much less so in general — why pay a high price, much less accept complicated terms?
The legal cases that tip the scales one way or the other will move at the speed of litigation, which is slower than culture, much less technology. The first big case will be against Anthropic, which Universal Music, Concord, ABKCO and other music publishers sued in October for training AI on the lyrics of compositions they control. (Universal's agreement with YouTube on AI principles may make a fair-use ruling somewhat less likely, since it shows that major rights holders are willing to license their music.) Other cases are already under way in other parts of the media business — The New York Times sued OpenAI and Microsoft in December, for example — and any one of them could set a major precedent.
Until that happens — and maybe even after — there will be deals. Very few rights holders have much interest in stopping AI; some might in some cases, but it's a losing battle. What they really want to do is use the power they have to destroy, or at least delay, a startup in order to shape the business. ("The power to destroy a thing is the absolute control of it," in the words of Paul Atreides, Padishah Emperor of the Known Universe — which may be an exaggeration, but he has a point.) This will give them real power — not only to monetize music with AI but to shape the terms of engagement in a way that, let's face it, is likely to favor big companies with big catalogs. It will be interesting to see what they do with it.
from our partners at https://www.billboard.com/pro/ai-deepfakes-beware-copyright-infringement-cost-big-money/