Copyright can’t stop the flood of AI intangible assets
Tomorrow the artists, the day after, the rest of us...
Let’s be honest, pop music tends to be so formulaic that it could have been generated by artificial intelligence (AI).
Indeed, most people don’t realise that nearly every hit song from the last 20 years was written, in whole or in part, by the same man, Max Martin – and that 90-99% of songs use some variant of a four-chord arrangement. If music were produced in a factory, it probably wouldn’t sound too different from the top 10 radio hits playing right now.
That music can, in fact, be mass-produced scuttles the idea that there's something unique about art and media. Art has always been based on templates. Hollywood and Japanese manga/anime studios even have templates for what needs to be in a work before it can be accepted – because, often, that template works. And the thing about templates is that they are easy for computers to copy.
So, you can understand why the music labels Sony Music, Universal Music Group and Warner Bros Records might be getting nervous. Max Martin and the multi-billion-dollar industry he represents could easily disappear if fans of the Backstreet Boys can generate a fresh song that sounds exactly like them at the click of a button.
As you might expect, the music labels are thinking about legal action. And they are taking it.
Last month, the three labels accused AI companies Suno and Udio of committing mass copyright infringement by using protected recordings to train their music-generating AI systems.
According to the federal lawsuits, the two AI companies copied the music without permission to teach their systems to create music that will "directly compete with, cheapen, and ultimately drown out" human artists' work.
This lawsuit is just one of the dozens of legal actions being prepared or already in motion by businesses trying desperately to slow down what they all see coming from AI: huge disruption to their valuable intangible assets.
Not all businesses are worried, though. Some are salivating at the prospect that AI will soon give them the perfect excuse to get rid of their employees once and for all. But we’ll get to that shortly.
Wait, aren’t humans doing the same thing?
First, it’s worth asking if this music-generating AI is doing anything wrong. After all, as we pointed out in the opening, music is already being mass-produced.
Humans regularly create music that infringes on copyright, mainly because there are so few genuinely new musical forms left to invent. How is that practically different to what AI is doing?
It’s also hard to say that AI “learns” music very differently from humans. When songwriters, composers and musicians hone their skills by listening to existing (often copyrighted) music, they are not infringing. The infringement would arise only if, through that education, they produce music similar enough to count as a derivative work (or a straight copy). But that’s not what most artists do.
So, why is training AI on legally-purchased copyrighted works a copyright violation? It looks and feels exactly like the way a human learns.
Copyright suits only make sense if an AI spits out a direct copy of a song, or at least a tune that sounds like it was heavily derived from an existing work. If a person asks an AI to create a song that sounds just like "Stairway to Heaven" and it does, that person will be in hot water with Led Zeppelin – but the outcome would be the same if they asked a human to do the same thing. That’s why we already have copyright law.
It seems fine for a content creator to say a song or chord progression is theirs. What’s not okay is for an artist to claim they also own the "shadow zones" – everything within a standard deviation, so to speak – around their music. No one should be able to copyright the colour magenta or the number 7. That is why copyright laws exist: to protect the expression of a concept, not the concept itself.
We've been down this path many times. The tricky threshold of originality for intangible assets has dogged us for decades. You can do something similar, but not identical. There is no single test to determine whether one work is a knock-off of another. Judges can make these decisions, but even then, different judges will return different verdicts in similar cases.
If the courts decide that, in this case, it is illegal to use copyrighted works to "train" AI, why would it not also be a copyright violation to "train" a human musician in the same way? Training is simply gleaning information from a pre-existing work. Information and knowledge are never protected by copyright. It’s hard to see any good reason why copyright law should be further deformed to cover training. If the record labels can show infringing output, that’s a good basis for legal action. But surely not the training element.
Obviously, storing music in a representational form rather than a literal form is already legal. Our brains do it all the time. It’s even legal to write down the sheet music to a song if you have perfect pitch.
Output is dicier. If you tell an AI to "output these lyrics {my heart will go on} in the style of Celine Dion," and it does, then technically it's not playing back a recording of Celine Dion’s copyrighted song. It's just re-synthesising it from basic principles coupled with the ability to be a nearly perfect digital imitator of Celine Dion’s voice and music.
You can also teach music students from sheet music if you have purchased this music from a licensed distributor. You can, by ear, figure out a song and write it down for yourself, so long as you don’t then make copies for distribution. You can also be "inspired by" a song and write a new song in the same vein, even though you run the risk of a jury deciding the song sounds a little too much like the existing tune.
None of this is new, except that today all of it can be done “on a computer”.
Music is the tip of the AI iceberg
The problem with AI is not really a legal question, it's an economic one.
Copyright law was initially created to protect artists from being exploited. It was always possible for a human artist to learn from the works of thousands of others and replicate their styles and genres on demand. But it was not commercially viable to do that for cents per song. You would have to pay a competent musician thousands of dollars per song if you wanted quality music.
But Average Joe demanding hundreds of songs on demand for personal consumption – not to be played over the airwaves – was not a risk that copyright holders ever considered. Now, AI can produce such output for almost free. Humans have no hope of competing at that price.
The thing is, this isn't a problem for the creative industry alone.
AI is presently being trained on the written word, images, and music because this is where the plentiful data is. But the goal is not simply to create a machine for producing words, music and art. There's too much money flowing into AI for that to be the goal. The strategy is clearly to reach the holy grail of eliminating employees. Tomorrow the artists, the day after, the rest of us.
AI text, imagery and music are the tip of a very ugly iceberg. Media is just the training wheels for a much larger rollout of AI that will impact everyone. After all, any large enough dataset can theoretically produce a computer model capable of mimicking human output. Every corporation is trying to gather that data to optimise and reduce headcount until there is only a single manager per department telling an AI how to do the job that today is performed by a hundred people.
Customer service, tech support, any form of public-facing finance...the whole chain of purchase to pay...everything will be replaced with an AI model.
Which ought to be fine, right? Letting the machines do all the drudge work? Well, how do you price something that can be done for almost free? That’s a good question if you’re interested in the value of intangible assets.