
ARTIFICIAL INTELLIGENCE is causing major disruptions in creative industries as the technology threatens to render creativity redundant. Actors and writers have been striking to prevent an apocalyptic hellscape in which AI produces every bit of content we consume. Ok, maybe that’s not the only reason, but the rise of AI is certainly one of their concerns. Musicians haven’t yet followed suit, but with AI-created songs taking over the listening space (one has even been submitted for a Grammy) and music streaming platforms unwilling to take a stand, the industry is in turmoil.

Streaming platforms like Spotify are a form of authority in the music industry. These platforms have decreed that some uses of AI in songs, like auto-tune, are acceptable, while more extensive uses, like those that directly mimic artists, are not. There is a middle ground, however, as rapid AI advancements have opened new avenues of creative expression. AI can now create music that is substantially original but still derived from human input, raising questions over whether the burgeoning AI music scene has a place on streaming platforms. Navigating these tempestuous waters, Spotify CEO Daniel Ek says, “is going to be tricky.”

Earlier this year, Spotify removed an AI-created track titled ‘Heart On My Sleeve’, which impersonated the voices of The Weeknd and Drake. Noting that Spotify was already well aware of such instances of creators using AI to clone voices, Ek told the BBC that the platform was working to remedy the issue. “You can imagine someone uploading a song, claiming to be Madonna, even if they’re not. We’ve seen pretty much everything in the history of Spotify at this point with people trying to game our system,” Ek said. “We have a very large team that is working on exactly these types of issues.”

Despite an aversion to AI-created music that directly impersonates other artists, Ek said there’s uncertainty surrounding whether AI-created music that is merely influenced by other artists should be allowed on Spotify. For now, Ek said that Spotify won’t be banning AI music, as long as it doesn’t directly imitate artists and only incorporates their stylings.

We’re getting into dangerous territory here. AI-created songs could start popping up all over streaming platforms if they qualify as substantially original, posing a threat to musicians (the kind who don’t source their creativity from data). Let’s break down the music industry’s current landscape in the wake of AI’s rise.

Could AI cause musicians to go on strike? Hozier thinks so

In an interview with the BBC, Grammy-nominated artist Hozier expressed doubt over whether AI music “meets the definition of art” and suggested he would take action if AI posed a threat to the music industry. “Whether it’s art or not, I think, is nearly a philosophical debate,” Hozier said. “It can’t create something based on a human experience. So I don’t know if it meets the definition of art.” When asked if AI presents such a threat to his industry that it warrants a strike, Hozier didn’t mince his words. “Absolutely,” he said.

Most of us have come across music created by AI before, with AI covers taking over TikTok in recent months. It might be difficult to imagine how those videos of AI Donald Trump singing Lana Del Rey’s ‘Summertime Sadness’ could possibly threaten the music industry, but not only could AI put musicians out of their jobs, it could actually spell the end of music altogether. Allow us to explain.


How does AI threaten musicians’ livelihoods?

As referenced above, a song featuring the voices of Drake and The Weeknd went viral on social media earlier this year. Except, it wasn’t their voices at all. The song’s creator used AI to study the artists’ musical catalogues and clone their voices. ‘Heart On My Sleeve’ was released on popular streaming services before being taken down after it was deemed to violate copyright law. We’d love to play you a snippet of the track to demonstrate the AI’s effectiveness, but legally we can’t.

More recently, Google and Universal Music have been in talks to license artists’ voices for songs created by AI—meaning they could literally buy the rights to a singer’s voice and use technology to make new music without needing their involvement.

While fans won’t complain about more music from their favourite artists, using a musician’s voice to create songs without their consent infringes on copyright and takes money away from the artist. AI-created music that mimics popular artists can cheapen their brand and decrease the demand for their talents. What’s more, if we can just get AI to make new music that sounds just as good, why do we need musicians at all? That’s why some artists are concerned about the threat of AI.

Will musicians go on strike?

Musicians face many of the same digital replacement threats posed to actors and writers, but they haven’t gone on strike yet. As of now, there are no official plans for musicians to go on strike, and it’s not likely they will—unless they can unionise.

The American Federation of Musicians has 80,000 members in the USA and Canada, covering orchestra, film and theatre musicians. But most of the big artists who dominate the charts, sell out stadiums and produce those annoyingly catchy viral TikTok sounds have no union.

Artists are contracted to different record labels, so if a label decides to use AI to create new songs with an artist’s voice, the artist will have to take up any grievances with their label rather than seeking union help. Artists without a label can rely on laws against copyright infringement, but with more and more AI songs being created every day, it will be hard for the law to keep up.

What’s so bad about AI-created music?

We get it, AI-created music can actually be pretty good. Especially when the AI is generating fresh hits from artists who seemingly haven’t dropped in years (we’re looking at you, Frank Ocean). But while AI-created music is fun to experiment with for now—you may be thinking ‘to hell with the musicians, I just want new music!’—it does have long-term limitations.

AI learns by being trained on existing data, with algorithms that refine its grasp of what we mere mortals deem important. It can’t generate genuinely new information; it can only reproduce altered amalgamations of the material it has already encountered.

Bleh. That sounds way too complicated. Basically, what we’re trying to say is that AI can only make music derived from existing music. It can’t account for the trends, tastes and emerging genres that will shape our listening habits in the future, so it can’t create something we haven’t already heard. Sure, if you miss the ‘Old Kanye’, AI can churn out a Graduation-esque track with ease, but if you’re tired of the same played-out records and hoping to hear what a future Kanye will sound like (if he ever leaves his current uber-problematic era), AI will give you diddly squat.

All in all, we’d prefer it if music stayed fresh and fuelled by human ideas rather than ominous technology. So if the alternative is a future where all music sounds exactly like it does today, a few strikes might not be too bad.
