This week, the Beatles released a long-dormant song using AI: What other musicians think about the technology

AI in music

Alternative hip-hop artist KC Alexander Robinson is passionate about his music.

“I think people are always going to crave that human interaction,” he declares.

Robinson, practicing vocals Friday in a Northeast Minneapolis recording studio, sees artificial intelligence as a tool, but not a substitute for human musicians.

“We all have a unique voice,” he says. “We all have a unique thumbprint. So how can we safeguard our digital thumbprint so to speak?”

Robinson’s thoughts come just one day after the Beatles released a new song called “Now and Then.”

The band used an AI technology called “track separation” to isolate John Lennon’s voice, originally recorded on a demo.

A guitar part recorded by George Harrison during the 1990s was added to the mix, and Paul McCartney and Ringo Starr contributed bass and drum parts.

“We’ve never been able to separate the audio from these recordings,” says Zach Hollander, an engineer-producer at the Pearl Recording Studio. “Getting any sort of old pieces or snippets from those greats and being able to make new songs opens up tons of opportunities for remixes and new exciting tracks.”

But John Abraham, an engineering professor at the University of St. Thomas and a big music fan, says AI is also having another effect.

“The musician has lost control over the vocal likeness and that’s a real concern for people,” he declares.

Abraham is talking about “AI cover songs” — music generated by an AI program that collects audio files and then, word by word, produces vocals an artist never recorded.

“People around the world are feeding in vocal samples from people,” he explains. “Three to five minutes of your voice fed into machines, and then it replicates you.”  

Abraham likens these programs to a Pandora’s box, leaving artists with no control over their work.

“What would happen if a Prince AI was used to generate 50 Prince songs that he wrote but never performed?” he says. “Is that good or bad? It depends on who you ask.”

Abraham says he even plays AI-generated songs he’s found online for his kids, to see if they can tell whether the tunes are the real thing.

“I play an AI cover song for my kids, and you’ll have maybe a country artist sing a hip-hop song,” he notes. “We as a society haven’t figured out how to walk this path and mitigate the risk but get the benefits. There are some real concerns as well as real benefits.”

Hollander says AI does have some positive applications in the studio.

“Like fixing mistakes in a drum performance, or if a singer’s timing is off, or if their note isn’t on pitch, AI can help us fix them, so we can focus on the creative decisions,” he says.

But he and Robinson believe that people have musical and vocal skills that no technology can duplicate.

“When someone sings a note and it really hits you or something, I don’t know if we can ever replicate that,” Hollander says. “I can almost hear a different way each musician plays the drums or a guitar, and I think you’re going to lose that touch that a musician has, that there is something we all react to about that.”

All of this has left the music recording industry with a lot of questions.

Can AI imitate creativity?

Could musicians lose income because of technology?

Already, there are calls for legal protections for artists whose work is being replicated by AI.

Robinson says using an artist’s work without their consent is simply wrong.

“I do think it’s a form of robbery, and I think it’s going to create chaos and a chain reaction,” he says. “And how do we set up protocols and barriers around artificial intelligence?”