A viral clip of Johnny Cash singing ‘Barbie Girl’ is uncanny and indicative of the internet’s latest AI obsession — and its creator wants to incorporate the tech into live performances

  • On Friday, YouTube channel “There I Ruined It” posted a Johnny Cash cover of “Barbie Girl.” 
  • The country singer died in 2003 but AI makes it possible to create music from anyone’s voice. 
  • Dustin Ballard, who runs the channel, said he wants to take AI covers to the live stage.


Ahead of the premiere of Greta Gerwig’s “Barbie” movie, deceased country singer Johnny Cash has delivered a cover of Aqua’s sunny ’90s single, “Barbie Girl” — an AI version of Cash, that is.

The cover is one of many posted on the YouTube channel “There I Ruined It” — whose motto is to “lovingly destroy your favorite songs.” There, you can find everything from Elvis singing Sir Mix-a-Lot’s “Baby Got Back” to Frank Sinatra covering “Get Low” by Lil Jon.

Dustin Ballard, who runs the channel, told Insider in an email that videos like these take a few days to make. AI adds "an extra layer of work" to his remixed songs, he said, but it also "opens up countless creative possibilities."

"My process for these is a little different than most people," Ballard said. "I first record the vocals myself so that I can do my best imitation of the cadence of the original singer. Then I use one of their own songs (like 'Folsom Prison Blues' rather than the original 'Barbie Girl' music) to add to the illusion that this is a 'real' song in the artist's catalog, though clearly all done in jest. Finally, I use an AI voice model trained on snippets of the original artist's singing to transform my voice into theirs. I have a guy in Argentina I often call upon for this training (although the Johnny Cash one already existed)."
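The three-step workflow Ballard describes (record a vocal impression, borrow a backing track from the artist's own catalog, then convert the recorded voice with a trained model) can be sketched in plain Python. Every function below is a hypothetical stand-in for one stage of the process, not a real library call:

```python
# Illustrative sketch of the workflow described above.
# All functions are hypothetical stand-ins; no real
# voice-conversion library or API is used here.

def record_impression(song: str) -> str:
    """Step 1: record your own vocal, imitating the artist's cadence."""
    return f"my_vocal_take_of_{song}.wav"

def pick_backing_track(artist_catalog: list[str]) -> str:
    """Step 2: borrow instrumentation from the artist's own catalog."""
    return artist_catalog[0]

def convert_voice(vocal: str, voice_model: str) -> str:
    """Step 3: run the vocal through an AI model trained on the artist."""
    return f"{vocal} (converted with {voice_model})"

vocal = record_impression("Barbie Girl")
backing = pick_backing_track(["Folsom Prison Blues"])
cover = convert_voice(vocal, "Johnny Cash voice model")
print(cover, "over", backing)
```

The key point of the sketch is that only step 3 involves machine learning; the comedic framing comes from the human performance and track selection in steps 1 and 2.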

It's not just one channel: "There I Ruined It" is a microcosm of the new AI music landscape online. A quick search on YouTube will give you covers by politicians like President Joe Biden and former President Donald Trump, or even cartoon characters. On TikTok, you might hear SpongeBob's Patrick belting a gospel song or Squidward crooning like Ol' Blue Eyes.

Although the process of training an AI model on voice samples from scratch may be difficult for the average user, a lot of these covers are now possible thanks to voice cloning programs — voicify.ai, kits.ai, and covers.ai are just a few of many — which allow users to replicate famous artists using already trained AI models. 

With this technology, users can upload a cappella versions of their favorite songs, which are then converted into versions that sound like their favorite artist. The creator of a viral AI Drake song who used the technology noted that it's relatively straightforward to use.

"Right out of the AI, it sounds pretty good. It sounds real," Jered Chavez told The Verge.

“AI voice training is currently not a very user-friendly process, but it’s rapidly getting easier,” Ballard told Insider. “I’m starting a live band version of There I Ruined It, and we’re even looking at the possibility of having live AI voice conversion onstage — though the legal ramifications of this would need to be looked into. Imagine a petite female singer stepping up to the mic and sounding exactly like Johnny Cash.”

Legal ramifications concerning music and copyright law are driving the debate over AI covers in the music industry. Although AI covers online are usually made for fun, some people use the technology to release original music on streaming platforms, impersonating artists like Drake, The Weeknd, and Travis Scott without permission.

Music and copyright lawyers say this could create legal nightmares as these covers flood the internet, and some in the industry are already taking steps to safeguard artists. For example, Universal Music Group asked streaming services to block AI companies from training generative AI models on songs from its catalog, the Financial Times previously reported.

Kits.ai, which allows users to create music using AI-generated voices, told Insider the issue of protecting artists is one they wanted to solve. The company created a system where artists can provide their voices for commercial use under a license that allows them to collect royalties. The company also charges a subscription to pay artists upfront.

Evan Dhillon, founder of Arpeggi Labs, which runs kits.ai, told Insider in an email: “While most of the usage of ai voice models in music thus far has been exploitative and without artist consent we believe that with the right controls, ai voice models could provide tremendous upside for artists creatively and financially.”

Hannah Getahun