The rise of smart speakers is drawing more attention to the power of voice, which in turn is pushing publishers to think harder about metadata. Publishers are quite familiar with the importance of metadata for search engine optimization on Google and other portals, but the game gets even more complicated when voice-powered devices deliver only a single result rather than a visual list of options.
“Metadata enrichment has been one of our big initiatives, leaning into playlists and curating our music,” Barak Moffitt, evp of content strategy and operations at Universal Music Group, said at an Advertising Week panel this week. “Smart speakers are making music communal again where the Walkman made it personal. It’s changing what you ask.”
For labels like Universal Music Group, entering metadata for their artists, new and old, has become an increasingly important part of the job. That was already true with the rise of Spotify, Pandora and other streaming services that offer playlists curated by algorithms rather than DJs. But it’s becoming even more crucial given the abundance of smart speakers like Amazon Echo, Google Home and Apple’s HomePod, which let listeners play music with a simple command. Entering metadata requires a lot more work from publishers like Universal Music Group, but as in most mediums, the end goal is to meet consumers where they are, and it’s clear they’re buying smart speakers.
“Overall, it’s the era of consumers having control. Voice has acted as a proxy to get these devices. They’re search engines, and no one can really question what the results are. You don’t really have the option to choose who is controlling the data,” said Dom Joseph, co-founder and CEO of Captify, a search intelligence company.
Pandora’s director of data science, Siddharth Patil, said the transition from mobile screens to voice interactions has created much more complicated requests, as well as richer feedback from consumers. Instead of simply giving a thumbs-up or thumbs-down on songs, Patil said, his team has seen that consumers can now ask for deeper vocals or a song with a faster tempo. Song requests are getting more complicated as well. For example, when a consumer says, “Alexa, play me songs for my workout,” a music label wants to make sure the songs it owns that fit that mood appear. That requires correctly labeling energizing songs such as Britney Spears’s “Work Bitch” or the theme song of “Rocky.”
“Instead of asking for specific artists or songs, they could just describe how they feel or what they are doing. It becomes that much more paramount to not only understand your catalog musicologically but also map it to emotions and activities and other such facets,” Patil said.
Moffitt said his team at Universal Music Group has been investing in sorting songs based on mood or activities, what he called lifestyle characteristics. “Our partners aren’t even accepting all of that data right now, but we’re trying to lead the industry that way, creating a superset of metadata,” Moffitt said.
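The labels’ actual schemas aren’t public, but as a rough sketch of the idea, a “superset” of metadata might attach mood and activity facets to each track so a voice request can be resolved by feeling or occasion rather than by title or artist. The facet names and catalog entries below are illustrative, not any label’s real data:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    title: str
    artist: str
    # Hypothetical "lifestyle" facets layered on top of traditional metadata
    moods: set = field(default_factory=set)
    activities: set = field(default_factory=set)

CATALOG = [
    Track("Work Bitch", "Britney Spears",
          moods={"energizing"}, activities={"workout"}),
    Track("Gonna Fly Now (Theme from Rocky)", "Bill Conti",
          moods={"energizing", "triumphant"}, activities={"workout"}),
    Track("Clair de Lune", "Claude Debussy",
          moods={"calm"}, activities={"studying", "sleep"}),
]

def match(catalog, *, mood=None, activity=None):
    """Return tracks whose facets satisfy a voice-style request."""
    return [t for t in catalog
            if (mood is None or mood in t.moods)
            and (activity is None or activity in t.activities)]

# "Alexa, play me songs for my workout" -> resolved by the activity facet
workout_songs = match(CATALOG, activity="workout")
```

The point of the sketch is that once facets like these exist, the hard work shifts from the query logic (trivial here) to labeling the catalog consistently, which is the enrichment effort Moffitt describes.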
Of course, the need for metadata extends beyond music. Joseph said it will be a challenge for retailers to compete when devices, especially ones powered by Amazon’s Alexa, deliver only one answer. He gave the example of shopping for a dress via Amazon’s Echo. Indeed, Amazon’s ability to surface its own products in retail as well as in music has those industries a bit wary. Moffitt of Universal Music Group said he thinks much of the growth of Amazon Music is due to the release of the Echo Dot.
“I don’t want to trust Amazon with giving back the perfect dress,” Joseph said. “It’d be great if the ecosystem that opens up for voice is not just seen as Amazon or Google.”