A Facebook photo is worth a thousand data points. The social network’s image-recognition tool, introduced this week as a way to describe photos to blind people, has marketers (they’re marketers, after all) dreaming of new ad-targeting possibilities.
Facebook’s service, called Automatic Alternative Text, was built to give blind people an audio breakdown of what’s in photos, but any bit of data the social network collects eventually helps refine its ad targeting. Facebook is predominantly an image-based platform, and until this week, it didn’t really have a way to analyze these images aside from the captions people write for them. Now, thanks to artificial intelligence, it can understand photos down to their component parts.
“This technology changes the game, now that there’s a rich, new data source. Images are the most engaging content on the Internet, but they’re also effectively black boxes to marketers or anyone interested in what’s happening in those images,” said Ophir Tanz, CEO of GumGum, an image-recognition firm that works with top brands on putting such data to use in advertising. “The things Facebook is doing are very advanced and very sophisticated. What they just released, the ability, in a semantically appropriate way, to describe what’s happening in a photo, that’s very advanced and starting to approach the holy grail of image recognition.”
For now, the service for the blind can tell who specifically is in a photo and what’s in the landscape — a beach, a forest. Facebook declined to comment for this story.
“Using this technology, Facebook can create additional audiences, particularly lookalike audiences, increasing their targeting capabilities, just looking at your pictures,” said Luis Sanz, COO of Olapic, which uses image recognition to help brands understand consumer sentiment on social media platforms like Instagram. “For example, if many of your pictures contain dogs, I can probably include you in an audience group that likes dogs without you having to take more actions. Or a beach-lovers group if most of your pictures are at the beach.”
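The audience-building Sanz describes can be sketched in a few lines. The function below is purely illustrative (Facebook’s actual pipeline is not public): given hypothetical per-photo tag lists, it places a user in an interest audience whenever a tag appears in a majority of that user’s photos, the way "many pictures with dogs" would land someone in a dog-lovers group.

```python
from collections import Counter

def build_interest_audiences(user_photo_tags, threshold=0.5):
    """Group users into interest audiences from photo tags.

    user_photo_tags: dict mapping user id -> list of per-photo tag lists
    (the tags standing in for whatever an image-recognition model emits).
    A user joins the audience for any tag present in more than
    `threshold` of their photos.
    """
    audiences = {}  # tag -> set of user ids
    for user, photos in user_photo_tags.items():
        if not photos:
            continue
        # Count each tag once per photo, then compare against photo count.
        counts = Counter(tag for tags in photos for tag in set(tags))
        for tag, n in counts.items():
            if n / len(photos) > threshold:
                audiences.setdefault(tag, set()).add(user)
    return audiences

# A user whose photos mostly contain dogs lands in a "dog" audience;
# a user who mostly posts beach shots lands in a "beach" audience.
photos = {
    "alice": [["dog", "park"], ["dog"], ["beach"]],
    "bob":   [["beach", "sunset"], ["beach"]],
}
print(build_interest_audiences(photos))
```

The threshold and tag vocabulary are assumptions for the sketch; the point is only that once photos become structured tags, segmenting users requires no action on their part.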
The information within photos usually conveys more than the caption Facebook users write themselves. When a brand’s logo is visible in a photo, for example, it goes unmentioned in the caption 80 percent of the time, Tanz said. Most people won’t name the brand of the pizza they’re about to bite into, but if the box is anywhere in the frame, image recognition can detect it.
GumGum can pick out a tiny logo on a wrapper almost anywhere in the shot. Marketers use this data to understand how their sponsorships perform and the level of earned media the brand is getting. For example, a brand like Budweiser could crawl the entire Internet looking for any images from events it sponsors and see if its logo appears, Tanz said.
“If there’s no textual callout, typical social listening is incapable of identifying these mentions,” Tanz said.
Facebook’s tech still has some learning to do, but it holds an advantage over competitors: a massive trove of user data and billions of photos it can use to train its system.
“Facebook has a very significant advantage over everyone else when it comes to developing these types of tools, because the most important part to properly train these AI systems is the dataset. And Facebook has the largest one,” Sanz said.