No marketer in her right mind has the patience, let alone the resources, to sift through a gajillion online photos to get a sense of how real-life customers view and use her products. And yet, imagine the information companies could potentially unlock about how customers use their wares in the reams of photos they post online.
Enter Carnegie Mellon University. Eric Xing, associate professor of machine learning, computer science and language technologies, and Gunhee Kim, now a post-doctoral researcher at Disney Research, recently created an algorithm that mines photos from social sites like Flickr, Pinterest and Twitpic to figure out what people think of a brand based on variables within the photo. (The researchers were unable to sample Facebook, the site with the largest number of shared photos, because Facebook blocks public crawling.) For the study, the researchers searched for 48 brands across the categories of luxury, sports, beer and fast food, crawling nearly 5 million images in the process.
Digiday spoke to Kim about the photo algorithm, what photo mining could mean for brands, and how marketers may someday use it to their advantage:
What led you to start researching images?
Our goal is to harness popular opinion toward a brand by using images. The main difference from previous work on online brand perception is that no prior work used images; the research was done with text data. Our thought is that text and images complement each other: a picture can show the connection between users and products, the context, and who uses the brand.
How does it work?
Previously, brands used surveys, but the main problem is that those take time and they're expensive. With an algorithm, you can find out the general opinion of the public almost instantaneously. Anyone can take photos of things they want to buy or have bought and post them to the Web. I did a query for Louis Vuitton, and of course there are millions of photos on the Web tagged with Louis Vuitton. Looking through the photos, you can see the public perception of the brand. Using that information, we can discover the differences between brands and how people perceive them.
How does the algorithm work?
The simplest way to explain it is that we built software to crawl images on sites like Pinterest and Flickr to find popular brand logos. Then it clusters the images into several groups. For example, Louis Vuitton might have several groups — like bags or clothing. So through those clusters, we detect what most images are presenting; then, we look for where the product is most likely located.
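The interview doesn't name the clustering method or the image features the researchers used, so as an illustration only, here is a minimal sketch of the grouping step using k-means over toy feature vectors. The two synthetic "product groups" (standing in for, say, bags vs. clothing) and every name in the code are assumptions, not the researchers' actual pipeline:

```python
# Hypothetical sketch: cluster brand photos into product groups,
# then take the largest cluster as "what most images are presenting".
# K-means and the synthetic features are stand-ins; the interview does
# not specify the actual method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend each row is a feature vector extracted from one brand photo;
# two well-separated synthetic groups simulate e.g. bags vs. clothing.
bags = rng.normal(loc=0.0, scale=0.5, size=(60, 8))
clothing = rng.normal(loc=3.0, scale=0.5, size=(40, 8))
features = np.vstack([bags, clothing])

# Cluster the photos into candidate product groups.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# The largest cluster suggests the dominant product category.
labels, counts = np.unique(km.labels_, return_counts=True)
dominant = labels[np.argmax(counts)]
print(f"dominant cluster holds {counts.max()} of {len(features)} photos")
```

In a real system, the feature vectors would come from logo detection and image descriptors rather than random draws, and the number of clusters would itself have to be chosen per brand.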
Once we figure out the clusters, we can see the public view. If you look at Nike, the images skew toward jogging, whereas its competitor Reebok has more American football images and NFL jerseys. But Nike has a lot more variety and a higher volume of images, so you can tell it's more popular.
Did any other broader trends emerge?
One trend is that the images change a lot with the seasons: winter has many more ski images, and so on. The study also saw a lot of activity around events; that's where people take photos. Rolex, for example, had images associated with horse racing, car events or yachts because the brand sponsors a lot of events. The images for a brand are highly synchronized with the events it sponsors.
How can marketers use the software?
There are a lot of different applications once the algorithm is built. One possibility is to understand the interaction between a person and a product; we can then detect what the person is doing. If a photo includes a beer cup, you can figure out the location and then maybe serve an ad.
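The ad-serving idea above can be sketched as a simple rule lookup. This assumes an upstream detector has already labeled the object in the photo and that a location tag is available; the rule table, function name, and ad strings are all hypothetical, invented here for illustration:

```python
# Hypothetical sketch: map an (object, location) pair from a photo to an
# ad. The detector and geotagging are assumed to exist upstream; the
# rules below are invented examples, not a real ad inventory.
from typing import Optional

AD_RULES = {
    ("beer cup", "stadium"): "stadium beer promotion",
    ("beer cup", "bar"): "local happy-hour ad",
}

def pick_ad(detected_object: str, location: str) -> Optional[str]:
    """Return an ad for this (object, location) pair, or None if no rule matches."""
    return AD_RULES.get((detected_object, location))

print(pick_ad("beer cup", "stadium"))  # a matching rule fires
print(pick_ad("wine glass", "bar"))    # no rule: no ad served
```

A production system would replace the lookup table with a learned model over many more signals, but the shape of the pipeline, detect the object, resolve the context, then choose an ad, is the same.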