Confessions of a location data exec: ‘It’s a Ponzi scheme’


Advertisers are scrutinizing the data they buy in programmatic advertising more than ever. In the latest installment of Confessions, in which we exchange anonymity for honesty, we spoke to an executive at a location data vendor who said most of what those companies sell is fraudulent.

Why is there a problem in the way location data is sourced and used in advertising?
Users need to give permission within a given app for latitude and longitude information to be passed into the bid stream. Only a very small share of users enable location sharing while using an app, and of those, only a fraction will walk into a store with that app open. On the web, the number of people sharing latitude and longitude while walking into a store is close to zero. Given how hard this data is for location data vendors to source, it's unrealistic to offer it at scale.

How much of the location data in the market is fake?
I recently met with a programmatic leader at one of the agency networks who asked my firm to help verify the location data in the bid stream, because they believe 80 percent or more of the lat-long data available there is fake. No one has stopped to think about where that data comes from, or why a publisher would choose to sell it to a vendor that is going to build a business on top of it. What's actually happening is that these ad tech vendors are padding out the limited data they own with other data sets from competing vendors or other unknown sources. Most reputable publishers would rather use their data across their own business than sell it to ad tech vendors, because the revenue potential is greater against their own content.
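As a rough illustration of the kind of verification exercise the agency asked for, here is a minimal sketch in Python. The field names mirror the `device.geo` object from the OpenRTB spec (`lat`, `lon`, `type`, where type 1 denotes GPS/location services and type 2 an IP-derived lookup); the precision heuristic itself is an assumption for illustration, not a method described in the interview. The idea: genuine GPS coordinates usually carry several decimal places, while fabricated or IP-derived centroids are often truncated.

```python
# Illustrative sketch only: a crude plausibility check on bid-stream
# lat-long values. Coordinate precision is one of several signals a
# verifier might use; it is a heuristic, not proof of fraud.

def decimal_places(value: float) -> int:
    """Count the decimal places in a coordinate's string form."""
    text = f"{value}"
    return len(text.split(".")[1]) if "." in text else 0

def looks_gps_derived(geo: dict, min_places: int = 4) -> bool:
    """Both coordinates present and precise enough to plausibly be GPS."""
    lat, lon = geo.get("lat"), geo.get("lon")
    if lat is None or lon is None:
        return False
    return (decimal_places(lat) >= min_places and
            decimal_places(lon) >= min_places)

# Example payloads, modeled loosely on an OpenRTB "device.geo" object.
precise = {"lat": 51.507351, "lon": -0.127758, "type": 1}  # GPS-style
coarse = {"lat": 51.5, "lon": -0.1, "type": 2}             # centroid-style

print(looks_gps_derived(precise))  # True
print(looks_gps_derived(coarse))   # False
```

In practice a verifier would combine signals like this with others, such as how often the exact same coordinate pair repeats across supposedly distinct devices.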

Does the data even work?
Who walks into a shop with their phone in hand, looking at a publisher's site? That's not how people behave when they're shopping. And yet there are location data vendors selling data sets of people who are supposedly more likely to go into a shop after seeing an ad. There are also players in the market pumping exchanges full of this fraudulent data to satisfy demand from advertisers who want to know that someone visited their store after seeing an ad.

Has the General Data Protection Regulation made it easier to spot the fraudsters?
Not exactly. But the companies that were buying that data now have a real reason to question the model rather than brush it under the carpet. Lots of companies thought they could build businesses on measuring store visits after ad exposure. GDPR's arrival has led some to shutter that part of their business, or their European operations altogether. The apps that were previously sharing or selling the data have stopped doing so to safeguard their own businesses, which has cut off supply to the location data vendors, as most of those developers do not have robust processes in place for managing a GDPR- and privacy-compliant data flow to downstream partners.

What about fraud-detection services?
There are ad tech companies that promise ad buyers they will find the fake data, and they will usually eradicate it for a cost-per-thousand data fee, similar to what happened with viewability and ad fraud. I'm sure there are companies looking to solve this problem, but you've got to wonder: why should I pay more to validate data I already paid for? If this happened in the financial industry, people would have been locked up for it; it's like Bernie Madoff's Ponzi scheme. Fraud-detection companies would have no reason to exist if the market didn't pay for the fraud to take place in the first place. And once they do exist, of course, it's about putting a bandage on the problem rather than fixing it. After all, it's easier to sell morphine than vitamins.
