Craig Silverman’s stories have it all: lies, fraud and billions of stolen dollars. But they’re far from a true crime podcast. The Toronto-based BuzzFeed media editor writes about fake news, the spread of misinformation on platforms and ad fraud, where every participant in the supply chain is a culprit passing on the blame.
“I have never been lied to by more people in my career than since I’ve been dealing with people in digital advertising. It’s unbelievable,” he said on the Digiday Podcast. “There’s a house of cards going on with the amount of fake traffic and monetization going with that. The infrastructure to target and slice audiences has also been weaponized. It’s been a boon for criminals who are stealing billions of dollars a year, for state-sponsored actors who are trying to identify and target audiences with specific messages and then evaluate the effectiveness and adapt to those.”
Silverman discusses the lack of incentive for marketers to speak up against ad fraud, Facebook’s scale problem and more. Edited highlights below.
Why are marketers OK with seeing their dollars stolen?
“The industry has to look at itself in the mirror and [ask], ‘Why are you OK with so much money being 100 percent stolen?’ It’s not just the marketer whose money is being stolen who doesn’t push back as much as they should. It’s everyone through the supply chain who is not taking responsibility. All the incentives are toward just letting it continue. One of the problems is that we’re not pointing out all the players at every stage of buying and selling inventory who see stuff going on and don’t say or do anything because they’re making money. Nobody wants to see the budgets cut. There’s the old adage that ‘a certain amount of your budget is waste or ineffective,’ but it’s not waste, it’s theft. That is the difference that I don’t see people in the industry understanding. I’ve shown actual chief marketing officers their ads in a fraudulent environment. They thank us, but they don’t want to call it out. They don’t want to go to their CEO or CFO and say, ‘We spent thousands of dollars with that partner; turns out, they’re sending fake traffic to our sponsored content,’ because they don’t want to see their budgets get cut and then lose their jobs.”
Platforms are trying, but it’s not enough.
“The scale [of platforms] is unheard of in human history. On one level, they’ve gotten their arms around the false content. They’ve got partnerships with fact-checkers in many countries around the world. They can fact-check articles, images and videos. There has been some effect from that, but the fact-checkers can’t get to everything. Facebook thinks they’re a year or two away from having AI that’s good at flagging hate speech and falsehoods. They say they’re super effective at ISIS content, for example. The other thing is that they’re hiring about 20,000 people in that broad definition of content moderation and security. What really concerns me is that Facebook and other platforms have realized content moderation is one of their core functions as an entity now. But they farm it out to third parties. They think of themselves as an engineering company.”
Social media use is now fractured.
“I don’t know if we’ll ever see anything as big and dominant as Facebook ever again. They’re experiencing a lot of pains because of that. We’ll continue to see people retreating to private or semi-private environments like WhatsApp, which has its own issues. It creates a whole new problem around false content and not being able to understand what people are seeing on a daily basis.”