Facebook’s political ad archive and issue ads policy have frustrated media organizations and marketers over the past few months. But despite the grievances, Facebook’s system has apparently done what it was intended to do: flag foreign actors launching misinformation campaigns.
On Aug. 21, Mark Zuckerberg and other Facebook executives held a call with reporters to explain the company’s latest discovery and its actions on “coordinated inauthentic behavior.” Facebook said it recently removed 652 Facebook pages, groups and accounts, as well as Instagram accounts, that originated in Iran and Russia. During the call, the executives touted the company’s new products for helping surface the bad actors.
“Our ads verification systems flagged some political ads these actors attempted to run. These efforts work in tandem with our progress to reduce the spread of misinformation and fake news. Put another way, our new products and the improved systems operate at scale, effectively shrinking the haystack, which allows our investigative teams to more effectively sift through and look for the specific needles,” said Guy Rosen, Facebook’s vp of product management.
Twitter also said on Aug. 21 that it suspended 284 accounts for “engaging in coordinated manipulation” and that many may have originated in Iran.
The bad actors aren’t actually buying much ad inventory. One group had 12 Facebook pages, 55 Facebook accounts and nine Instagram accounts, but Facebook didn’t detect any ads from it. The continued presence of misinformation campaigns does, however, call into question the authenticity of views and engagement on Facebook.
“We cannot be certain if real people are being reached or if true impressions are delivered. Account verification and creation steps must be made stricter in order to keep multiple accounts and pages from being opened from a single individual and unused or inactive accounts should be shut down after a period of time,” said Rigel Cable, associate director of data analytics at digital agency Fluid.
Facebook has gotten more transparent about manipulation and fraud and put more human resources into combating them. That effort includes creating the ad-transparency tool, hiring thousands of people to monitor ads and hosting more regular calls with reporters.
“As I have said before, security is not something that you ever fully solve. Our adversaries are sophisticated and well-funded, and we have to constantly keep improving to stay ahead. But the shift we’ve made from reactive to proactive detection is a big change,” Zuckerberg said in his prepared remarks on the call.
Meanwhile, agency executives are advocating for solutions to these coordinated attacks. Joshua Lowcock, UM’s global brand safety officer, said he’s been having conversations with senior officials in governments and government-backed security organizations on the topic of brand safety on Facebook.
“We all agree hate speech and terrorism is wrong. What pressure can we bring to bear? We share that intelligence and use advertisers’ interest and the money we spend,” Lowcock said.
For Lowcock, the pressure is not just about pulling advertising dollars but also about Facebook making other changes to ads, such as targeting. On Aug. 21, Facebook removed more than 5,000 targeting options from its Ads Manager, a move the company announced separately from the news on the misinformation campaigns. The majority of the options were for exclusion, where advertisers can select which groups not to target.
“A way we, as an industry, can align is on targeting. We ask, ‘Is any of the targeting something advertisers want? OK, let’s exclude this because it’s not going to hurt our business,’” Lowcock said.