‘Just tell me what the controls are’: What Facebook learned about brand safety

Yes, this is a sponsored article on Digiday. We’re not trying to hide it — indeed, it points to something crucial: Brands think hard about where they place their messages, and Facebook is no exception. Finding favorable environments (in this case, a brand-safe website consumed by digital media pros known for its sometimes brutal honesty) is core to any business strategy. 

But securing favorable environments on social platforms can be a trickier challenge than doing the same with a conventional publisher. And as social media has become the dominant way to get the word out, advertisers’ fears of negative adjacencies have reached a fever pitch.

The upheaval started in 2017, when companies like L’Oréal and Verizon discovered their video ads running alongside ISIS propaganda and other abhorrent content. Overnight, advertisers started agonizing over brand safety — the potential for their ads to appear in contexts damaging to their brands. Social platforms often received the brunt of advertisers’ ire — and they did have work to do. But at least industry-wide conversations were kicking into high gear. “It bred an opportunity for us to continually talk and refine how we’re going about things,” said Louis Jones, EVP of media and data practice at the 4A’s.

Advertisers spoke clearly: They didn’t have enough control. “[Brands] understand that, ‘Okay, I have to take what [social platforms] are serving up to some degree,’” said Jones. “But as long as you can give me an assurance that I can stay out of the deep and dirty end of the pool, I’m good. Just tell me what the controls are.”

Brands had craved detailed insight into where their ads were appearing, along with robust user engagement metrics, for years. The Media Rating Council introduced measurement guidelines for social platforms as early as 2015. Meanwhile, some social platforms took their own steps to foster safer environments. Facebook, for instance, introduced its community standards — guidelines designed to restrict objectionable content like violence, child exploitation and hate speech. But questionable publishers and content creators in the programmatic ecosystem, with their huge, monetizable followings, were still drawing some advertisers like moths to a flame. Moreover, brands and agencies — especially those buying through programmatic platforms — often didn’t have much insight into where their ads were appearing.

Demonetization was one major instrument that social platforms started using to prevent bad behavior from being rewarded. In 2017, Facebook introduced its Partner Monetization Policies — a new layer of rules that publishers and creators needed to meet before their content could make money. But an obvious question arose: By creating too many content restrictions, did social platforms run the risk of driving publishers and engaged users away? Would safety come at the cost of diminished audience reach for brands?

In truth, brands and publishers across the board have concluded that users are more engaged on platforms with safe environments. “The dynamics and interests of our community are actually pretty aligned with the market,” said Abigail Sooy, Facebook’s director of safety and spam operations. “They want the same outcome.” 

Even in situations where brand safety controls seem likely to diminish reach, brands have usually erred on the side of caution. “Risking brand safety for more scale isn’t something that brands are likely to do,” explained Steven Woolway, SVP of business development at DoubleVerify, a third-party measurement specialist that partners with Facebook to provide brand safety tools to advertisers. 

So how does a platform decide on the precise mechanics (and scope) of its brand safety controls? Can it remove unsuitable content while encouraging diverse perspectives? Can it provide options to monetize while limiting the chance of advertiser dollars bankrolling illicit programming? The approach must be multifaceted. “About 85 percent of [social content] is absolutely fine,” said Jones of the 4A’s, which has worked with agencies and platforms to develop brand safety guidelines. “Another 10 percent is questionable because some people have extreme views. But that last 5 percent is where the trouble comes in.” 

Facebook uses a combination of technical and human systems, including AI-driven content recognition tools that quickly detect and remove posts violating its community standards — violence, nudity and hate speech among them. And in its regularly released community standards enforcement report, Facebook discloses metrics on how well it has been preventing and removing violating content. Just as importantly, Facebook relies on industry collaboration, such as its membership in the Brand Safety Institute and the Global Alliance for Responsible Media. These partnerships and tools are crucial given the size of Facebook’s user base, which produces thousands of new posts every second.
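The pattern here is a tiered division of labor: machines handle the clear-cut cases at scale, and people handle the ambiguous ones. As a rough illustration of that pattern (not a description of Facebook’s actual systems; the thresholds, the toy keyword classifier and the routing labels are all invented for this sketch), a minimal Python version might look like this:

# A hypothetical sketch of the tiered "AI plus human review" pattern described
# above. Nothing here reflects Facebook's actual systems: the thresholds, the
# toy keyword classifier and the routing labels are invented for illustration.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous cases go to a human review queue

@dataclass
class Post:
    post_id: str
    text: str

def violation_score(post: Post) -> float:
    """Toy stand-in for an ML classifier that returns a violation score in [0, 1]."""
    flagged_terms = {"weapon", "hate"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.7 * hits)

def route(post: Post) -> str:
    """Send a post to automatic removal, human review or publication."""
    score = violation_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # machine is confident: take it down and log it
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # uncertain: escalate to a human reviewer
    return "allow"

print(route(Post("1", "Look at my new grill")))          # allow
print(route(Post("2", "Selling a weapon, message me")))  # human_review

The point of the two thresholds is the division of labor itself: automation absorbs the volume, and the gray area is reserved for human judgment.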

Facebook also employs 30,000 professionals who work on safety and security. That includes 15,000 content reviewers who pore over more than two million pieces of content every day. “We want AI to do as much as possible,” said Zoé Sersiron, Facebook’s product marketing manager for brand safety. “But obviously we need humans to be very much involved.”

“A piece of broccoli can also look like marijuana,” explained Sooy. “Humans may be better at telling the difference in some cases, and machines in others. It’s a balance between the two, and they work hand in hand.” It often takes a human understanding of context, culture and nuance to catch the more insidious material. “Posting a picture of a weapon does not violate community standards,” explained Patrick Harris, Facebook’s VP of global agency development. “But someone holding a weapon at the camera with text saying ‘I’m coming to get you’ is against policy.” 

Moreover, those teams are organized around topical specialties. “Whether it’s a team that’s thinking predominantly about hate speech or adult sexual exploitation, that team has the responsibility and the accountability to look at that specific area end to end,” explained Sooy. Hate speech is one area where Facebook has been particularly aggressive. In March, the platform removed a slew of accounts in the UK and Romania that were spreading false stories and videos designed to stir up political hate and division.

In one of Facebook’s biggest brand safety reforms, advertisers have been given more control over where their ads are placed. Across placements including Audience Network, Instant Articles and in-stream video, agencies and brands can see which publisher content their ads could appear alongside — before, during and even after a campaign. They can use Facebook’s inventory filter to precisely control the extent to which their ads appear within sensitive or controversial content. They can also prevent ads from running on specified pages, apps or websites by creating “block lists,” and earlier this year Facebook integrated with brand safety-focused third parties to help advertisers manage them.
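Functionally, the two controls compose: a block list is a hard exclusion, while the inventory filter sets a sensitivity ceiling. Here is a minimal, hypothetical sketch of that logic in Python; the tier names, sensitivity labels and field names are assumptions for illustration, not Facebook Marketing API fields:

# Hypothetical sketch of advertiser-side placement controls: a hard block list
# combined with an inventory-filter tier. All names here are invented for
# illustration; they are not real Facebook Marketing API fields or values.
from dataclasses import dataclass, field

# Assumed inventory-filter tiers, from most permissive to most restrictive.
SENSITIVITY_ALLOWED = {
    "full": {"low", "medium", "high"},  # maximum reach, least filtering
    "standard": {"low", "medium"},      # the middle-ground default
    "limited": {"low"},                 # most conservative, least reach
}

@dataclass
class CampaignSafetySettings:
    inventory_filter: str = "standard"
    block_list: set = field(default_factory=set)  # publisher, app or page IDs

def placement_allowed(settings: CampaignSafetySettings,
                      publisher_id: str,
                      content_sensitivity: str) -> bool:
    """Decide whether an ad may serve against a given piece of publisher content."""
    if publisher_id in settings.block_list:
        return False  # block lists are absolute: the advertiser opted out
    return content_sensitivity in SENSITIVITY_ALLOWED[settings.inventory_filter]

# A cautious brand blocks one publisher and tightens the filter.
settings = CampaignSafetySettings(inventory_filter="limited",
                                  block_list={"publisher_123"})
print(placement_allowed(settings, "publisher_123", "low"))     # False: block-listed
print(placement_allowed(settings, "publisher_456", "medium"))  # False: too sensitive
print(placement_allowed(settings, "publisher_456", "low"))     # True

Checking the block list first makes the advertiser’s explicit exclusions absolute, regardless of how permissive the inventory filter is, which mirrors the trade-off the article describes between reach and caution.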

To give brands more oversight over where content appears on the platform — along with the security of a second opinion — Facebook has, since January 2019, permitted them to work with third-party measurement specialists. “It’s important to advertisers to not have Facebook be the sole reporter in control over brand safety,” said DoubleVerify’s Woolway. 

Brands should get to decide the risks and rewards of their own social footprint, no matter where they’re advertising. In practice, most simply gravitate toward safer environments. “Brands are willing to sacrifice some reach to be on the very safe side,” said Sersiron. “Creating those environments is a job that is never done, but a safer Facebook is better for everyone.”
