How Facebook’s brand safety audit with the Media Rating Council will work
Amid increased pressure from advertisers and campaign groups to quell the volume of hate speech on its platform, Facebook last week committed to undergoing an audit by the Media Rating Council to assess its brand safety controls and its partner and content monetization policies.
In a blog post last week, Facebook said this update — plus earlier announced changes, such as its intention to label “newsworthy content” it would otherwise take down for violating its policies — were a “direct result of feedback from the civil rights community collected through our civil rights audit.” Facebook has said both publicly and privately that it does “not make policy changes tied to revenue pressure.”
Facebook committed to its latest audit “a few days” before publishing the June 29 blog post, said David Gunzerath, MRC svp and associate director.
In terms of when an audit was decided upon, a Facebook spokesperson pointed Digiday toward a Financial Times op-ed from Facebook CEO Mark Zuckerberg published in February 2020. He wrote, “We’re also looking at opening up our content moderation systems for external audit.” Facebook has also been discussing an MRC audit for some time with marketers and with the Global Alliance for Responsible Media, a cross-industry initiative run by the World Federation of Advertisers, the spokesperson said.
“We are beginning scoping discussions with the MRC now, we will share an update on the timing of this audit once finalized with the MRC,” said the Facebook spokesperson.
The MRC has submitted its proposal to Facebook, but the exact scope of the audit has not been decided at this stage. In its blog post, Facebook mentions the audit will assess its ability to apply brand safety controls within other partner publishers’ content that includes advertising slots and appears in-stream, in Instant Articles or on the Facebook Audience Network. Facebook said it also expects the audit to cover its partner monetization policies and content monetization policies — the rules publishers and creators must abide by if they want to make money from their Facebook content through ad revenue — and how it enforces them.
Gunzerath said the MRC would also like Instagram to be included in the audit.
“We think it’s important that our audit be full scope, with as broad a level of coverage as is possible of the various platforms on which Facebook sells advertising,” Gunzerath said.
Additionally, the audit will look at how Facebook’s brand safety processes and controls adhere to industry guidelines. MRC’s brand safety standards are currently assessed against two frameworks from advertising industry group the 4A’s: the “Advertising Assurance Brand Safety Floor Framework” and the “Advertising Assurance Brand Suitability Framework.”
The “floor” framework details a “dirty dozen” of content categories almost no advertiser would want to appear against, such as explicit pornography, spam or illegal drugs. The “suitability” framework is more customizable, depending on an individual advertiser’s perceived level of risk to their brand.
GARM is also working on creating a modified version of the 4A’s frameworks — which were developed in 2018. MRC CEO George Ivie said the intention is to implement the most current and widely accepted floor and suitability structure so it could update the audit to include GARM’s framework once it is published.
Rob Rakowitz, GARM lead, said the initiative is working to drive transparency and clarity around quality controls on the content pools open for advertising, to ensure there is a clear process for categorizing harmful content and to ensure platforms deploy moderators and technology in a way that gives the industry confidence that it’s a priority.
The MRC audit will determine whether Facebook has applied an advertising adjacency standard into its brand safety protections to protect advertisers from those dirty dozen categories.
Auditors will use a combination of “designed activity testing” — creating artificial news feed environments and simulating the insertion of objectionable content for the purpose of exercising the controls — and tests on the live, real-world version of Facebook. The second part will also test Facebook’s infrastructure for how accurately it measures its own brand safety performance. (YouTube has a complicated-looking algebraic formula for how it measures its brand safety error rate value.)
MRC’s audits are carried out by external accounting firms, primarily by EY in the past. The company being audited picks up the tab and the cost — which depends on the scope of the audit and how many consultant hours are required — can range from hundreds of thousands of dollars to over a million dollars. Ivie declined to comment on the expected cost of this audit, beyond saying, “It’s a good-sized audit.”
“People who use Facebook who care about safety — and the same of Google, Twitter, et cetera — generally don’t have a high tolerance for being associated with bad content: If it happens one time, it’s not a good thing,” said Ivie. “We have to execute some very strong testing to get this done.”
It’s unclear as to how long the audit might take.
Discussions between Google and the MRC about its brand safety audit began in 2018. In September that year, the MRC released its “enhanced content level context and brand safety” guidelines. The pre-assessment was completed in 2019 and YouTube’s audit is still ongoing. Gunzerath said the timeline for the audit is “six months or slightly longer,” but that doesn’t include a pre-audit (which may or may not take place). And the conclusion of an audit doesn’t necessarily mean an immediate decision on accreditation, as there may be issues that need to be addressed or clarified.
Meanwhile, Facebook has other MRC audits in process, concerning its integration of third-party viewability vendors on Facebook and Instagram and a separate audit looking into the detection of sophisticated invalid traffic on both Facebook and Instagram. However, The Wall Street Journal reported in May that Facebook’s MRC audits have run into some issues over the way the platform measures and reports video ad metrics.
Audits are also not one-time events.
“We don’t go away once we audit and accredit,” said Gunzerath. “We audit again the next year. If we need to update the criteria, we will update the criteria.”