Cheat Sheet: How GARM and MRC work together on platform brand safety


Digital advertisers want reliable and trustworthy reporting from the big social media platforms to insulate their brands from harmful and unsavory content posted to their sites and apps. Right now, two large industry bodies — GARM and MRC — are working in tandem to move the industry toward verified, standardized approaches to measuring brand safety in advertising on social platforms.

Here’s an overview of who they are, why they’re working together and where they stand in that slow-moving process.

Who is GARM and what’s its connection to brand safety on the platforms?

GARM stands for the Global Alliance for Responsible Media. It’s a partnership among the social media platforms — YouTube, Facebook, Instagram, Twitter, TikTok, Snap and Pinterest — big global ad trade groups like the Interactive Advertising Bureau and the 4As, and brands from P&G and Unilever to Dell and Chanel. Formed in 2019 under the auspices of the World Federation of Advertisers, the organization aims to address the brand safety problems that emerge when advertising runs adjacent to, and indirectly funds, harmful content involving violent imagery, child sexual exploitation, disinformation, hate speech, weapons and drugs.

In April, GARM published its first report showing how the platforms are performing against brand safety measures. The report — which showed the participating platforms removed more than 5.3 billion pieces of content during the year prior to publication — includes data based on two new measurements devised by GARM partners: Violative View Rate and Advertising Safety Error Rate.
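To make those two metrics concrete, here is a minimal sketch of how such rates could be computed from aggregate counts. The simplified definitions, function names and figures below are illustrative assumptions, not GARM's published methodology, which relies on sampling and platform-specific measurement and is considerably more involved.

```python
# Illustrative only: simplified stand-ins for two GARM-devised metrics.
# The real methodologies use sampling and platform-specific measurement;
# the function names and all figures here are hypothetical.

def violative_view_rate(violative_views: int, total_views: int) -> float:
    """Share of all content views that landed on policy-violating content."""
    return violative_views / total_views

def ad_safety_error_rate(ads_on_violative_content: int,
                         total_ad_impressions: int) -> float:
    """Share of ad impressions served against policy-violating content."""
    return ads_on_violative_content / total_ad_impressions

if __name__ == "__main__":
    # Hypothetical quarterly counts for a single platform.
    vvr = violative_view_rate(violative_views=18_000_000,
                              total_views=10_000_000_000)
    aser = ad_safety_error_rate(ads_on_violative_content=250_000,
                                total_ad_impressions=2_000_000_000)
    print(f"Violative View Rate: {vvr:.4%}")             # 0.1800%
    print(f"Advertising Safety Error Rate: {aser:.4%}")  # 0.0125%
```

The point of both ratios is the denominator: normalizing violating activity against total activity is what would make such figures comparable across platforms of very different sizes.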

Who is MRC?

Media Rating Council, or MRC, was created in the early 1960s, during the early days of broadcast TV. An industry-funded group with many of the same sorts of members as GARM, it has its roots in verifying media measurement metrics and processes from companies including independent measurement providers as well as digital platforms. Over time, MRC’s verification work has ranged from old-school Nielsen TV ratings to content-level brand safety processes for video ads, as well as display ad impression metrics that have nothing to do with brand safety.

So, why does GARM want to work with MRC when it comes to brand safety?

Think of GARM and MRC as partners in a delicate diplomatic mission to gently encourage — and pressure with the force that only ad dollars can apply — the platforms into agreeing to outside oversight of their brand safety and transparency reporting.

First, a bit of background: Right now, the data the platforms provide for GARM’s brand safety reports is not verified by an independent entity. Instead, the platforms self-reported the information for that inaugural GARM report. And in the case of some platforms, such as Facebook, which already puts out its own content standards enforcement reports, much of the data provided to GARM actually comes from transparency reports those companies already publish.

GARM wants the data supplied by the platforms for its reports to be verified by an independent organization. Because MRC already oversees this sort of verification, it’s the natural choice.

But GARM’s concerns are about more than data from the platforms, right?

Yep. GARM is pushing for all its platform partners to commit to three levels of brand safety audits:

  1. Brand safety controls and operations: This audit level would assess whether sufficient internal controls and processes are in place for measuring against brand safety guidelines.
  2. Brand safety integrations with outside vendors: This audit would look at the processes platforms have in place for areas like proper data transfer when integrating third-party ad measurement firms such as DoubleVerify, Moat or IAS.
  3. Brand safety transparency reporting: This audit level addresses the brand safety data the platforms supply for use in GARM reports.

It’s worth noting that MRC incorporates controls and operations as integral components of all its audits, brand safety and otherwise, while GARM treats platforms’ internal controls as a brand safety audit component separate from the other two categories. So MRC and GARM sometimes use different terms for the same aspects of an audit, which can add to the complexity of these issues.

So, where do the platforms stand in this GARM-MRC process?

Most of the platforms participating in GARM have yet to agree to any outside audit of any GARM or MRC brand safety measures. But here’s where there is some movement as it relates to GARM:

Facebook: Although this story originally reported that Facebook had agreed to an MRC audit of its brand safety transparency reporting for GARM, the company told Digiday after publication that it had made no such commitment. That reporting may become a component of a brand safety-related metrics audit set to get underway with MRC later in June.

And another process is underway as it relates to the more consumer- and media-facing Community Standards Enforcement Reports that Facebook already puts out. On May 19, Facebook said it had selected EY (Ernst & Young) to conduct an audit validating the metrics used in those self-published reports. That matters because EY handles most of the audits of platform ad metrics that MRC oversees. Indeed, MRC hires outside auditing firms, including Deloitte and EY, to conduct the nuts and bolts of its auditing.

YouTube: YouTube is also more engaged in the brand safety measurement process than other platforms, but it has yet to commit to an audit of the brand safety transparency reporting it supplies for GARM. The company has, however, been accredited by MRC for Content Level YouTube Brand Safety Processes for Video Ad Serving through Google and YouTube ad systems. Last year, the video platform began updating its brand safety processes to align with GARM’s standards.

In general, it’s a piecemeal process: these two platforms are at different stages and taking different approaches. Meanwhile, no other platform has publicly committed to any form of independent verification of brand safety measures related to GARM or MRC.

So is anything else holding up the process?

General reluctance to participate in independently led audits that require inspection of data processing and tech is a major obstacle for all the platforms. But bureaucracy could be slowing things down a bit, too. Until GARM’s reporting requirements are finalized and incorporated into MRC’s brand safety standards and audits, MRC cannot begin any audits to verify the data platforms supply for GARM reporting.

That has yet to happen, according to MRC.

This article has been updated to reflect that Facebook has not committed to an MRC audit of its brand safety transparency reporting for GARM. An earlier version of this story reported that Facebook had committed to such an audit; after publication, a Facebook spokesperson told Digiday that it has agreed only to a component of brand safety auditing that does not include the transparency reporting it supplies to GARM.

