Getting Facebook, YouTube, TikTok, Twitter and others to agree to independent GARM brand safety verification is a diplomatic dance
For a global partnership between the top social media platforms and brands to move ahead on creating brand safety measurement standards, the side holding the purse strings wants a lie detector test.
Nearly two years after its formation, the World Federation of Advertisers’ Global Alliance for Responsible Media, or GARM, has made progress toward creating a standardized set of brand safety measures agreed upon by platforms and advertisers. In April, the group — whose members include YouTube, Facebook, Instagram, Twitter, TikTok, Snap and Pinterest as well as big-spending global brands like Anheuser-Busch InBev and Unilever — published its first Aggregated Measurement Report, which featured some entirely new brand safety metrics. However, the milestone is marred by the participating platforms supplying their own, unverified measurements.
GARM aims to change that.
The next step is getting the platforms on board to allow an independent auditing firm to sign off on the transparency reporting data they supply to GARM. Although this story originally reported that only Facebook had committed to doing so, Facebook told Digiday after the story was published that it has agreed only to a component of brand safety auditing that does not cover the transparency reporting it supplies to GARM. That leaves the rest of the platforms as holdouts too: there is no indication yet that any of the platforms participating in GARM will agree to third-party auditing of the information they provide to the brand safety transparency group.
“These are ongoing conversations that the GARM Steer Team is having with each of the platforms to make sure that they follow through with their [Media Rating Council] accreditation in a way that is sustainable and appropriate,” said Rob Rakowitz, initiative lead at GARM.
He said that some platforms have fewer resources, such as staff or budget, to fulfill audit requirements. “It would be in everybody’s best interest that there is third-party verification on these numbers,” he said, adding that GARM members expect independent audits eventually to be an integral part of the reporting process.
“They will not be dodged,” said Rakowitz of the potential audits.
The state of platforms playing ball
Facebook publicly stated in July 2020 that it would allow the industry’s go-to measurement verification body, the Media Rating Council, to evaluate its compliance with GARM’s brand suitability framework, in hopes of earning accreditation under MRC’s brand safety guidelines for monetized content. However, those words have yet to produce tangible results. While Facebook has agreed to an MRC audit of brand safety-related metrics set to begin later this month, and MRC planned to eventually fold the transparency reporting standards GARM intends to make official into that audit, Facebook has not committed to that component. Separately, Facebook is planning an independent audit of its self-published content enforcement and standards reports.
Separately, but on related trajectories, both Facebook and YouTube are working with MRC on brand safety-related audits that are not GARM-specific. Facebook’s MRC audit of brand safety metrics is set to commence in June. And Google-owned YouTube has received MRC accreditation for its brand safety processes, which evaluate content on the platform at the individual video level for ads bought through YouTube’s reservation program or through Google’s ad tech. But YouTube has yet to commit to an audit of the brand safety transparency reporting it supplies to GARM. Last year the video platform did begin updating its brand safety processes to align with GARM’s standards.
“YouTube remains committed to partnering with GARM to support its mission to develop an industry-wide approach towards building a more sustainable and healthy digital ecosystem for everyone. We are in discussions with the MRC to explore our next accreditations, but have not committed to an independent audit of our metrics at this time,” a YouTube spokesperson told Digiday.
There’s a bit of bureaucracy adding complexity and slowing the process. MRC cannot conduct an audit to verify data supplied by platforms for the GARM report until GARM’s reporting requirements are finalized and then incorporated into MRC’s brand safety standards and audits. That has yet to happen, according to the MRC.
Meanwhile, GARM participant TikTok believes in the accountability and transparency mission but isn’t ready to commit to a third-party audit. “We don’t really have a stance on it now,” said Dave Byrne, global head of brand safety and industry relations at TikTok, regarding third-party audits of the brand safety data it provides to GARM. But he added that GARM gives platforms a forum “to be transparent in a way that advertisers can hold them accountable, but it never feels like a conflict; it feels like a collaborative working environment.”
Convincing platform partners to agree to outside audits is “of course, a tender process,” said Luis Di Como, evp global media at Unilever, which is a founding member of GARM. While he said GARM advertiser members demand independent oversight of platforms’ first-party brand safety reporting, Di Como acknowledged, “This cannot be done overnight.”
A sign of progress
Overall, the GARM process aims to reconcile the platforms’ disparate efforts to moderate content and provide brand safety-related measurements. For instance, GARM’s Aggregated Measurement Report translates the content violation categories the platforms use internally or in their own branded transparency reporting into standard categories. What Facebook deems “Hate Speech” and “Bullying and Harassment” and Twitter calls “Hateful conduct” are all labeled by GARM as “Hate speech & acts of aggression.”
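In effect, that translation is a many-to-one mapping from platform-specific labels into GARM’s shared taxonomy. Here is a minimal sketch in Python of what such a lookup could look like, built only from the label examples named above; the dictionary structure and function name are illustrative assumptions, not GARM’s actual schema.

```python
# Illustrative mapping from (platform, internal label) pairs to GARM's
# standard category. The pairs below come from the examples in this
# article; the dictionary itself is a hypothetical sketch, not GARM's schema.
PLATFORM_TO_GARM = {
    ("Facebook", "Hate Speech"): "Hate speech & acts of aggression",
    ("Facebook", "Bullying and Harassment"): "Hate speech & acts of aggression",
    ("Twitter", "Hateful conduct"): "Hate speech & acts of aggression",
}

def to_garm_category(platform: str, label: str) -> str:
    """Translate a platform-specific violation label into GARM's standard category."""
    return PLATFORM_TO_GARM.get((platform, label), "Unmapped")

print(to_garm_category("Twitter", "Hateful conduct"))
# -> Hate speech & acts of aggression
```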
At this stage, Rakowitz and Di Como both stressed the value of the newly agreed-upon metrics included in the group’s inaugural report. The new Violative View Rate measures the percentage of views that land on content considered to be in violation, while another new standard used by YouTube for the report, the Advertising Safety Error Rate, gauges the percentage of total ad impressions served on content that violates monetization policies aligned with GARM’s standards.
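As described, both metrics boil down to simple ratios over different denominators: views for one, ad impressions for the other. A minimal sketch of the arithmetic follows; the function names and sample figures are purely illustrative assumptions, not numbers from GARM, YouTube or the report.

```python
def violative_view_rate(violative_views: int, total_views: int) -> float:
    # Share of all views that landed on violative content, as a percentage.
    return 100.0 * violative_views / total_views

def ad_safety_error_rate(violative_impressions: int, total_impressions: int) -> float:
    # Share of all ad impressions served on content that violates
    # monetization policies, as a percentage.
    return 100.0 * violative_impressions / total_impressions

# Hypothetical figures purely for illustration:
print(violative_view_rate(18, 10_000))       # 0.18 (percent)
print(ad_safety_error_rate(25, 1_000_000))   # 0.0025 (percent)
```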
GARM’s report presents a macro-level view of aggregated data, showing what’s happening according to brand safety measures across a platform. But the new metrics already appear to be influencing how the platforms and others report at the campaign level. The existence of those newly created standardized metrics, which GARM hopes MRC will eventually verify for all platforms, “is definitely having a knock-on effect on post-campaign reporting,” said Rakowitz. “We are hearing from not only the content verification companies, but platforms themselves that some of these metrics will be introduced into campaign reporting,” he continued.
The knock-on effects may extend further and go beyond advertising. Ultimately, as regulators and legislators lump together big tech platforms and their alleged harmful societal impacts and demand transparent reporting about hate speech and disinformation, GARM standards could help the platforms align on how they report that information to governments, too, suggested Rakowitz.
“Advertisers, CMOs and media leads are not the only stakeholders,” he said.
This article has been updated to reflect that Facebook has not committed to the MRC conducting an audit of its brand safety transparency reporting for GARM. An earlier version of this story reported that Facebook had committed to such an audit, but after its publication, a Facebook spokesperson told Digiday that it has only agreed to a component of brand safety auditing that does not include the transparency reporting it supplies to GARM.