Marketing teams are revisiting brand suitability on social media in 2022
Brands and people want to know that social media apps are safe places to connect, free from exposure to harmful content. Brand suitability describes the practice of determining a particular brand’s tolerance for advertising alongside content that is safe but sensitive. Heading into 2022, brand suitability will remain at the forefront of the advertising industry’s focus on social media.
In a recent Digiday and Meta focus group, brand and agency participants shared their perspectives on the state of brand safety and suitability on major social media platforms under the Chatham House Rule. Their conversation spotlighted the issues they are keeping top of mind: how brands are approaching suitability, the factors influencing advertising decisions around suitability and the technology tools brands and people can use to control the content they encounter.
By agreement at the beginning of the focus group, participants’ names and affiliations have not been disclosed.
The suitability factor
Social media companies are guided by their policies for what is and isn’t allowed on their platforms. Brand safety on social media involves removing harmful content that no brand would want to be associated with and establishing which content can be monetized.
The Global Alliance for Responsible Media (GARM) sets the industry guidelines for reducing the availability and monetization of harmful content online. Social media companies are making progress on safety, beginning with user and advertiser guidelines that define what is and is not allowed on their platforms and that hold every user accountable to those rules.
The focus group highlighted a vital distinction between brand suitability and safety from the social media perspective. Brand suitability is more subjective and determined on a brand-by-brand basis. Content deemed generally safe by a social platform may be unsuitable for specific brands, particularly if it doesn’t align with their industry or values. For example, a video game developer with content geared toward adults may be comfortable advertising alongside an article about alcohol, whereas a more family-friendly brand would not.
Suitability factors that marketers are keeping top of mind
Social media companies are working with GARM’s suitability framework to help brands and users achieve consistency in controlling suitability.
Social platforms maintain partner-monetization and content-monetization policies that determine which content can earn ad revenue. These policies apply to all publishers, but some advertisers may not find them granular enough on their own to control suitability.
In the Digiday and Meta focus group, it became clear that social commentary and news headlines about how social media companies handle significant issues, such as misinformation and hate speech, influence brand decisions on where and how to advertise. The focus group also highlighted that while brand clients have had reservations about advertising on social media apps due to suitability concerns, most continue to use the platforms to drive awareness, reach and revenue.
Another point came to the fore: Because brand suitability is so nuanced, blanket technology tools such as ad block lists meet some, but not all, advertiser needs. Block lists exclude broad sets of words, and any content mentioning those words is avoided regardless of context. A report by YouTube video partner Pixability found that restrictive tactics such as excessive keyword blocking can filter out suitable content along with what a brand has deemed unsuitable. For example, a CPG advertiser that puts the word “knife” on a block list might miss engaging with viewers who watch cooking videos. Brands, particularly those with diverse audiences, are still looking for more granularity in suitability controls. Brands seeking to reach a specific audience want more control over the types of content their ads do not run alongside.
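For illustration only, here is a minimal sketch in Python of how a context-blind keyword block list behaves; the block list and video descriptions are hypothetical and do not represent any platform’s or vendor’s actual tooling. It shows how blocking on the word “knife” excludes a suitable cooking video along with the unsuitable content the list was meant to catch.

```python
# Minimal sketch of context-blind keyword blocking (hypothetical block list).
BLOCK_LIST = {"knife", "weapon"}

def is_blocked(description: str) -> bool:
    """Return True if any block-listed word appears, regardless of context."""
    words = (w.strip(".,!?:").lower() for w in description.split())
    return any(w in BLOCK_LIST for w in words)

# A cooking tutorial is excluded simply because it mentions "knife" ...
print(is_blocked("Chef demonstrates knife skills for dicing onions"))  # True
# ... while unrelated content passes.
print(is_blocked("Weekend meal prep: five easy pasta recipes"))        # False
```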
How social platforms are empowering advertisers with technology tools
Meta’s AI-powered topic-exclusion tools have allowed marketers to choose content-level exclusions around news, politics, gaming and religious material. Publisher lists for in-stream ads have also given brands the ability to select publishers based on suitability guidelines and launch campaigns exclusively on content from those publishers.
The company is now building on these controls, testing new topic exclusions that let brands filter out audiences engaging with the topics of news and politics, debated social issues, and crime and tragedy.
In early testing of these new controls, Meta found that advertisers avoided news and political adjacency 94% of the time, tragedy and conflict adjacency 99% of the time and debated social issues adjacency 95% of the time.
Advancing the path to brand favorability
The focus group pinpointed goals in other ways as well, one being that, moving forward, all major social media companies need to be held to the same bar as advertiser and user safety and suitability standards evolve. Participants also called for social platforms to be more proactive in catching and removing harmful content earlier, and to be more transparent in doing so.
Social media companies including Meta have been responding accordingly. In its latest public-facing Community Standards Enforcement Report, the company found that the prevalence of hate speech on Facebook decreased for the fourth quarter in a row, falling to 0.03% for June to September 2021, down from 0.07%-0.08% for October to December 2020.
Furthermore, the focus group noted that social media platforms could hold themselves accountable by undergoing third-party safety and transparency audits. Partnering with third-party auditors ensures that platform data is measured accurately and reported correctly. Releasing audit results is also a key component in remaining transparent with advertisers and users. To ensure Meta is measuring and reporting results correctly, the company is currently undergoing an audit by accounting firm EY covering Q4 2021, with results set to be released in spring 2022.
The key takeaway was that social media companies and advertisers have to keep working together. Social platforms have an opportunity to continue advancing their solutions for brands and users seeking to keep their online environments suitable, a standard that is still evolving. And as brand and consumer expectations shift, social media platforms continue to develop, test and adapt content suitability tools to improve campaign outcomes for brands and the overall experience for users.