Brand safety continues to be a thorny issue without any single straightforward fix. Even the simplest-seeming brand-safety measure can be exceedingly complicated.

For years, advertisers and agencies have blocked their ads from appearing against content that contains certain keywords, such as profanity and slurs. Those keyword lists can also contain terms that appear innocuous in isolation but inflammatory in the wrong context.

Fearful of an ad appearing in a controversial context, an advertiser may include a term like “gay” in a keyword list because it does not want to appear alongside content promoting hate crimes. However, with that inclusion comes the risk that the advertiser avoids all content containing the term, such as a profile of a gay celebrity, and alienates that audience.

“By and large, keywords are still a pretty blunt tool to use,” said Andrew Goode, evp and head of programmatic at Havas Media North America.

Vice recently drew attention to the issue of keywords being too blunt a tool. At the publisher’s NewFront presentation on May 1, Vice announced that it will no longer allow ads to be blocked from running on pages containing keywords such as “gay,” “Asian,” “Muslim,” “climate change,” “immigrant” and “fat.” While agency execs applauded Vice for taking a stand on the issue, they are wary of individual publishers making such judgments on behalf of advertisers. “Brand safety is and always has been a subjective area,” said Goode, who advocated for advertisers to look beyond just keywords when it comes to brand safety.

Nonetheless, keyword lists continue to be an important component of advertisers’ brand-safety measures; Goode described them as “the final filter rather than the only filter.” But to ensure they are an effective filter, they need to be more refined in their application so that an advertiser is not avoiding all articles containing the term “gay” but is making sure not to support the ones that are using it in a derogatory way. “It’s those semantic combinations that really matter,” said Joe Barone, managing partner for brand safety in the Americas at GroupM.
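The “semantic combinations” Barone describes can be illustrated with a minimal sketch. This is a hypothetical toy filter, not any vendor’s actual logic, and the context-term list is purely illustrative: instead of blocking every page containing a flagged keyword, it blocks only when the keyword co-occurs with terms that suggest derogatory use.

```python
# Hypothetical sketch of a context-aware keyword filter.
# The context-term set below is illustrative, not real vendor data.
DEROGATORY_CONTEXT = {"slur", "attack", "hate"}

def should_block(article_text: str, keyword: str) -> bool:
    words = set(article_text.lower().split())
    if keyword.lower() not in words:
        return False  # keyword absent: nothing to block
    # Block only when the keyword co-occurs with derogatory context terms
    return bool(words & DEROGATORY_CONTEXT)

# A profile of a gay celebrity would not be blocked...
print(should_block("a profile of a gay celebrity", "gay"))       # False
# ...but a page where the term co-occurs with hateful language would be.
print(should_block("hate speech targeting gay people", "gay"))   # True
```

A production system would look at far more than word co-occurrence on a single page, but the shape of the decision, keyword plus context rather than keyword alone, is the point.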

Agencies such as GroupM, Havas Media and Publicis Media have been working with ad verification companies to better account for context when deploying keyword lists to address clients’ brand-safety concerns. Those efforts began a few years ago with semantic analysis of advertisers’ keyword lists, looking for other words on the page that can indicate the context in which a keyword is being used. “The next step is sentiment analysis,” said Barone.

Verification vendors can use sentiment analysis to evaluate whether an article is positive or negative in relation to a given keyword when determining whether an ad should or should not be blocked from appearing on the page. “Some of the newer approaches that the verification companies are employing are designed to identify the nuance in the sentiment as opposed to the raw keyword,” said Barone.

Sentiment analysis can be used to make judgments when an article’s sentiment isn’t binary. In addition to judging articles as being positive or negative, an article can be categorized as neutral, which would be OK for most clients though not OK for risk-averse advertisers, Barone said.
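The three-way positive/neutral/negative judgment Barone describes can be sketched as a simple decision rule. This is a hypothetical illustration: the score range, thresholds, and `risk_averse` flag are all assumptions, not how any verification vendor actually works.

```python
# Hypothetical sketch: mapping a sentiment score to a block decision,
# with a "neutral" band that risk-averse advertisers can opt out of.
# Score range and thresholds are illustrative assumptions.

def block_decision(sentiment_score: float, risk_averse: bool = False) -> bool:
    """sentiment_score in [-1.0, 1.0]; lower means more negative."""
    if sentiment_score < -0.3:
        return True            # clearly negative: blocked for all clients
    if sentiment_score <= 0.3:
        return risk_averse     # neutral: OK for most, blocked if risk-averse
    return False               # clearly positive: never blocked

print(block_decision(0.0))                    # False: neutral OK for most
print(block_decision(0.0, risk_averse=True))  # True: neutral blocked here
print(block_decision(-0.8))                   # True: negative blocked
```

The neutral band is what distinguishes this from a binary judgment: most clients accept it, while a risk-averse advertiser simply flips one flag.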

However, sentiment analysis comes with its own complexities. To analyze whether an article is positive, negative or neutral, a sentiment analysis tool may need a wider array of information than the article itself, such as the comments on the page and metadata related to the images or videos that also appear on the page. Verification vendors “have sentiment analysis tools, but they don’t have support from the publishers to provide a consistent framework of metadata to do the best possible sentiment analysis that they can,” said Yale Cohen, evp of digital investment and standards at Publicis Media Exchange.

Given all the information necessary to provide the most accurate sentiment analysis, applying sentiment analysis to advertisers’ keyword-based blocking at scale remains “a ways off because it takes a lot of content categorization,” said Barone.

However, publishers would be incentivized to cooperate with that content categorization, if only to ensure that they are not being unfairly evaluated for brand safety. “What we’re looking for would be a common categorization engine because we want to treat all publishers equally,” Barone said.
