Advertisers seek assurances from Instagram after brand safety issue, but won’t pull ads

The prospect of ads appearing next to posts about suicide has made some advertisers wary of buying ads on Instagram. Facebook's brand safety problems are now Instagram's problems.

Marks & Spencer, The Post Office, Dune and the British Heart Foundation were named in a report by the BBC yesterday evening (Jan. 23) that found their ads had appeared against graphic content about suicide on the social network. The investigation came after it emerged that a teenager who had viewed posts about suicide on Instagram had taken her own life.

But rather than pull ads as they did in 2017, when the brand safety issue spilled over into the mainstream, advertisers are taking a different approach now. None of the advertisers mentioned in the report have pulled spend, according to separate statements from the businesses. The idea that online platforms monetize inappropriate content has become such a recurring issue over the last two years that the panic that previously engulfed these incidents has been replaced with pragmatism.

M&S hadn't made any changes to its media plan in the immediate aftermath of the report. It will, however, keep a closer eye on how Instagram rids itself of inappropriate content.

“Brand safety is an absolute priority for us and we have a clear set of Responsible Marketing principles that our partners must adhere to,” said a spokesperson for the high street retailer. “We are seeking additional assurances from Instagram that it has the robust procedures in place to moderate and remove inappropriate content.”

The Post Office took a similar stance. The advertiser would welcome working with social media platforms to tackle the issue, said a spokesperson.

“Our adverts appear on people’s feeds based on responsible and appropriate information, such as their location or age,” said the spokesperson. “As a company, we would never target our ads based on inappropriate or harmful content. Although we cannot know the kinds of other accounts that an individual user may be following, we are committed to doing the utmost so that our adverts only appear next to suitable and healthy content.”

Footwear seller Dune said it would work closely with Instagram to investigate the issue.

Carolan Davidge, director of marketing and engagement at the British Heart Foundation, expanded on the point: “The harmful content is totally unrelated to the BHF and we will be asking Instagram to act swiftly to prevent such content from being so easily accessible, shared and to protect people from viewing it.”

The concerns are echoed by the broader advertising industry.

ISBA, the trade body for advertisers in the U.K., reiterated its push for the formation of another industry body that would ensure media owners are tackling inappropriate content by certifying content policies and processes, auditing transparency reporting and providing an appeals process. “While we have seen some positive developments, the industry needs to go much further,” the trade body said in a statement. “We encourage Facebook and the wider industry to work together to avoid pushing this problem from one platform to another.”

Question marks have lingered over the quality of content on Instagram for some time. Child abuse videos were found by Business Insider on the social network’s IGTV long-form video service last year, for example. Despite these problems, Instagram’s ad business continues to swell rapidly, buoyed by its grip on younger social media users. Users under 35 make up more than 70 percent of Instagram’s more than 800 million active accounts worldwide, according to Hootsuite data published last September. There are few places advertisers can buy those audiences at that scale.

Posts on mental health and self-harm are more challenging to police than other forms of inappropriate content because they can sometimes help people in distress to know that others have been in similar situations and found help, said a spokesperson for Instagram. Similar to ads in Facebook’s news feed, ads on Instagram are targeted based on interests, not the content they appear next to. Other features let advertisers opt out of placements using category exclusions, publisher lists, delivery reports and block lists.

Brand safety has become a priority for online platforms ever since it was revealed in 2017 that Google had inadvertently monetized videos produced by terrorists. The PR fallout from these scandals can be just as damaging for those businesses as the potential hit to their commercial coffers. But the brand safety controls Facebook has developed have left advertisers underwhelmed, particularly as more video comes to the platform and the stakes rise.

“Any time you aren’t directly controlling where your ads are popping up, you are risking a serious brand safety crisis,” said Dan Deeks-Osburn, strategy director at digital agency Impero. “Today we’re talking about mental health issues, but it wasn’t long ago brands discovered they were unwittingly funding terrorists via YouTube’s automation options.”

