Facebook has announced a number of measures in the last few weeks to tackle the spread of harmful content and posts that support terrorism on its platform. While the efforts are encouraging, industry insiders are questioning the amount of resources Facebook is investing in these initiatives and would like to see more collaboration between the platform and verification partners.

U.K. Prime Minister Theresa May has accused the social network, along with other internet-based services, of allowing terrorist ideology to spread in the wake of recent attacks in London and Manchester. To help combat this, Facebook has introduced new measures to identify harmful content, including using technology like image and text recognition. It also has a team of 150 people primarily focused on countering terrorism. Facebook has said it plans to grow its global Community Operations team by 3,000 over the next year.

“It feels like lipstick on a pig. Facebook has got a really ugly problem, but it is solving it by giving us something sexy, like artificial intelligence, to look at,” said Scott Gill, managing director at regional publisher cooperative, 1XL. “The fact is, they have 150 people to counter the behavior of a 2 billion strong user base. It’s folly.”

Gill points out that Facebook is a capitalist enterprise that exists to make money, but it doesn’t face the stringent regulations that the rest of the media industry in the U.K. does. “That number belies a totally woeful lack of legislation and regulation on them as a media platform,” he said, adding that “it’s incumbent on government to correct capitalism.”

The difficulty is that with user-generated content at its core, Facebook's ability to control that content is limited, and regulating user-generated content is problematic, said Richard Reeves, managing director at the Association of Online Publishers.

“I absolutely applaud Facebook for its continued efforts to counter terrorism,” he said. “No matter how much money and effort Facebook invests into counter-terrorism strategies, it can never guarantee it.”

In Germany, legislation is working its way through the legislative process that could result in fines for social networks of up to €50 million ($56 million) if they fail to remove harmful fake news or defamatory content within 24 hours. Ideally, this would require monitoring by third-party companies, which Facebook has been reluctant to allow.

Kevin Longhurst, head of trading and partnerships at IPG Mediabrands’ investment arm, MAGNA, welcomed Facebook’s efforts at restricting harmful content but said the platform could be more proactive in allowing third-party companies to help it deliver solutions, particularly when it comes to brands appearing next to potentially unsafe content. Allowing verification partners like Moat, DoubleVerify or Integral Ad Science more access to Facebook’s ecosystem could add another layer of security, preventing ads from being served beside salacious content. Similar tech integration could help prevent the spread of extremist content.

“When we question Facebook about whether they will allow these companies greater access to their ecosystem, we’re told there are privacy concerns and that giving companies visibility to users’ data could compromise Facebook’s relationship with the consumer,” said Longhurst. “That’s the challenge that needs to be overcome.”

“It’s important to note we’ll never get 100 percent brand safe on any platform; the key is for platforms to push as hard to get to that 99.9 percent,” he added.

Ultimately, he said, it’s the impact on the bottom line that will spur action. Last week, GroupM revised its U.K. ad-spend forecast for digital platforms including Facebook, predicting spend will grow by 11 percent to nearly £10.5 billion ($13.4 billion) this year, down from an earlier forecast of 15 percent growth. One key reason given was that brand-safety fears would force advertisers to make more prudent decisions about spending.
