‘The status quo is not good enough’: UK tightens regulation around video sharing

European politicians have been tightening the screws on U.S. tech platforms on multiple fronts. Now, a Europe-wide directive is reportedly extending the same regulatory responsibilities that broadcasters face to video-sharing platforms, imposing fines or service restrictions on those that fail to comply.

Under the new Audiovisual Media Services directive, video-sharing and livestreaming platforms including Instagram, Facebook and YouTube face fines of either £250,000 ($301,000) or 5% of the company’s revenue, whichever is greater (the exact amount is still being decided), if they show harmful videos, including violence, child abuse and pornography.

Platforms will face investigation by U.K. media regulator Ofcom, which, as well as imposing fines, can suspend or restrict their services in the U.K. if they fail to comply with enforcement measures. That could mean being blocked from search engines or having senior management held personally liable, for instance. Here’s what to know about the directive.

What areas does the directive target?
A lot of the finer details are still under consultation, but early drafts from the Department for Digital, Culture, Media and Sport outline eight measures it expects video-sharing platforms to comply with, including more effective age-verification systems, reporting mechanisms and parental control systems. This will bring the platforms in line with some of the regulations that broadcasters already face.

Video-sharing platforms will also need to comply with the wider Online Harms White Paper, a broader legislative proposal published in April to hold companies more accountable for protecting individuals online. Ofcom will serve as interim regulator until a dedicated “online harms” regulator takes up the role.

What sort of impact will this have?
Platforms will likely have to be more proactive around content moderation, increasing the human and technical resources devoted to monitoring content on their platforms, according to agency sources. They will also need to share annual reports on their progress.

A similar law is already in force in Germany, where authorities can fine social platforms that fail to remove criminal content, which can include hate speech, defamation and fake news, within 24 hours of it being reported. Early signs have been encouraging, despite fears from free-speech activists.

While there’s a general concern that the government lacks an in-depth understanding of how digital algorithms work, industry experts say intervention is now inevitable. “There is the fear the government will come in with blunt tools,” said Jake Dubbins, co-chair of the Conscious Advertising Network, a coalition of over 70 organizations working against unethical practices in the ad industry, “but right now the status quo is not good enough.”

Broadcasters have long grumbled that tech platforms don’t have to follow the same regulations they do, even as the platforms pitch themselves to advertisers as “TV-like environments.” According to agency executives, the regulation will be seen as further mitigating risk and giving advertisers more confidence in investing in tech platforms.

“Over the last six to 12 months, 95% of the brands we deal with have worked through a conversation and set of parameters with their agencies around what they deem to be acceptable for brand safety and put mechanisms in place to safeguard against that,” said one ad agency executive at a holding group, who requested anonymity. “There’s no global definition of brand safety.”

While concerns around brand safety on the platforms are slowly abating, in some areas the issue is becoming more complicated.

“The size of issue in terms of adjacency [to inappropriate content] is getting more difficult with Facebook and Instagram as they move more to individual feeds,” said Kieley Taylor, managing partner, global head of social at GroupM. “It’s becoming murkier.” Here, third-party verification vendors can play a useful role.

How did we get here?
There have been a plethora of cases in which tech platforms have been accused of shirking responsibility for the spread of harmful content. In March, Facebook came under fire over footage of the mass shooting in Christchurch, New Zealand, which was viewed 4,000 times before being removed. In January, reports linked the suicide of teenager Molly Russell in part to self-harm content she had viewed on Instagram. According to advertising executives, the platforms have not gone far enough to self-regulate.

“The way to make the platforms change course the most quickly is regulatory pressures,” said Taylor. “Not doing something would impact their bottom line more swiftly than showing due diligence to remove bad actors.”

What happens next?
The next stage is thrashing out the finer details, such as stricter age-verification requirements, the time frame within which platforms must remove content before they become liable, and how service blocking would be imposed.

“There will be a conflict or friction around what data you are willing to give and how that impacts age verification, whether that’s a reputable third-party verification at scale,” said Dubbins.

While the directive won’t come into effect until September 2020, tech platforms, and the groups that represent them in the U.K. such as TechUK and the Internet Association, are consulting with the government to make sure the regulations are specific and fair.
