‘The status quo is not good enough’: UK tightens regulation around video sharing
European politicians have been tightening the screws on U.S. tech platforms on multiple fronts. Now, a Europe-wide directive is reportedly extending to video-sharing platforms the same regulatory responsibilities that broadcasters face, imposing fines or service restrictions on those that fail to comply.
Under the new Audiovisual Media Services directive, video-sharing and livestreaming platforms including Instagram, Facebook and YouTube can be fined £250,000 ($301,000) or 5% of the company’s revenue, whichever is greater (the exact amounts are still being decided), if they show harmful videos, including violence, child abuse and pornography.
Platforms will face investigation from U.K. media regulator Ofcom, which, as well as imposing fines, can suspend or restrict the tech platforms’ services in the U.K. if they fail to comply with enforcement measures. That could mean being blocked from search engines or having senior management held personally liable, for instance. Here’s what to know about the directive.
What areas does the directive target?
A lot of the finer details are still under consultation, but early drafts from the Department for Digital, Culture, Media and Sport outline eight measures it expects video-sharing platforms to comply with, including more effective age-verification systems, reporting mechanisms and parental control systems. This will bring the platforms in line with some of the regulations that broadcasters already face.
Video-sharing platforms will also need to comply with the wider Online Harms White Paper, a broader legislative proposal published in April to hold companies more accountable for protecting individuals online. Ofcom will act as interim regulator until a dedicated “online harms” regulator takes up the role.
What sort of impact will this have?
Platforms will likely have to be more proactive about content moderation, increasing the human and technological resources dedicated to monitoring content on their platforms, according to agency sources. They will also need to share annual reports on their progress.
A similar law has already been implemented in Germany, where social platforms can be fined if they do not remove criminal content, which can include hate speech, defamation and fake news, within 24 hours of it being reported. So far, early signs have been encouraging, despite fears from free-speech activists.
While there is a general concern that the government lacks an in-depth understanding of how digital algorithms work, industry experts say intervention is now inevitable. “There is the fear the government will come in with blunt tools,” said Jake Dubbins, co-chair of the Conscious Advertising Network, a coalition of over 70 organizations working against unethical practices in the ad industry, “but right now the status quo is not good enough.”
Broadcasters have long grumbled that tech platforms don’t have to follow the same regulations they do, even as those platforms pitch themselves to advertisers as “TV-like environments.” According to agency executives, this regulation will help mitigate risk further and give advertisers more confidence in investing in tech platforms.
“Over the last six to 12 months, 95% of the brands we deal with have worked through a conversation and set of parameters with their agencies around what they deem to be acceptable for brand safety and put mechanisms in place to safeguard against that,” said one ad agency executive at a holding group, who requested anonymity. “There’s no global definition of brand safety.”
While concerns around brand safety on platforms are slowly abating, the issue is becoming more complicated in some areas.
“The size of issue in terms of adjacency [to inappropriate content] is getting more difficult with Facebook and Instagram as they move more to individual feeds,” said Kieley Taylor, managing partner, global head of social at GroupM. “It’s becoming murkier.” Here, third-party verification vendors can play a useful role.
How did we get here?
There has been a plethora of cases in which tech platforms have been accused of shirking responsibility for the spread of harmful content. In March, Facebook got into hot water over footage of the mass shooting in Christchurch, New Zealand, which was viewed 4,000 times before being removed. In January, the suicide of teenager Molly Russell was linked in part to self-harm content she had viewed on Instagram. According to advertising executives, the platforms have not gone far enough to self-regulate.
“The way to make the platforms change course the most quickly is regulatory pressures,” said Taylor. “Not doing something would impact their bottom line more swiftly than showing due diligence to remove bad actors.”
What happens next?
The next stage is thrashing out the finer details: setting stricter age-verification requirements, deciding the time frame within which platforms must remove content to avoid liability and determining how service blocking would be imposed.
“There will be a conflict or friction around what data you are willing to give and how that impacts age verification, whether that’s a reputable third-party verification at scale,” said Dubbins.
While the directive won’t come into effect until September 2020, tech platforms and the groups that represent them in the U.K., such as TechUK and the Internet Association, are consulting with the government to make sure the regulations are specific and fair.