‘The status quo is not good enough’: UK tightens regulation around video sharing
European politicians have been tightening the screws on U.S. tech platforms on multiple fronts. Now, a Europe-wide directive is extending the same regulatory responsibilities that broadcasters face to video-sharing platforms, imposing fines or service restrictions on those that fail to comply.
Under the new Audio Visual Media Services directive, video-sharing and livestreaming platforms including Instagram, Facebook and YouTube could be fined £250,000 ($301,000) or 5% of the company’s revenue, whichever is greater (the exact amount is still being decided), if they show harmful videos, including violence, child abuse and pornography.
Platforms will face investigation by U.K. media regulator Ofcom, which, as well as imposing fines, can suspend or restrict the tech platforms’ services in the U.K. if they fail to comply with enforcement measures. That could mean, for instance, having the platform blocked from search engines or having senior management held personally liable. Here’s what to know about the directive.
What areas does the directive target?
A lot of the finer details are still under consultation, but early drafts from the Department for Digital, Culture, Media and Sport outline eight measures it expects video-sharing platforms to comply with, including more effective age-verification systems, reporting mechanisms and parental control systems. This will bring the platforms in line with some of the regulations that broadcasters already face.
Video-sharing platforms need to comply with the wider Online Harms White Paper, a broader scope of legislation revealed in April to hold companies more accountable for protecting individuals online. Ofcom will be an interim regulator until an “online harms” regulator takes up the role.
What sort of impact will this have?
Platforms will likely have to be more proactive around content moderation, increasing the human and technological resources devoted to monitoring content on their platforms, according to agency sources. They will also need to share annual reports on their progress.
A similar law has been implemented in Germany, where legislators can fine social platforms that do not remove criminal content — which can include hate speech, defamation and fake news — within 24 hours of it being reported. Early signs have been encouraging, despite fears from free-speech activists.
While there’s a general concern that the government lacks an in-depth understanding of how digital algorithms work, industry experts say intervention is now inevitable. “There is the fear the government will come in with blunt tools,” said Jake Dubbins, co-chair of the Conscious Advertising Network, a coalition of over 70 organizations working against unethical practices in the ad industry, “but right now the status quo is not good enough.”
Broadcasters have long grumbled that tech platforms don’t have to follow the same regulations they do, even as those platforms pitch themselves to advertisers as “TV-like environments.” According to agency executives, this regulation will be seen to mitigate risk further and give advertisers more confidence in investing in tech platforms.
“Over the last six to 12 months, 95% of the brands we deal with have worked through a conversation and set of parameters with their agencies around what they deem to be acceptable for brand safety and put mechanisms in place to safeguard against that,” said one ad agency executive at a holding group, who requested anonymity. “There’s no global definition of brand safety.”
While concerns around brand safety on platforms are slowly abating, there are areas where it’s getting more complicated too.
“The size of issue in terms of adjacency [to inappropriate content] is getting more difficult with Facebook and Instagram as they move more to individual feeds,” said Kieley Taylor, managing partner, global head of social at GroupM. “It’s becoming murkier.” Here, third-party verification vendors can play a useful role.
How did we get here?
There has been a plethora of cases in which tech platforms have been accused of shirking responsibility for the spread of harmful content. In March, Facebook came under fire after footage of the mass shooting in Christchurch, New Zealand, was viewed 4,000 times on the platform before being removed. In January, reports linked the suicide of teenager Molly Russell in part to self-harm content she had viewed on Instagram. According to advertising executives, the platforms have not gone far enough to self-regulate.
“The way to make the platforms change course the most quickly is regulatory pressures,” said Taylor. “Not doing something would impact their bottom line more swiftly than showing due diligence to remove bad actors.”
What happens next?
The next stage is thrashing out the finer details, such as stricter age-verification requirements, the time frame within which platforms are liable to remove content and how service blocking would be imposed.
“There will be a conflict or friction around what data you are willing to give and how that impacts age verification, whether that’s a reputable third-party verification at scale,” said Dubbins.
While the directive won’t come into effect until September 2020, tech platforms — and groups like TechUK and the Internet Association that represent them in the U.K. — are consulting with the government to make sure the regulations are specific and fair.