Last week, Germany formally proposed a law to fine social networks up to €50 million ($54 million) if they fail to remove harmful fake news or defamatory content — what it’s calling “criminal content” — from their platforms within 24 hours.
Heiko Maas, the federal minister for justice and consumer protection, specified that criminal content includes defamation, slander, threats and criminal misinformation. Unlawful content covers a deliberately broad spectrum, from infringing intellectual property to distorting facts to promote racist populism. As part of the proposal, the platforms would also have to publish quarterly status reports detailing how they handled complaints, how many they received and how their teams are staffed.
Here are the nuts and bolts of what to know about this proposal.
This builds on a current law
“This is an escalation of an existing framework,” said Eitan Jankelewitz, a partner at media specialist law firm Sheridans, explaining that as part of the current e-commerce directive in Europe, online platforms that host content aren’t expected to moderate it. Once platforms have knowledge of unlawful content, usually flagged by people who believe it is infringing on their copyright, or flagged as hate speech, they must “act expeditiously” to requests to take it down.
“’Expeditiously’ is open-ended; there’s not much certainty,” he said. “In this case, it is saying that ‘expeditiously’ is 24 hours.” Essentially, this proposal aims to make the social platforms act more quickly in responding to complaints.
Self-regulation hasn’t worked
“At the end of 2015, Google, Facebook and Twitter took part in a ministry task force,” said Philip Scholz, a spokesperson for the ministry of justice and consumer protection. “They undertook a voluntary commitment to delete criminal content from their platforms within 24 hours.” But, Scholz told Digiday, there hasn’t been enough evidence that the platforms have dealt with user-submitted complaints of hate crime quickly or effectively enough.
Government-funded research found that Facebook deleted 39 percent of hateful posts in January. Between July and August last year, it deleted 46 percent. The report found YouTube removed 90 percent, while Twitter removed just 1 percent. The ministry is setting a target of 70 percent. “Therefore, it is now clear that we must increase the pressure on social networks,” wrote Maas in the proposal.
Germany is working on a tight deadline
Scholz is optimistic that the bill will be passed before September, when Germany holds its general election. For the next few months, stakeholders, including the social platforms, can comment on the proposal. If this isn’t passed by September, the process will begin again, potentially under new leadership. If this gets passed, the minister plans to take it to the European Commission to propose a pan-European law.
Platforms are mobilizing
It’s possible platforms will lobby against this becoming law. If it’s passed, they may have to staff up to respond to complaints more quickly. The most likely outcome is that once a complaint has been made, platforms will choose to remove the content quickly anyway. “I would expect the type of approach where platforms choose to lose some content rather than get clobbered with a fine of €50 million,” said Jankelewitz. “Generally, they will go with what the complainant says.” The proposal also allows personal fines of up to €5 million ($5.4 million) to be issued to managers working within the social platforms.
Facebook claims that by the end of the year it will have 700 people in Berlin reviewing content, and it said it is looking into the legislative proposal. Currently, it is relying on users to flag potentially harmful or inaccurate stories and is working with the startup Correctiv to investigate claims, but it is appealing for more publishers in Germany to work with it. According to reports, Twitter has made recent changes to identify and limit abusive accounts, add extra filtering options such as letting users exclude certain words or phrases, and provide a “safe search” option that excludes potentially offensive material.