What to know about Germany’s fake-news crackdown
Last week, Germany formally proposed a law to fine social networks up to €50 million ($54 million) if they fail to remove harmful fake news or defamatory content — what it’s calling “criminal content” — from their platforms within 24 hours.
Heiko Maas, the federal minister for justice and consumer protection, specified that criminal content includes defamation, slander, threats and criminal misinformation. The definition of unlawful content is deliberately broad and could cover everything from infringing intellectual property to altering facts to promote racist populism. As part of the proposal, the platforms would also have to publish quarterly status reports detailing how they handled complaints, how many they received and how their teams are staffed.
Here are the nuts and bolts of what to know about this proposal.
This builds on a current law
“This is an escalation of an existing framework,” said Eitan Jankelewitz, a partner at media specialist law firm Sheridans, explaining that under Europe’s current e-commerce directive, online platforms that host content aren’t expected to moderate it. Once platforms have knowledge of unlawful content — usually flagged by people who believe it infringes their copyright, or flagged as hate speech — they must “act expeditiously” on requests to take it down.
“’Expeditiously’ is open-ended; there’s not much certainty,” he said. “In this case, it is saying that ‘expeditiously’ is 24 hours.” Essentially, this proposal aims to make the social platforms act more quickly in responding to complaints.
Self-regulation hasn’t worked
“At the end of 2015, Google, Facebook and Twitter took part in a ministry task force,” said Philip Scholz, a spokesperson from the ministry of justice and consumer protection. “They undertook voluntary commitment to delete criminal content from their platforms within 24 hours.” But, Scholz tells Digiday, there hasn’t been enough evidence that the platforms have dealt with user-submitted complaints of hate crime quickly or effectively enough.
Government-funded research found that Facebook deleted 39 percent of hateful posts in January. Between July and August last year, it deleted 46 percent. The report found YouTube removed 90 percent, while Twitter removed just 1 percent. The ministry is setting a target of 70 percent. “Therefore, it is now clear that we must increase the pressure on social networks,” wrote Maas in the proposal.
Germany is working on a tight deadline
Scholz is optimistic that the bill will be passed before September, when Germany holds its general election. For the next few months, stakeholders, including the social platforms, can comment on the proposal. If this isn’t passed by September, the process will begin again, potentially under new leadership. If this gets passed, the minister plans to take it to the European Commission to propose a pan-European law.
Platforms are mobilizing
It’s possible platforms will lobby against this becoming law. If it’s passed, they may have to staff up to respond to complaints more quickly. The most likely outcome is that once a complaint has been made, platforms will choose to remove the content quickly anyway, particularly since personal fines of up to €5 million ($5.4 million) may be issued to managers working within the social platforms. “I would expect the type of approach where platforms choose to lose some content rather than get clobbered with a fine of €50 million,” said Jankelewitz. “Generally, they will go with what the complainant says.”
Facebook claims that by the end of the year it will have 700 people in Berlin reviewing content, and it said it is looking into the legislative proposal. Currently, it relies on users to flag potentially harmful or inaccurate stories and works with the startup Correctiv to investigate claims, but it is appealing for more publishers in Germany to work with it. According to reports, Twitter has made recent changes to identify and limit abusive accounts, added extra filtering options, such as letting users exclude certain words or phrases, and introduced a “safe search” option that excludes potentially offensive material.