Germany’s controversial law requiring tech platforms to remove hateful content has turned up only a handful of offenders in its first six months. But some worry that is because platforms have gone overboard, blocking questionable content to avoid fines.
The Network Enforcement Act, or NetzDG, was passed before Germany held its federal election last September and took effect in January. It requires platforms like Facebook, Twitter and YouTube to take down clearly illegal content within 24 hours, or within seven days for less clear-cut cases, or risk fines of up to €50 million ($60 million).
Regulation of tech platforms can become a blunt tool for an issue that requires nuance. The law is vague on what constitutes hateful content, leaving platforms to make the final call. And with the risk of big fines, tech giants have reason to be more zealous with content takedowns.
“It’s like taking a sledgehammer to fix a wristwatch rather than a surgical instrument,” said Scott Vernick, a partner at law firm Fox Rothschild in the U.S.
Germany’s Federal Office of Justice said that by May it had received fewer than 400 complaints about content that platforms needed to remove, far below its estimate of 25,000 complaints and 500 fine proceedings a year.
The law requires companies to publish transparency reports every six months detailing how many complaints they received and how the law has affected them; the first are due July 31. Until then, the low complaint count has fueled speculation that platforms have been overzealous in blocking content, potentially impinging on free speech.
“We are seeing the effects of NetzDG, particularly as the larger platforms have started blocking content,” said Felix Hilgert, a senior associate at German law firm Osborne Clarke.
Politicians on both the right and the left oppose NetzDG. Germany’s left-leaning Alliance 90/The Greens party said the low number is evidence that users believe a federal authority shouldn’t be in the role of judging speech. How easy platforms make it to report hate speech will also affect the number of complaints.
The law has raised other concerns. Politicians from the far-right Alternative for Germany (AfD) have had content temporarily blocked for inciting racial hatred, and Germany’s satirical magazine Titanic had its Twitter account suspended for two days for parodying AfD’s tweets. Some worry the law will give more exposure to the hate speech it was designed to suppress, and that deleting questionable content destroys evidence that might be needed if a case goes to court.
“The law is hard to implement,” Hilgert said. “I’m not aware of any case law yet that would help provide guidance to social networks on where to draw the line on certain issues. The practical difficulties were wholly foreseeable.”
The platforms have been staffing up to weed out hate speech. Google said it’s on track to meet its commitment to add 10,000 people globally to fight hate speech, including some NetzDG reviewers based in Hamburg. The company has been using a combination of human reviewers and machine learning to flag violent and extremist content. Since December, it has been gradually training machines to recognize hate speech automatically, a task complicated by the fact that videos promoting terrorism tend to follow a formula, while identifying hate speech depends on context, slang and intent.
Globally, Facebook is doubling the size of its safety, security and content review teams from 10,000 to more than 20,000 over the course of this year, with 1,500 people based in Germany reviewing content on behalf of the platform. We asked Twitter for comment; we’ll update this story when it responds.
Germany’s anti-hate speech law will draw more scrutiny after U.K. media watchdog Ofcom last week backed calls for greater governmental regulation of online platforms. Platforms have avoided the regulation applied to broadcasters and newspapers because they claim they are not strictly publishers, and Ofcom has been cautious about regulation for fear of wading into the tangled free-speech debate. Ofcom plans to set out fuller ideas about regulation in the fall, but as Germany’s example shows, parliament will need to balance freedom of expression against fighting content that is abhorrent to the ideals of a free and democratic society, said Vernick.
“One has to be realistic with the time frame for investigating posts,” he said. “The law shouldn’t extend beyond the nation state’s border.”