YouTube’s bad-content problem hasn’t gone away, but as the heat turns up on Facebook for data breaches, the spread of misinformation and Russian meddling in the 2016 U.S. presidential election, some say the Google-owned video platform has been overlooked.
YouTube’s problem with offensive content persists even after last year’s “adpocalypse,” which revealed ads running against hateful messages and lewd videos aimed at children, as well as pedophilic comments posted beneath them. The company said it would hire 10,000 human moderators and overhaul its approach to advertising, and most of the advertisers who had fled came back within a few months, according to ad-tracking firms. But as CNN reported just last week, ads are still popping up on controversial channels promoting Nazism, pedophilia, conspiracy theories and the like.
“We think there are enormous issues with YouTube and privacy and Google more broadly,” said Josh Golin, executive director of the Campaign for a Commercial-Free Childhood, which earlier this month called on the Federal Trade Commission to investigate YouTube for illegally tracking kids. “The conversation has been so focused on Facebook and is an important conversation, but it’d be a mistake if we saw Facebook as one bad actor.”
Meanwhile, researchers have argued that YouTube is actually designed to serve up ever more extreme, incendiary and conspiratorial videos in a self-reinforcing process.
“There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs,” researcher Zeynep Tufekci wrote in a March New York Times op-ed.
Google declined to comment on the record.
Facebook may be the biggest social network by users, but another researcher, Jonathan Albright, has argued that YouTube plays an outsized role of its own in social media, sowing doubt and discord and undermining the democratic process by serving up conspiracy-theory videos to visitors.
YouTube has had its share of tough coverage. The Times ran front-page stories on how the platform was letting offensive videos appear in front of kids and boosting the Russian state-backed news channel RT. But when the Cambridge Analytica story blew up, it eclipsed everything. In March, the public learned that the Trump-linked research firm had harvested data from as many as 87 million Facebook users without their permission, threatening the democratic process itself.
“Privacy is a hard issue to understand; it’s not particularly sexy,” Golin said. “Cambridge Analytica and Russians and the election makes it more sexy.”
“It had everything — evil CEO, prostitutes, Russians — it’s out of a crazy movie,” a longtime digital media vet said. “Add on top of that the fact that, yeah, we report the story neutrally, but the fact is, Facebook has been pissing off journalists and media companies for a long time, and revenge is sweet.”
The biggest question Congress was asking was whether the Russians colluded with the Trump campaign in the 2016 election, and Facebook was known to have been central to the campaign’s digital effort. Evidence of the Cambridge Analytica data breach only fueled the fire. The idea of bad actors siphoning people’s data clashed jarringly with Facebook’s image as a place where people share baby pictures and other personal news with friends and family.
Facebook bungled public relations around the crisis, and its image suffered. A Reuters/Ipsos online poll conducted in March, days after the Cambridge Analytica story was widely reported, found that 41 percent of Americans trust Facebook to obey laws that protect their personal information, compared with 66 percent who said they trust Amazon, 62 percent who trust Google and 60 percent who trust Microsoft Corp. In a separate March survey of Americans ages 18-29 by the Harvard Kennedy School, 27 percent said they trust Facebook and Twitter to do the right thing, versus 44 percent who said the same of Google.
YouTube, by contrast, is more safely ensconced in Google, which is doing everything right from a PR perspective. Google has a better reputation with news organizations, nurtured by years of grant-making and technical assistance.
Google was among the tech giants hauled in front of Congress last fall to answer questions about Russian meddling on their platforms, but it insisted that Russian manipulation of its services was “relatively small.” Google also spent $18 million on lobbying last year — more than Facebook’s $11.5 million and more than any other tech giant. YouTube has “largely been left off the hook” by Congress and the FTC, said Jeff Chester, executive director of the Center for Digital Democracy, which advocates for consumers on digital privacy and consumer protection issues. “They’ve been able to get away with it because they have smooth PR and an extremely complicated business model,” he said.
“Google is better at lobbying both in D.C. and the press and publishers,” said Jason Kint, CEO of Digital Content Next, a trade association for publishers. “Despite having different free services, Google and Facebook have the same underlying business model, which has led to many of our industry’s woes. Google, even more than Facebook, has propped up an ecosystem which hasn’t rewarded the companies who trade in high-quality news and entertainment for trusted audiences.”
Things might change, though. Now that the Facebook hearings are over, Congress is expected to turn its sights to Google and Twitter, and critics expect the groundswell of concern about privacy to renew attention on YouTube as a result.
“We got a huge amount of media attention and probably more because of the concern about Facebook,” said Golin of his organization’s FTC complaint. “We are having a conversation about privacy we weren’t having six months ago.”