‘A perfect storm’: The Wall Street Journal has 21 people detecting ‘deepfakes’
To combat the growing threat of misinformation ahead of the 2020 U.S. general election, The Wall Street Journal has formed a committee to help reporters navigate fake content.
Last September, the publisher assigned 21 staffers from across its newsroom to the committee. Each of them is on call to answer reporters’ queries about whether a piece of content has been manipulated. The publisher has issued criteria to committee members that help them determine whether content is fake. After each query from a reporter, members write up a report detailing what they learned.
This is part of the Journal’s plan to root out so-called “deepfakes,” content that has been manipulated by artificial intelligence, which has been a growing concern for publishers like Reuters and The Washington Post over the last year.
Heightened political tensions, increasingly sophisticated technology and the speed at which fake content can spread have elevated the importance of spotting these fakes.
“We hear there will be a massive proliferation in technology in a couple of years which will coincide with the U.S. general election,” said Francesco Marconi, the Journal’s research and development chief. “That’s a perfect storm, technology is evolving so quickly, it’s an important inflection point. We want to be proactive.”
Like much of the internet’s underbelly, the term “deepfake” first reached Marconi via Reddit, after he saw an academic paper on manipulating a video of Barack Obama in 2018. This, combined with a surge of activity in universities and research centers around the growth of deepfakes, spurred the R&D team to form the WSJ Media Forensics committee, a joint effort between the Standards & Ethics team and R&D.
The Journal also invites academics and researchers to give regular talks on new detection technology. It has written newsroom guides and holds training sessions — Marconi estimates between 120 and 150 WSJ journalists have participated. It also monitors the different detection tools being developed by major tech companies and startups. For now, training is not mandatory. Although the technology and tools for spotting fakes are improving, standard journalistic processes, like thoroughly checking sources, still apply.
Understanding the scale of the problem is tricky, but there are some indications. Cybersecurity startup DeepTrace has detected over 8,000 fake pornographic videos on adult entertainment sites. Like a lot of new technology, AI-manipulated videos have roots in the porn industry. In 2018, Google searches for “deepfakes” were 1,000 times higher than in 2017. A study from the Massachusetts Institute of Technology in January into the makeup of academic papers about AI found that neural networks (the main approach used for these doctored videos) were mentioned in 25% of the papers, more than any other machine-learning method.
What makes it even trickier is that a lot of doctored videos are not malicious but made for comedic or satirical effect. The Salvador Dalí Museum in St. Petersburg, Florida, for instance, has used AI to bring the artist back to life to greet visitors.
“There are good intentions, but there will always be bad actors,” said Marconi. “We saw that the proliferation of manipulated media has negative consequences for journalism and society. We decided it was a threat to our news-gathering process.”
Due to newsroom policy, Marconi wasn’t able to share how many fake videos the Journal’s committee has come across or details of the process it has developed to spot them.
The next battlefield is audio. Marconi cited an example of manipulated audio of Donald Trump and Barack Obama speaking Mandarin. “It’s a different monster, and it’s scary; there’s no way the human ear can tell.”
As with many offshoots of AI, headlines have over-hyped the phenomenon, though Marconi attributes the hype more to artificial intelligence in general than to deepfakes specifically. “At least from the deep-fake standpoint, it’s better for there to be a lot of awareness than to miss it completely,” he said.
Headlines have bubbled up about a doctored video of the speaker of the U.S. House of Representatives, Nancy Pelosi. The video, which surfaced in May, tried to make the politician appear drunk. Facebook, keen to avoid editorial responsibility, was criticized for not removing the video, clips of which drew 2.5 million views, though the platform admitted an “execution mistake.” The video now seems to have disappeared. In response, fake videos of Mark Zuckerberg spread around the internet.
As deepfakes proliferate, the stance of Facebook and other platforms on balancing freedom of speech with restricting the spread of misinformation will again come under pressure. But avoiding censorship is an increasingly thorny issue, Marconi said: “How do you balance censorship with freedom of speech in the age where machines can create this content?”