News publisher Reuters created its own manipulated video to train its journalists to spot fake content before it gets shared widely.
Over the course of a few days, Reuters and a specialist production company created a so-called “deepfake” video of a broadcaster reading a script in a studio. Reuters then shared the video with its user-generated content team of around 12 producers, asking whether they noticed anything odd about it. Several producers who knew the video had been altered spotted the manipulation, noticing a mismatch between the audio and the lip-synching, as well as inconsistencies: the reader looked as if she were lisping but didn’t sound like it, and she sat unusually still. Those who weren’t expecting an altered video noticed something was off in the audio but struggled to pinpoint what.
Deepfakes use artificial intelligence to anticipate facial movements and computer-generated imagery to swap faces and doctor a video. The specialist production company Reuters worked with estimates there are around 10,000 deepfakes online, but the majority of these are in the adult entertainment industry. Aside from a few well-known examples featuring Donald Trump, Barack Obama and Angela Merkel, everyday examples to study are hard to find, so Reuters created its own.
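To make that more concrete, below is a minimal sketch of one building block that face-swapping pipelines typically rely on: per-frame facial landmark detection. It assumes the open-source face_recognition and opencv-python packages and a hypothetical input file; it is an illustrative toolchain, not the one Reuters or its production partner used.

```python
# Minimal sketch: detect facial landmarks frame by frame, the kind of signal
# face-swapping pipelines track. Assumes face_recognition and opencv-python;
# illustrative only, not Reuters' or its partner's actual tooling.
import cv2
import face_recognition

video = cv2.VideoCapture("broadcast_clip.mp4")  # hypothetical input file

frame_index = 0
while True:
    ok, frame_bgr = video.read()
    if not ok:
        break
    # face_recognition expects RGB images; OpenCV reads BGR
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # Returns one dict per detected face, mapping features ("chin",
    # "top_lip", "bottom_lip", ...) to lists of (x, y) points
    for landmarks in face_recognition.face_landmarks(frame_rgb):
        mouth_points = landmarks["top_lip"] + landmarks["bottom_lip"]
        print(frame_index, "mouth landmark points:", len(mouth_points))
    frame_index += 1

video.release()
```

Mouth landmarks matter here because mismatched lip movement, as the Reuters producers noticed, is one of the more visible artifacts of a face swap.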
According to Google Trends data, people started searching for the term deepfake in December 2017, and search interest peaked in February 2018, although some reports say the scale of the threat is overhyped.
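That search-interest curve can be reproduced from a script. Google Trends has no official public API, so the minimal sketch below leans on the unofficial pytrends package; the timeframe and keyword are taken from the article, everything else is an assumption for illustration.

```python
# Minimal sketch: pull the Google Trends curve for "deepfake" via the
# unofficial pytrends package and find the week of peak search interest.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(kw_list=["deepfake"], timeframe="2017-10-01 2018-06-01")

interest = pytrends.interest_over_time()   # pandas DataFrame, one row per week
print(interest["deepfake"].idxmax())       # week with peak search interest
```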
“There’s not a slew of deepfakes on my desk, but I don’t want to wait till there are,” said Hazel Baker, Reuters head of user-generated content news-gathering. “There have been some fear-mongering headlines, but there is a genuine threat. Every day, we see attempts to share fake videos. There’s a sliding scale of deceit; the intent is important.”
In the last two years, Reuters has doubled the number of people who work on verifying video content from six to 12. According to Baker, the global team verifies around 80 videos a week. How long it takes to verify video content varies; the team only spends time verifying content that it believes to be true.
During the spike in tensions between India and Pakistan last month, Reuters found around 30 videos that turned out to be fake, said Baker, mostly old videos posted with new captions and shared again, like jet flying displays from 2014 recaptioned as threats of attack. Other fake videos include clips from press conferences of Indian Prime Minister Narendra Modi cut so that it looks like he’s answering different questions.
All content Reuters publishes is verified by humans, but it uses technology to help, like cross-referencing locations on Google Maps or reverse image searching. For instance, while looking for eyewitness videos following the mass shooting in Christchurch, New Zealand, the team came across a video said to show the moment a suspect was shot dead by police. It ran the keyframes through a reverse image search and quickly discovered the footage came from an entirely different incident in Florida last year; the suspect in the Christchurch shooting was not killed. “It’s not enough to debunk it, but you have to authenticate it too,” said Baker.
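The reverse search itself runs through external tools, but pulling keyframes and fingerprinting them can be scripted. The sketch below, assuming the opencv-python, Pillow and imagehash packages (Reuters has not disclosed its actual tooling, and the filenames are hypothetical), samples frames from a clip and computes perceptual hashes so they can be matched against frames from suspected older footage.

```python
# Minimal sketch: sample keyframes from a clip and fingerprint them with
# perceptual hashes, so near-duplicate frames from recycled footage can be
# matched. Assumes opencv-python, Pillow and imagehash; illustrative only.
import cv2
import imagehash
from PIL import Image

def keyframe_hashes(path: str, every_n_frames: int = 30):
    """Return (frame_number, perceptual_hash) pairs for sampled frames."""
    video = cv2.VideoCapture(path)
    hashes = []
    index = 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append((index, imagehash.phash(Image.fromarray(rgb))))
        index += 1
    video.release()
    return hashes

# Small Hamming distances between hashes from the viral clip and from archive
# footage suggest the "new" video is recycled from an older incident.
suspect = keyframe_hashes("viral_clip.mp4")        # hypothetical filenames
archive = keyframe_hashes("archive_footage.mp4")
closest = min(abs(h1 - h2) for _, h1 in suspect for _, h2 in archive)
print("closest hash distance:", closest)
```

Hashing keyframes rather than whole files is the point: recycled footage is usually recompressed, cropped or recaptioned, so exact file matches fail where perceptual near-matches still hold.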
“For a breaking news event, often the first camera on the scene isn’t a professional one,” said Baker. “We can’t afford to ignore this material; it’s important for clients.”