AI Briefing: How state governments and businesses are addressing AI deepfakes
With two months left before the U.S. presidential election, state and federal officials are looking for more ways to address the risks of disinformation from AI and other sources.
Last week, the California Assembly approved legislation to improve transparency and accountability with new rules for AI-generated content, including access to detection tools and new disclosure requirements. If signed, the California AI Transparency Act wouldn’t take effect until 2026, but it’s the latest in a range of efforts by states to address the risks of AI-generated content creation and distribution.
“It is crucial that consumers have the right to know if a product has been generated by AI,” California state senator Josh Becker, the bill’s sponsor, said in a statement. “In my discussions with experts, it became increasingly clear that the ability to distribute high-quality content made by generative AI creates concerns about its potential misuse. AI-generated images, audio and video could be used for spreading political misinformation and creating deepfakes.”
More than a dozen states have now passed laws regulating the use of AI in political ads, with at least a dozen more bills pending in other states. Some, including New York, Florida and Wisconsin, require political ads to include disclosures if they’re made with AI. Others, such as Minnesota, Arizona and Washington, require AI disclaimers within a certain window before an election. Still others, including Alabama and Texas, have broader bans on deceptive political messages regardless of whether AI is used.
Some states have teams in place to detect and address misinformation from AI and other sources. In Washington state, the secretary of state’s office has a team that scans social media for misinformation, according to secretary of state Steve Hobbs. The state has also launched a major marketing campaign to educate people on how elections work and where to find trustworthy information.
In an August interview with Digiday, Hobbs said the campaign will include information about deepfakes and other AI-generated misinformation to help people understand the risks. He said his office is also working with outside partners like the startup Logically to track false narratives and address them before they hit critical mass.
“When you’re dealing with a nation state that has all those resources, it’s going to look convincing, really convincing,” Hobbs said. “Don’t be Putin’s bot. That’s what ends up happening. You get a message, you share it. Guess what? You’re Putin’s bot.”
After X’s Grok AI chatbot shared false election information with millions of users, Hobbs and four other secretaries of state sent an open letter to Elon Musk last month asking for immediate changes. They also asked X to have Grok direct users to the nonpartisan election information site CanIVote.org, a change OpenAI has already made for ChatGPT.
AI deepfakes seem to be on the rise globally. Cases in Japan doubled in the first quarter of 2024, according to Nikkei, with scams ranging from text-based phishing emails to social media videos showing doctored broadcast footage. Meanwhile, the British analytics firm Elliptic found examples of politically related AI-generated scams targeting crypto users.
New AI tools for IDing deepfakes
Cybersecurity firms have also rolled out new tools to help consumers and businesses better detect AI-generated content. One is from Pindrop, which helped detect the AI-generated robocalls that mimicked President Joe Biden’s voice during the New Hampshire primaries. Pindrop’s Pulse Inspect, released in mid-August, lets users upload audio files to determine whether they contain synthetic audio and where in a file it appears.
Early adopters of Pulse Inspect include YouMail, a visual voicemail and robocall-blocking service; TrueMedia, a nonpartisan nonprofit focused on fighting AI disinformation; and the AI audio creation platform Respeecher.
Other new tools include one from Attestiv, which last month released a free version of its deepfake detection tool for consumers and businesses. Another comes from McAfee, which last week announced a partnership with Lenovo to integrate McAfee’s Deepfake Detector tool into Lenovo’s new AI PCs using Microsoft’s Copilot platform.
According to McAfee CTO Steve Grobman, the tool analyzes video and audio content in real time across most major platforms, including YouTube, X, Facebook and Instagram. The goal is to “help a user hear things that might be difficult for them to hear,” Grobman told Digiday, adding that the tool is especially important as consumers worry about disinformation during the political season.
“If the video is flagged, we’ll put up one of these little banners, ‘AI audio detected,’” Grobman said. “And if you click on that, you can get some more information. We’ll basically then show a graph of where in the video we started detecting the AI and we’ll show some statistics.”
Because clips are analyzed on the device rather than uploaded to the cloud, the tool improves speed, user privacy and bandwidth use, Grobman added. The software can also be updated as McAfee’s models improve and as AI-generated content evolves to evade detection. McAfee also debuted a new online resource called Smart AI Hub, which aims to educate people about AI misinformation while also collecting crowd-sourced examples of deepfakes.
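For readers curious about the mechanics, here is a minimal sketch of the general pattern Grobman describes: score short audio windows on the device and record where synthetic speech is first detected, so a banner and timeline graph can be shown. Everything here, including the `score_window` stub, the `WINDOW_SEC` window length and the `THRESHOLD` cutoff, is a hypothetical illustration, not McAfee’s actual implementation.

```python
# Hypothetical sketch of windowed, on-device synthetic-audio flagging.
# A real detector would replace score_window with a trained model.
from dataclasses import dataclass

WINDOW_SEC = 2.0   # length of each analysis window (assumed)
THRESHOLD = 0.8    # score above which a window is flagged (assumed)

@dataclass
class Flag:
    start_sec: float   # where in the clip the flagged window begins
    score: float       # model's estimated probability of synthetic audio

def score_window(samples: list[float]) -> float:
    """Placeholder for a deepfake classifier; returns P(synthetic).
    This stub just scores mean amplitude so the example runs end to end."""
    return min(1.0, sum(abs(s) for s in samples) / max(len(samples), 1))

def scan_audio(samples: list[float], sample_rate: int) -> list[Flag]:
    """Slide a fixed window over the audio and collect flagged regions."""
    window = int(WINDOW_SEC * sample_rate)
    flags = []
    for start in range(0, len(samples), window):
        score = score_window(samples[start:start + window])
        if score >= THRESHOLD:
            flags.append(Flag(start_sec=start / sample_rate, score=score))
    return flags  # e.g. [Flag(start_sec=4.0, score=0.93), ...]
```

Keeping the loop on-device, as the article notes, means no clip ever leaves the machine; only the per-window scores are needed to draw the “AI audio detected” banner and the detection timeline.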
In a McAfee consumer survey earlier this year about AI deepfake concerns, 56% of U.S. consumers said they worried about AI-facilitated scams, 43% cited the elections and 37% worried about AI undermining public trust in media.
Prompts and Products — AI news and announcements
- Google added new generative AI features for advertisers, including tools for shopping ads. Meanwhile, the research consultancy Authoritas found that Google’s AI Overviews feature is already affecting publishers’ search visibility.
- Meta said its Llama AI models have grown 10x since 2023, with total downloads nearing 350 million, including 20 million in the past month alone. Companies using Llama include AT&T, Spotify, Niantic, DoorDash and Shopify.
- Major publishers and platforms are opting out of Apple’s AI scraping efforts, according to Wired.
- Adobe released a new Workfront feature to help marketers plan campaigns.
- Yelp filed a new antitrust lawsuit against Google, claiming Google is using its own AI tools to extend its advantage.
- U.S. Rep. Jim Jordan subpoenaed the AI political ad startup Authentic, which happens to employ the daughter of the judge who oversaw Donald Trump’s hush money trial. The startup’s founder criticized Jordan’s move as an “abuse of power” promoting a “baseless right-wing conspiracy theory.”
- Apple and Nvidia are reportedly in talks to invest in OpenAI, which is reportedly raising more funding. Nvidia also reported its quarterly earnings last week, with advertising mentioned as one of the industries driving demand.
- Anthropic published a list of its system prompts for its Claude family of models with the goal of providing more transparency for users and researchers.
Q&A with Washington state secretary of state Steve Hobbs
In an August interview with Digiday, Washington state secretary of state Steve Hobbs spoke about a new campaign to promote voter trust. He also talked about some of the other efforts underway, including how the state is using AI to fight misinformation, why he wants to further regulate AI content, and the importance of voters knowing where to find accurate information to check facts. Here are some excerpts from the conversation.
How Washington is using AI to track misinformation
“We’re just informing people about the truth about elections. We also use Logically AI to find threats against election workers. So I’ve had a threat against me. We’ve turned over a potential foreign actor, a nation-state actor that was operating and spreading disinformation. So it’s a tool that we have to have. I know there’s criticism towards it, but my alternative is hiring 100 people to look at the internet or social media, or wait for the narrative to hit critical mass. And by then it’s too late.”
On regulating AI platforms and content
“Social media platforms need to be responsible. They need to know where their money’s come from. Who is this person giving me money to run this ad? And is this a deepfake? I don’t know if they’re going to find out. There’s a responsibility there. They really need to step up to play. I’m hoping the federal government will pass a bill to hold them accountable.”
Why voters should verify everything
“When it comes to social media and the news that you’re getting from social media, pause. Verify who this person is. Is it a bot? Is it a real person and the information that you’re getting is verifiable? Can you see it on other news sources? Is it backed up by other sources? [Americans] are target number one for nation state actors. They want you to take their information and immediately share it, immediately spread it… Don’t be a target.”
Other AI-related stories from across Digiday
- How IBM and the US Open are using Watsonx to create more AI-generated tennis content
- Media Briefing: The 2024 media glossary, pt. 2
- Boosted by sports interest and AI, Stagwell expands AR platform with first MLS partnership
- Amid layoffs and cost cutting, Time CEO Jessica Sibley is expecting a ‘very strong second half’
- Duolingo wants to make its green owl mascot ‘as famous as Pikachu’ with its first pop-up store