Agencies weigh the pros and cons of generative AI as political advertising grows
It’s only a matter of time before generative AI content becomes a bigger part of ad campaigns.
With the political ad landscape expected to reach record spending levels in 2024 — more than $15 billion in political ad revenue, per Statista — agencies today need to consider the impact of generative AI on political advertising. Within that total, the upcoming presidential election is shaping up to be one of the most expensive races in U.S. history. More than a dozen Republican candidates are vying for the nomination — and digital ad spending alone had already reached $380,000 some 15 months ahead of the election, according to AdImpact.
Spending aside, agencies active in the political space have noted positive uses for AI, like using it to create content quickly or to maximize voter outreach, but they’re also concerned about the spread of misinformation and the lack of regulation around AI-produced content.
One thing is clear: All 10 agencies and companies Digiday spoke with believe AI will play a role in the political space in the future. It will come down to how it is used.
“Every so often there is an election cycle where someone gets a competitive advantage from using technology in a new way,” said Mike Treff, CEO of Code and Theory. “This time it’s AI’s turn.”
Here are some arguments for and against AI in political advertising — as well as some of the unknowns that the industry faces.
The good
The advantages of applying AI range from better personalization and access to real-time insights to more efficient campaign management and lower advertising costs. With how quickly tools are being developed, candidates and groups that can leverage AI effectively may gain an “upper hand,” said Mike Nellis, CEO of Authentic and founder of AI campaign tool Quiller.
“I’m really excited to see generative AI technology help us run more efficient campaigns,” Nellis said. “Our generative AI can develop fundraising emails in under 20 seconds.”
- Targeting can become more accurate and engaging, and voter outreach will expand with less investment in personnel. Larry Adams, founder of agency LVA, said AI can help brands make dynamic creative and segmentation decisions — and it “can help political campaigns reach specific demographics with tailored messaging, maximizing the effectiveness of their ads.”
- Real-time analysis of ad performance, issues and sentiments can help provide insights and data. “We have to know early if campaigns are being effective and make changes quickly,” Adams added. “AI can analyze social media and public sentiment to gauge which issues are resonating with voters.”
- Computational power and data can help optimize research, database segmentation and messaging. Code and Theory’s Treff said AI can boost the speed of data computation, and this could impact polling and messaging efficacy: “The candidates who leverage AI correctly will be able to be much faster and nimble.”
- These developments could also lower costs for campaigns and level the playing field for candidates. “AI-driven processes can lower the costs of creating and distributing content, including text, photos, videos,” said Nuno Andrade, chief innovation officer for Media Culture.
- Content creation, testing and ad generation can speed up and get automated. This could also help campaigns localize their content and produce it in multiple languages using AI image and text generators. “It’s easy to imagine a voice model being used to create messages about localized issues in a candidate’s own voice, being used to respond quickly to breaking news or events or creating personalized messages to supporters or target voters,” said Andrew La Fond, vp, executive director of media and connections at R/GA.
The bad
Generative AI can come with risks and dangers, especially as it is being used in new ways where there is little regulatory or industry guidance. As Mitchell West, director at ad intelligence company Vivvix CMAG, explained, “The caution around AI is that it could be used in political ads to create false images or false statements that could be used to attack a candidate.”
The July edition of the monthly Harris Poll, conducted by Stagwell and the Center for American Political Studies at Harvard, showed that 69% of voters believe presidential campaigns are already using AI in their advertising — and 81% think presidential campaigns should disclose their use of AI.
- There is ongoing discussion around the transparency and disclosure of using AI among social media creators, brands and agencies. Some say that influencers risk losing trust with their audience if there is no disclosure around using AI, and agencies believe it could affect brand safety and reputation if they are not transparent about it.
- Misinformation and deep fakes may spread rapidly. “It has happened in the past few cycles, and I don’t know that our current campaign regulations understand where this is going,” said Tyler Goldberg, director of political strategy at Stagwell’s Assembly. Problematic content can spread fast as well; Media Culture’s Andrade said deep-fake images and videos are sophisticated and “can create realistic fake footage of politicians and other public figures saying or doing things they never did.” Tools like ChatGPT that are built on large language models are also good at churning out large amounts of misinformation that can be targeted at specific audiences, Andrade added.
- Voters and consumers can lose trust in candidates and the process of democracy with the spread of misinformation, filter bubbles and other dangerous content. “New AI tools can force-multiply these efforts at lower costs and are particularly useful for nation-states to overwhelm a target audience with an avalanche of AI-driven deep fakes, which can destroy trust and further exacerbate the impact of echo chambers,” said Sarah Boutboul, intelligence analyst at risk intelligence platform Blackbird.AI. R/GA’s La Fond added: “It may cause many politically disengaged, inconsistent voters to feel disgusted by all sides and disheartened about the election process. Ultimately many may just sit out the election and try to tune out all the garbage.”
The unknown
- What if things get scary, and AI takes over? Treff warned that these tools can be taken to the extreme: “Big-Brother levels of misrepresentative content created to appease individuals, agnostic of a candidate’s actual positions. … If it’s taken all the way to one side of the spectrum, there will be no authenticity among the candidates.”
- It will become harder to distinguish AI-generated content and other harmful or fake material from authentic content as AI gets more sophisticated, said Andrade. “Seeing through the AI spin of many platforms and providers could prove to be even more challenging in the months ahead, especially if the current political ad-spend projections are realized,” added Stephen Magli, CEO at AI Digital.
- Regulation is mostly non-existent or lagging, so it will take time to create rules around the misuse of AI. Assembly’s Goldberg pointed out that campaign and election rules, like disclosure requirements, primarily apply to linear TV, radio and cable — not digital media. “The problem is technology is moving so fast, that even if those regulations were instituted 10 years ago, … digital advertising for politics was mainly about fundraising,” he said. “You weren’t talking about watching a 30-second video ad on your laptop or on your phone or whatever it may be.”