AI Briefing: How AI misinformation affects consumer thoughts on elections and brands
For nearly a decade, brand safety has been the ad world’s white whale — constantly evading the harpoon of those looking to steer clear of dangerous or salacious content. But the proliferation of generative AI has conjured up an even scarier kind of monster: a multi-headed hydra.
The fight is already on. To boost its efforts around brand safety, IPG Mediabrands is adding more tools for identifying harmful content while also helping advertisers avoid appearing near it. One way is through an expanded partnership with Zefr, a brand-safety startup that tracks content across Facebook, Instagram and TikTok. Along with new ways to pre-block high-risk social content, the companies are creating custom dashboards to help advertisers avoid user-generated content in sensitive categories across text, images, video and audio. Sensitive categories include AI-generated content and misinformation related to U.S. politics, climate denialism, health care and brand-specific content.
“We already have a lot of tools in the programmatic space to help manage misinformation, manage brand safety [and] suitability, but there has always been a void when it comes to UGC in walled gardens,” said Ruowen Liscio, vice president of global commerce and innovation partnerships at Kinesso.
By targeting harmful content, the companies hope to not just help advertisers but also cut off ad funding for such content. According to Zefr chief commercial officer Andrew Serby, misinformation-related content from AI and other sources stays on platforms because it’s funded by ad dollars. But combatting that funding first requires identifying the misinformation and its sources at scale.
To understand consumer perceptions about misinformation — and the ads that appear alongside it — IPG’s Magna conducted research on how people view harmful content and how it affects their perceptions of brands and platforms. Only 36% of respondents to a survey featured in the research thought it was appropriate for brands to appear next to AI-generated content. Ads that appeared next to misinformation were also seen as less trustworthy, and brand perception suffered even when people weren’t sure whether content was real.
Although political content was easiest for survey participants to identify, only 44% correctly identified the fake political content, 15% were incorrect and the rest were unsure. AI-generated content — including images of U.S. presidents playing Pokémon and Pope Francis wearing Balenciaga — fooled 23% of respondents and left 41% unsure. Meanwhile, 33% of respondents incorrectly identified misinformation about climate change and 25% were wrong about healthcare-related misinformation.
“What was most important for us that came out of the research is just the ability to understand the quantified impact of what happens when brands appear next to misinformation,” said Kara Manatt, evp of intelligence solutions at Magna.
Companies in the business of AI-generated content are also researching consumer sentiment. According to a new report from Adobe, 80% of U.S. adults think misinformation and harmful deepfakes will affect upcoming elections. The survey also found that 78% of respondents thought election candidates shouldn’t be allowed to use AI-generated content in campaigns, while 83% thought the government and tech companies should work together to address problems with AI-generated misinformation. The survey results, released last week, include answers from 6,000 people in the U.S. and several European countries.
The findings come amid debates about whether tech companies should be liable for information on their platforms. The U.S. Supreme Court is also considering legal battles over online content, including whether government officials should be allowed to communicate with tech companies about disinformation on various platforms. Meanwhile, Rest of World, a global media nonprofit, also published a new website for tracking election-related AI content across major platforms in nearly a dozen countries.
Concerns exist across numerous online platforms, including X. Even as DoubleVerify claimed the platform formerly known as Twitter was 99% brand-safe, a report from ISD found dozens of AI-generated misinformation images that were posted by verified accounts within hours of Iran’s drone strike on Israel and viewed 37 million times.
Adobe’s report helps illustrate the importance of people and companies having tools to identify what’s true and what’s not. Misinformation fueled by generative AI is “one of the most critical threats facing us as a society,” said Andy Parsons, senior director of the Content Authenticity Initiative at Adobe. In an interview last week, Parsons told Digiday that it’s important that people continue to trust verified news sources and don’t begin to question everything when the lines between truth and fiction become too blurred.
“There’s this liar’s dividend, which is once you can question anything and nothing can actually be believed to be true,” Parsons said. “Then how do you even verify that news is news or that you’re not seeing somebody else’s worldview? Or that you’re not being duped with even social media content [even if] it’s from a news source. And then what is the news source if you can’t believe anything you see because it may have been manipulated?”
In other words, there are as many questions as the hydra has heads — if not more.
Prompts and Products: Other AI news last week:
- Along with debuting its Llama 3 model, Meta rolled out several enhancements for its Meta AI chatbot across apps including Facebook and Instagram, as well as a new website for the ChatGPT rival.
- With a new “answer engine,” the Brave browser added another generative AI tool for search.
- Snap announced it will start watermarking AI content and updated other parts of its safety/transparency policy.
- Google announced new generative AI image tools for demand-gen campaigns.
- A new bill called the California Artificial Intelligence Transparency Act (CAITA) passed a state Senate committee.
- Digitas announced Digitas AI, a new generative AI platform built around large language models.
- Stability AI debuted its new Stable Diffusion 3 model via API, with new capabilities meant to compete with platforms like Midjourney.
- A24 received criticism for using AI-generated images in ads for its new “Civil War” film.
Other AI-related news from across Digiday
- ‘Beginning to be the practical’: GE global CMO Linda Boff on the evolution of AI in marketing
- AI takes center stage at Possible conference
- VaynerMedia CEO Vaynerchuk: Media, creative agencies must reunite to create ‘common sense’ marketing solutions
- Q1 ad rundown: there’s cautious optimism amid impending changes
- How influencer agencies are adapting to TikTok’s SEO incentives