The ad industry’s plan to define what counts as AI


When Caroline Giegerich talks to marketers about AI-generated video, the discussion always seems to orbit the same point: risk. Not the dystopian kind — the reputational kind.

Giegerich, the IAB’s vp of AI, recalled a moment that drove the point home. A friend at The Weather Company recently showed her an image of an RV created by an AI tool and asked what she thought. It looked fine. Then they pointed out the glitch: the RV had two HVAC units on its roof, something that wouldn’t be possible on an RV of that size.

“It was AI-generated,” she said. “I would’ve never known.”

The example was harmless. But it neatly captured the anxiety running through marketing departments right now: where’s the boundary between automation that helps and automation that hurts? A made-up RV is one thing. An AI-generated image that shows a soda brand as the cause of rotting teeth is another. Those are the kinds of errors that can erode years of brand building in an instant.

These concerns aren’t new. But the arrival of OpenAI’s Sora app, which can turn simple text prompts into photorealistic, shareable videos, makes them impossible to ignore. The speed and scale of AI content creation have forced marketers to think harder about checks and balances — what gets labeled, who verifies it and how it gets disclosed.

At the IAB, Giegerich and her team are trying to bring some order to that chaos. They’re drafting AI transparency and disclosure guidelines centered on consumer trust for ads across channels.

The goal isn’t to label everything touched by AI, Giegerich said, but to flag when its use could genuinely mislead audiences. Over-labeling risks desensitizing consumers, which could subsequently dilute trust instead of strengthening it. 

To strike that balance, the IAB has convened a working group of brands, agencies and platforms to establish a shared baseline for disclosure: a set of principles defining when and how AI use should be surfaced. Each major social video platform — Meta, TikTok and YouTube — already has its own labeling policies, but detection is inconsistent unless that data is embedded in the creative itself.

The IAB’s aim is to make transparency portable and consistent across the ecosystem, creating a shared system of accountability before regulators step in to do it for them. In Giegerich’s view, that’s the only way to make trust scalable in a world where AI can make anything look real(ish).

“If we don’t align now, we’ll end up with 20 different versions of transparency, and none of them will mean anything,” she said.

The Media Rating Council (MRC) has also set its sights on creating a comprehensive, standalone AI standard, rather than adding AI updates to each of its current standards, with the aim of having it in market by this time next year.

While the full scope and details have yet to be fleshed out, the MRC’s senior vice president of digital research and standards and associate director, Ron Pinelli, said the AI standard will likely include:

  • Updates for measurement of delivery and attribution with zero-click search and agentic AI
  • IVT [invalid traffic] considerations related to AI user agents, IP address masking and human pattern simulation
  • Brand safety and IVT considerations related to generated content
  • Requirements for transparency regarding AI buying agents in auction systems
  • Content provenance measurement, labeling and reporting
  • Requirements for training, human intervention, data quality, monitoring and disclosure regarding model and AI use for identity, IVT and brand safety detection

“We expect it to be rigorous (and independent audits are always required), but not necessarily an incremental lengthy process,” Pinelli added. “It will be less about a new accreditation/audit type, but more of enhanced requirements for AI use within existing and future audits.” 

The World Federation of Advertisers (WFA) has taken a different approach: less focus on standards, more focus on forums.

“We’ve issued white papers, or guidelines or best practice, but what we don’t have in the plans is to build out any industry framework,” said WFA’s head of policy, Gabrielle Robitaille. “It’s less about setting standards and more about surfacing: this is currently best practice. As AI evolves, we’ll continue to shape and evolve, and provide guidance in terms of how brands are thinking about it.”

The WFA has a members-only AI community for client-side brands, made up of more than 900 senior marketers, from marketing VPs and global media leads to insights directors and procurement execs. There’s also an AI steer team of 10 brands that are further along in their AI adoption (think L’Oreal, Unilever and Diageo-type advertisers), which meets monthly to provide a pulse check on priorities and concerns. Then there’s a quarterly community session that convenes in person and virtually in town hall-style settings.

“Lots of brands were setting up their own AI governance forums internally, bringing together those different functions,” Robitaille said. “We wanted to replicate that as a forum for brands to share opportunities they’ve identified, and how to navigate challenges.”

Over time, these conversations will shape an industry position on AI-generated video. The unofficial stance, though, is already taking shape inside agencies where teams are steering ad dollars away from the growing sprawl of AI-made “slop” that no brand wants to be caught next to.

“Our guidance to advertisers has been, ‘we’ll keep an eye on it and create [exclusion lists] where we can,’” David Dweck, general manager of media agency Go Fish Digital, told Digiday.

Because using AI to speed up a workflow is one thing. Using it to become the workflow — like the rise of “faceless creators” churning out endless content behind the scenes — is another. 

“The whole concept, the whole thesis that people trust people, goes away when you bring AI into it,” said Karen Ram, vp, social content and strategy at Canvas.

There are some marketers happy to put paid spend behind an all-AI creator, however. 

“We’ve worked with faceless channels, we’ve worked with virtual creators, and we’ve worked with creators who leverage AI in content creation,” said Scott Sutton, CEO of Later. Faceless AI creators are “100%” on the table, he added, though they’re better thought of as basement-tier influencers than as peers of TikTokers or YouTubers.

“I see the faceless creator channel working best for bottom-of-funnel campaigns where the goal is sales and product discovery more than it is brand awareness,” he said.

While organizations like the MRC work to formalize industry standards, agencies are relying on their own creator vetting systems to see them through. Worries about brands’ ability to screen creators, especially as advertisers spend larger amounts on larger numbers of influencers, have already led several agencies to build or buy such systems.

New Engen, for example, has begun working with tech firm DoDilly to provide brand safety monitoring on influencer content. Creators’ use of AI, however, is a matter that requires human oversight, said New Engen CEO Justin Hayashi.

In the absence of industry-wide guidance, practitioners are left relying on knowing an AI creator has crossed the line when they see it. “This is the programmatic world all over again,” concluded Dweck.
