Future of Marketing Briefing: Marketers confront a new kind of brand safety problem in AI video
This Future of Marketing Briefing covers the latest in marketing for Digiday+ members and is distributed over email every Friday at 10 a.m. ET.
The latest thing keeping marketers up at night about AI isn’t deepfakes or disinformation — it’s SpongeBob.
The cartoon character has become an unlikely mascot for the flood of AI-generated videos taking over feeds, blurring the lines between parody, harmless fun, copyright infringement and misinformation. The ensuing panic has made one thing clear: AI-generated videos, like news and user-generated content before them, exist on a sliding scale — from harmless to outright harmful — and people are watching them.
The real question is where marketers draw the line. Because while it’s fashionable to lump everything under the label of “AI slop,” marketers are learning it’s not that simple. Yes, AI slop exists, but treating the term as a blanket for all AI-generated content misses the point. One person’s slop to avoid is another’s must-watch.
In the end, it’s about discernment, knowing when automation becomes noise and when it can still serve the story.
“From the conversations we’re having with agencies and advertisers, the general view is that this is a hot topic that’s on their radar but they don’t have answers yet,” said Steven Filler, U.K. country manager at digital video ad company ShowHeroes. “It’s got to a point where they’ve realized they need to act soon given how much of this content is escalating.”
It’s the same old brand safety debate, resurfacing in the generative AI era — and, as before, it’s sending marketers back to the cottage industry of measurement and verification firms built to help them make the call.
Over the past month, Zefr has hosted a series of workshops with marketers and agency leaders to help them make sense of it all — breaking down the types of AI-generated content driving views across platforms and working with these execs to figure out how they might decide what they’re fine appearing alongside, and what they’d rather steer clear of.
But those decisions don’t stay fixed for long. What feels safe today can become problematic tomorrow as new AI-driven trends surface by the hour. The speed at which the content is being produced means marketers have to keep watching.
That’s why Zefr built a tool to continuously track AI-generated material appearing in ad campaigns, similar to how traditional brand systems flag risky content across platforms. It gives marketers a view into where their ads are showing up and whether that adjacency feels like an opportunity or a liability.
“This is going to end up being the next big brand safety problem,” said Andrew Serby, chief commercial officer at Zefr.
Eventually, though, it won’t only be a safety issue. It will become a brand suitability one — a test of how much chaos, creativity and algorithmic weirdness a brand is willing to stand next to.
“Whatever signals that content is emanating, from believability to virality to contention, it should be independent of the fact that it was generated by an AI,” said Anudit Vikram, chief product officer at digital video optimization company Channel Factory. “As long as those signals align with a marketer’s brand then it’s their choice to say whether or not they want to be associated with that content.”
He and his team are in the early stages of helping marketers do that very thing.
Step one is showing them whether a video is AI-made at all. That sounds easy enough — plenty of AI videos have telltale seams — but the fidelity is improving fast. The monitoring stack has to keep up, pulling signals from frame-level artifacts and lip-synch oddities to audio cues, watermarking and metadata patterns.
From there, Channel Factory adds that AI classification into a broader analysis of the content and the channel it’s on, looking at factors like industry categorization, age, language and gender classification. The goal is to give marketers a clearer picture of which AI-generated videos they want their ads against and which they might want to avoid, before SpongeBob, or whatever comes next, turns into the next brand safety snafu. Maybe not today, or tomorrow, but that moment will come. It always does.
“Most of them are really just trying to understand what the future will look like with AI-generated video,” said Lindsey Gamble, creator economy expert and advisor, who is having those conversations now. “There are just too many risks, like brand safety, where their content might appear, and potential copyright violations, so most aren’t planning to take action right now. They’re really just waiting to see what other brands do, but generally hesitate until platforms or tools build out solutions that address more of these risks.”
A widening spectrum
In some ways, all of this — the tools, the workshops, the meetings — underscores an uncomfortable truth about modern marketing. It doesn’t just manage risk, it monetizes it. Entire businesses now exist to help brands navigate an environment built on perpetual anxiety, where the wrong adjacency or viral post can spark reputational fallout overnight. It’s not cynical exactly — rather, it’s the cost of doing business in an ecosystem where technology evolves faster than the safeguards meant to contain it.
“We use a framework with clients to identify how far they want to push it and what their guardrails are,” said Salazar Llewellyn, editorial director at ad agency DEPT. “Our approach is always human-led, craft-first, augmented with AI. You can’t automate good judgment. You can use data to inform, but it’s essential to understand what content is good and is worthy of your brand association, and which is just flooding feeds and platforms with this ‘slop’.”
At least, that’s the idea. In reality, it’s not always clear when AI is even being used. Disclosure rules are inconsistent — when they exist at all — and creators don’t always follow them. That opacity makes it even harder for marketers to draw clear lines between what’s acceptable and what’s not, especially as AI content becomes both harder to detect and easier to buy.
YouTube, as always, offers the clearest picture of that tension.
It’s a platform where faceless channel creators build legitimate direct-to-consumer businesses alongside others who use the same tools to farm views within YouTube’s acceptable boundaries.
The result is a rising tide of AI-generated channels of wildly varying quality, from faceless creators like Kurzgesagt, whose videos marry precision and craft, to an ocean of others detached from any editorial judgment or intent to tell the truth.
And that spectrum is only going to get wider. The platforms will make sure of it. The more tools they release, the more people can create, and the more content gets made, the more engagement — and ad revenue — they can capture. The machine keeps feeding itself.
A moment of reckoning
For now, though, most marketers are still in watch-and-wait mode. The launch of OpenAI’s Sora app, which helped socialize AI video creation, caught many off guard. In that moment, they saw both the risk and the reward of it, especially as those videos began spreading across the wider internet and being monetized elsewhere.
Rather than reacting immediately, they’re taking a beat — building frameworks, refining theses and drafting policies that will shape their advertising strategies in the year ahead.
“I would be surprised if brands aren’t implementing these policies in their ad campaigns from Q1 next year,” said Serby.
—reporting by Seb Joseph, Krystal Scanlon and Jess Davies
YouTube draws a line on AI as OpenAI’s Sora sparks backlash
AI has become such a divisive topic for creators, especially with the launch of OpenAI’s standalone Sora app. On the one hand, there are creators leaning into the tools to scale their output and create richer, more compelling content. On the other hand, there are the creators who feel AI content goes against everything they believe in: authenticity.
YouTube, for its part, is trying to strike a balance. Its latest moves suggest a platform eager to distinguish itself from the chaos surrounding OpenAI — and to reassert that creators remain its core constituency.
“We think there’s something to be said for taking the more responsible path,” said Sarah Jardine, senior strategist at SEEN Connects. “If we fail to protect creator IP, we’re at risk of homogenising culture and creativity.”
Jardine’s referring to YouTube’s recent policy updates, including its crackdown on low-effort, AI-generated slop content and the rollout of a new likeness-detection system for creators in its partner program. The tool flags videos that appear to use a creator’s image, whether through altered or synthetic versions and lets them request removals. It’s an early but notable step toward giving creators control over how their likeness is used in the age of generative video.
OpenAI, meanwhile, has taken the opposite tack. Sora’s rollout allowed users to generate videos of real people, living and dead, without consent — a choice that quickly backfired after users began producing disrespectful depictions of Martin Luther King Jr. and other public figures. The company’s belated decision to “pause” such generations at the request of King’s estate underscored a larger problem: its policies were being built in real time in response to PR crises rather than principle.
Varun Shetty, vp of media partnerships at OpenAI, explained its stance in an emailed statement: “We’re engaging directly with studios and rightsholders, listening to feedback, and learning from how people are using Sora 2. Many are creating original videos and excited about interacting with their favorite characters, which we see as an opportunity for rightsholders to connect with fans and share in that creativity. We’re removing generated characters from Sora’s public feed and will be rolling out updates that give rightsholders more control over their characters and how fans can create with them.”
By contrast, YouTube’s play looks less like moral grandstanding and more like pragmatic ecosystem management.
“By building safeguards for creator likeness and tightening monetisation for low-effort AI work, YouTube is protecting the quality of its ecosystem — and, crucially, the relationships between creators, audiences, and brands,” said Thomas Walters, Billion Dollar Boy’s co-founder and chief innovation officer. “The approach contrasts sharply with OpenAI’s recent struggles to define coherent IP and consent policies.”
— Krystal Scanlon
Numbers to know
- $150 billion: the amount by which Google’s market value dropped on Wednesday following the launch of OpenAI’s ChatGPT Atlas browser
- 17.2%: Year-over-year percentage increase in quarterly revenue Netflix achieved ($11.51 billion), despite missing Wall Street earnings expectations, which caused its share price to drop 8%.
- 61%: Percentage of global TikTok users that have made a purchase via TikTok Shop
- 36%: Percentage of marketers who say UGC is extremely important to their social media strategy, compared to just 2% who feel the same way about AI content
What we’ve covered
From hatred to hiring: OpenAI’s advertising change of heart
From CEO Sam Altman declaring his dislike for ads to hiring an ad-platform engineer – Digiday walks through the steps that brought OpenAI to this inevitable U-turn.
TikTok’s ongoing U.S. uncertainty has marketers rethinking next year’s budgets
With TikTok’s future in the U.S. somewhat secured — though China still has to sign off on the deal — some marketers are already taking a cautious approach to 2026 until they know exactly what situation they’ll be dealing with.
Amazon’s next frontier in advertising: the cloud infrastructure it runs on
While securing ad dollars was always a nice bonus, Amazon is betting a lot on its latest launch: a managed cloud network built specifically to handle high-speed, data-intensive transactions that make programmatic advertising possible.
Google’s AdX unit has begun striking deals with media agencies
Despite AdX’s reputation as being a tough nut to crack, Google’s Ad Exchange unit has been offering media agencies post-auction discount deals since January this year.
What we’re reading
How Sam Altman tied tech’s biggest players to OpenAI
The Wall Street Journal reported on how OpenAI CEO Sam Altman went on a dealmaking spree across Silicon Valley, essentially playing the tech giants against each other in a bid to fuel the company’s own agenda and growth plans.
This year was initially dubbed “the year of the agents,” stirring up significant concern about AI taking over human jobs. But as The Information reported, since that promise of total autonomy hasn’t actually been achieved, the industry should lower its expectations of how fast, and how much, this will really change and impact business capabilities.
Five ways of thinking about OpenAI’s new browser
When OpenAI launched its long-awaited browser, ChatGPT Atlas on Wednesday, Platformer’s Casey Newton gave a realistic review of what it’s actually like, and how it compares to traditional browsers like Google’s Chrome.
Paramount’s David Ellison revealed a new structure for its ad sales operations, whereby its incoming chief revenue officer, Jay Askinasi (former Roku senior sales exec) will be responsible for ad sales, while the ongoing role of its current ads chief, John Halley, is still unclear, according to Variety.
More in Marketing
Some creators say brands are delaying their holiday deals later than ever this year
After front-loading budgets in the first half of the year, brands are striking last-minute deals with creators ahead of the holiday shopping season.
Agency new business crunch now permanent, say execs
Agencies report that unreasonable deadlines and time commitments from clients are becoming more common, while new research reveals marketer and agency despair at the pitch process.
How one Midwestern department store sees itself as a ‘hidden gem’ for ‘Instagram brands’
Iowa-based Von Maur considers itself an underdog among department stores. But the retailer says it has unique qualities that are attracting hip brands like Dagne Dover, Ana Luisa and Lulus.