Marketing strategists search for a solution to AI’s all-too-predictable outputs
Surprisingly early in the era of AI’s influence on the creative process, a problem is emerging: homogeneity. Besides forgoing AI altogether, are there workarounds?
Marketing strategists are now habitual users of generative AI. But as tools like ChatGPT and Claude embed deeper into the work of agency execs and freelancers, more strategy practitioners are finding that the predictability of those tools’ responses is putting a lid on productivity gains.
And as they search for a workaround or architectural solution to the so-called “sameness trap”, ad execs are confronting questions over how they add value for clients, as well as how much they take — and expect — from AI tools, and who gets credit for originality.
‘My second brain’
Ad creatives have often turned to external tools for an assist to their thinking, from brainstorming card games to art books or creative memoirs. Oliver’s U.K. chief strategy officer Nick Myers keeps a copy of Bill Bernbach’s Book by his desk, for example. But with modern workplaces increasingly atomized and remote, many strategists are using AI tools as a stand-in for those missing colleagues and references.
“I’m not operating in a strategy team, so I’ve always had to kind of build a virtual team around myself,” said Zoe Scaman, the founder of strategy studio Bodacious. Scaman, who said she frequently uses tools like NotebookLM and ChatGPT, puts the tools to use for research, document analysis, and as a sounding board for ideas. “Claude is basically my second brain,” she told Digiday.
At creative agency Zeal, chief strategy officer Lorna Hawtin said the team uses a “Banging Brief Bot” designed within Claude to critique strategy briefs against the company’s house style.
“We’ve taken the stabilizers off. We’re using [AI] for fact finding, cultural deep dives… [and] semiotic analysis,” she said.
Myers said his team uses LLM tools to create focus groups of AI agents, primarily via Pencil, the AI suite developed by its Brandtech-owned sister company of the same name.
Scaman, who has written extensively on her Substack newsletter about the use — and limitations — of AI in her work, developed several projects within Claude for critiquing her ideas and work; one was dubbed “My Own Worst Critic.”
“I put ideas in there, and the whole point is that Claude is supposed to tell me if it’s shit,” she explained. “It’s a really good way to push myself further and to say, actually, ‘That’s not good enough, I can do better.’ I write quite prolifically, but about 80% of the writing I do is killed by that project.”
She developed another based on the published writings of sci-fi author Ursula K. Le Guin (who died in 2018), to offer up unusual suggestions to written prompts. “I call it ‘Ursula Bot’,” said Scaman. “I put a lot of my stuff through there and I get ‘her’ to push it into different territories.”
The ‘sameness trap’
Even an LLM trained on The Left Hand of Darkness can’t produce genuinely new ideas, though. That limits these tools’ usefulness to strategists asked to provide a fresh angle on a client problem.
“You don’t get those sideways, magic moments. There’s an electricity that can sometimes pull two thoughts together from seemingly distant poles in your brain,” said Hawtin. “You miss that edge [with AI].”
In part, that’s a consequence of their design. Because tools like ChatGPT work by selecting the most probable sequences of words, their answers to prompts trend toward the average. The effect is compounded by the overlap in the training data used by popular LLMs like ChatGPT and Claude.
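The dynamic is easiest to see in a toy sampler. The sketch below is not any vendor’s actual decoder, and the word probabilities are invented for illustration: always picking the single most probable next word produces the identical answer on every run, while sampling in proportion to probability restores some variety — at the cost of sometimes landing on the “average” answer anyway.

```python
import random

# Invented next-word probabilities a model might assign after a
# prompt like "Our brand is..." (illustrative numbers only).
NEXT_WORD_PROBS = {
    "innovative": 0.40,   # the "average" answer dominates
    "authentic": 0.30,
    "bold": 0.15,
    "luminous": 0.10,
    "feral": 0.05,        # the unusual choice is rarely picked
}

def greedy_pick(probs):
    """Always take the most probable word -- deterministic."""
    return max(probs, key=probs.get)

def sample_pick(probs, rng):
    """Draw a word in proportion to its probability -- allows surprise."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
greedy_runs = {greedy_pick(NEXT_WORD_PROBS) for _ in range(100)}
sampled_runs = {sample_pick(NEXT_WORD_PROBS, rng) for _ in range(100)}

print(greedy_runs)        # the same single word, 100 times over
print(len(sampled_runs))  # several distinct words appear
```

Real decoders add knobs like temperature and top-k cutoffs on top of this basic trade-off, but the core tension — determinism versus diversity — is the same one strategists are bumping into.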
“An environment like Uncommon puts a bonus on creativity, uniqueness, things you’ve never thought of before,” said Maximilian Weigl, Uncommon’s co-founder and chief strategy officer. “Larger models can’t [provide that] because very often they bring you back to a presumed mean, something that’s probable for many people.”
Some strategists have looked to jury-rig solutions to the issue. Myers, for example, said his team had begun using a larger range of LLMs to build AI agent personas, to spread the net wider and dodge the “sameness trap”. Scaman said she’d spent 18 months working to “jailbreak” Claude through personalized prompts and data. Others simply don’t use the tools for certain tasks.
The divergence model
But workarounds can’t solve a problem that’s deep in the architecture of LLMs like Llama, Claude and ChatGPT. Some practitioners believe a more technical approach is the answer.
In 2025, researchers at Carnegie Mellon University developed an open-source methodology, “NoveltyBench,” for testing the variety of responses given by LLMs. The researchers found that larger AI models offered less diverse responses, and that even with intentionally designed prompts, tools like Llama and Gemini exhibited “a fundamental lack of distributional diversity.”
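NoveltyBench’s actual scoring is more sophisticated — it groups generations into equivalence classes and weights them by quality — but the core idea can be illustrated with two naive measures: generate repeatedly from the same prompt, then score how spread out the answers are. The sketch below uses hypothetical slogan outputs, not real model generations.

```python
import math
from collections import Counter

def distinct_fraction(responses):
    """Share of generations that are unique -- 1.0 means every answer differs."""
    return len(set(responses)) / len(responses)

def response_entropy(responses):
    """Shannon entropy of the response distribution, in bits.
    Higher = generations spread more evenly across distinct answers."""
    counts = Counter(responses)
    total = len(responses)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Ten generations from a hypothetical "samey" model vs. a diverse one.
samey = ["Empower your everyday."] * 8 + ["Dream bigger.", "Empower your everyday."]
diverse = [f"slogan {i}" for i in range(10)]

print(distinct_fraction(samey), distinct_fraction(diverse))   # 0.2 vs. 1.0
print(response_entropy(samey), response_entropy(diverse))     # low vs. high
```

Measures like these make the “sameness trap” quantifiable: a model can ace single-answer quality benchmarks while still collapsing onto one response whenever it’s asked the same question twice.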
Sydney-based startup Springboards has been developing a small language model, Flint (based on an open-source Qwen model from Alibaba with 30 billion parameters, compared with the roughly 2 trillion in the largest Llama models), designed to exhibit a higher degree of variation and unpredictability in its responses.
“It’s designed for people to let them expand the latest knowledge, or spark an idea in their head and do so in a way which doesn’t give you the same answer as every single person,” said Pip Bingemann, co-founder and CEO of Springboards, which counts Australian ad agencies Cummins & Partners and BMF among its customers for Flint.
While major LLMs average a 2.88 out of 10 using the NoveltyBench framework, Flint scored 7, according to Bingemann. He and co-founder Amy Tucker, a former Twitter and Shopify exec, call Flint a “divergence” model for its ability to generate responses that differ from the mean.
“If you’re writing jokes or screenplays you might not want the same as everyone else. It’s like deciding what’s for dinner: maybe you don’t want to have the same damn recipe suggestions all the time,” said Bingemann. Although an alpha version of the model is available, he said Springboards plans to release an API later this year.
“It pushes you out of your comfort zone,” said Uncommon’s Weigl, who tested the model prior to its release last week. In short, Flint offers a technical solution to the “trap” of predictability.
But should strategists be looking for technical fixes at all, when clients hire them — ultimately — for their ability to provide original thinking? Myers suggested that ad creatives would be better off curating their own sources of inspiration and novelty.
“Are strategists going to be replaced? If you’re not feeding yourself, then yes,” said Myers. “But if you can bring something to the table and use the tool as a partner, you have a better chance of having a future.”
Weigl suggested that rather than becoming another mental crutch for strategists, a tool like Flint could instead provide “healthy competition.”
“The best work probably comes from competition in some shape or form,” he added. “There is an advantage in having something that pushes you.”