For marketers with a keen eye on brand safety and suitability, inclusion lists are something of a gold standard. And like gold, they’re expensive.
Increasingly, media agencies are using generative AI to reduce the lift associated with inclusion lists. “In the past, you [could have] taken hours doing this,” said Tim Lathrop, vp of platform digital at Mediassociates. Now, he added, “you can essentially build a list within minutes.”
When an advertiser uses an inclusion list, it restricts programmatic spend to a list of publishers and sites. It’s the inverse of an exclusion list, which simply removes publishers a marketer doesn’t want to spend with.
In theory, this ensures ad spend only goes where it should – and insures against nasty made-for-advertising (MFA) shocks. In practice, it’s a time-consuming – and therefore expensive – measure.
“The amount of effort that it takes to stand one up and keep it up to date is substantial,” said Forrester ad-tech analyst Evelyn Mitchell-Wolf. As a consequence, it’s a practice used by a significant minority of media practitioners. Per Forrester’s Q3 2025 CMO Pulse Study, 42% of U.S. consumer marketing decision-makers use publisher inclusion lists.
Given the volume of marketers’ brand safety concerns, some media buyers see inclusion lists as one of the only ways to spend on open web inventory while satisfying client concerns. Mindshare, for example, is “heavily in on inclusion,” according to its chief transformation officer, Alexis Faulkner.
“There’s some really good quality inventory in there that is still performing, but it gets lumped in with all the shit,” said Faulkner. “Unless you can use technology to understand that and understand quality signals, not just safety signals, we’re doing a disservice to a whole medium of media.”
Cutting the time it takes to compile a brand-specific inclusion list (or to maintain an agency’s central inclusion list) could see the practice gather momentum.
In basic terms, agencies like Mediassociates are taking their established inclusion lists and comparing them against up-to-date information from a service like The Trade Desk’s OpenSincera. Then, they’re using a generative AI tool like ChatGPT alongside client briefs to identify gaps in their list – publishers they’ve left off in a given category – for further consideration by buyers or planners. Assembly, for example, uses Microsoft’s Copilot, per head of programmatic Wayne Blodwell.
While the rhythm of the process is the same as before, the AI element allows agencies to expand a list faster than they’d be able to with only human analysts. “If you’re dealing with 100,000 plus domains, it’s very hard to go through it manually,” said Blodwell.
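The gap-finding step described above – comparing an existing list against fresh publisher metadata and surfacing missed candidates for human review – can be illustrated with a minimal sketch. The function name, the domain names, and the metadata fields (`category`, `quality_score`) are all hypothetical, standing in for whatever signals an agency actually pulls from a service like OpenSincera:

```python
# Hypothetical sketch of the list-gap review step. All domains, fields
# and thresholds below are invented for illustration only.

def find_candidate_gaps(inclusion_list, publisher_metadata, category, min_quality=0.7):
    """Return domains in a category that clear a quality bar but are
    missing from the current inclusion list, for human review."""
    current = set(inclusion_list)
    return sorted(
        domain
        for domain, meta in publisher_metadata.items()
        if domain not in current
        and meta["category"] == category
        and meta["quality_score"] >= min_quality
    )

# Made-up example data
inclusion_list = ["news-a.example", "sports-b.example"]
publisher_metadata = {
    "news-a.example":   {"category": "news",   "quality_score": 0.9},
    "news-c.example":   {"category": "news",   "quality_score": 0.8},
    "lowq.example":     {"category": "news",   "quality_score": 0.2},
    "sports-b.example": {"category": "sports", "quality_score": 0.85},
}

print(find_candidate_gaps(inclusion_list, publisher_metadata, "news"))
# ['news-c.example']
```

In practice the candidate set would come out of an LLM prompt over client briefs and metadata rather than a fixed filter, but the key design point is the same: the tool only proposes additions; a buyer or planner still makes the final call.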
According to Taji Zaminasli, co-founder and managing partner at media agency AxM, “It’s really helpful at uncovering ideas we might not have tested out.”
It’s not foolproof, however. “We always need to keep a really fine eye on it because some of the recommendations aren’t a fit,” noted Zaminasli, who told Digiday her agency had been using the strategy for over a year. She estimated it had made the compilation process 30% faster.
“We’re still working with LLMs – they’re not perfect,” said Lathrop.
Though AxM, Assembly and Mediassociates hadn’t held back, other media agency execs said their skepticism of gen AI had led them to limit its use. One executive, who exchanged candor for anonymity, said their agency was using gen AI tools to categorize and sort the publishers within an inclusion list – but not to compile them.
“It’s a little premature to do exactly what we would hope it could do,” they told Digiday. “Even with guardrails in place, you’re still putting a lot of trust into the subjective opinion that AI spits out for you. That’s why we still rely on manual and human interaction with the sites listed.”
For others, brand suitability questions are too important to be taken out of human hands. Louise Owens, chief performance officer at Kinesso, said the agency used an “AI console” with generative AI-sourced media planning recommendations, but that inclusion lists were still a reserved responsibility.
Reducing the time spent compiling inclusion lists doesn’t answer the other questions industry skeptics hold about the practice. The number of publishers contained on such lists varies depending on client needs, ranging from as low as 600 to as high as 50,000. By taking a deliberately limited approach to ad spending, advertisers might reduce the reach of their campaigns unnecessarily – and concentrate spend among a small number of already established publishers.
“The challenge with that sort of approach is it doesn’t really care about the small to medium size publishers,” said ad tech consultant Jonathan D’Souza-Rauto.
And automating the process could grant buyers and brands the time to make more nuanced decisions about edge cases – or to open things up, should their plan prove to be drawing on too shallow a pool of publishers. Mike O’Sullivan, general manager of product at The Trade Desk, said: “Even if there were a case of under-delivering [on campaign performance], just loosen the restrictions.”