As agencies evolve AI tools for influencer vetting, they’re also discovering the tech’s limitations
Influencer agencies have embraced generative AI applications over the last year, as they seek to cut the time taken to arrange creator involvement in brand campaigns.
The client reaction to those solutions has been mixed. But in recent months, agencies operating in this space have found one area with clear application for AI tools — brand safety.
Creator vetting can consume up to “a few days to a couple of weeks, depending on the depth of analysis,” said James Clarke, senior director, digital and social at PepsiCo Foods U.S. AI solutions aim to cut that time down to a matter of minutes.
Setups differ between agencies, but most of those pushing ahead have combined AI-assisted search tools with generative AI applications that analyze the content of social posts. The systems are used to find creators who are a good fit for a brand, and to disqualify those whose output violates a client’s guidelines.
Marketers have grown more sensitive and cautious about choosing influencer partnerships in recent years. Guardrails might cover depictions of alcohol consumption, swearing or political speech, according to agency executives. According to Aynsley Moffitt, director of product and growth at Open Influence, the “majority” of clients limit vetting to the last six months of a creator’s social activity, though some have asked the company to review up to five years’ worth of content.
“This comprehensive approach helps protect brand reputation while identifying creators who will be authentic advocates for our clients,” Moffitt said.
“It’s a first line of defence,” added Ben Jeffries, CEO and co-founder of creator agency Influencer.
The time saved might enable an agency to take on more briefs or manage a higher number of creator relationships, allowing it to service larger campaigns.
“It’s an efficiency and scale play,” Mae Karwowski, the founder of WPP creator agency Obviously, told Digiday. “The more time we can save on the ‘first pass’ of content or review of creators’ profiles, the more time we can spend on strategy and optimization.”
Demand among advertisers for time-saving AI creator vetting — and the number of companies offering it — is on the rise. Viral Nation has sold such a tool since 2023, but agencies aren’t the only ones responding.
Lightricks, the software company behind apps such as Facetune, is currently developing an AI-based creator vetting tool called SafeCollab. The company began an open beta in November after working with six brand advertisers, including PepsiCo, through 2024.
The software scans and analyzes an influencer’s activity using both public social posts and access granted by creators who use Lightricks’ Popular Pays influencer platform, said Corbett Drummey, vp of brand partnerships at Lightricks. The tool currently covers Instagram and TikTok; X and YouTube were next to be tested.
“It basically does a cursory Google internet search for them, summarizes it, and then it will summarize the [social] content that it indexed,” he explained.
According to PepsiCo’s Clarke, the beta test using SafeCollab was one of several ongoing trials using AI for creator marketing.
“By leveraging emerging technologies… we are confident that our teams will be able to move faster, operate more efficiently, and increase the effectiveness of creator vetting,” he said in an email. Clarke declined to share the “red lines” PepsiCo used to disqualify creators.
Each background report generated using the software costs several hundred dollars, Drummey said, though he declined to provide precise financials. Lightricks’ product is intended as a self-service application for in-house teams, but agencies such as Influencer, Obviously and Open Influence each offer similar solutions for creator selection.
Creator marketing agency Props offers a solution built with API access to Google’s Gemini model, while Influencer has used a proprietary software built upon API access to ChatGPT to service its entire client roster since October.
Prior to that point, Influencer’s staffers had to OK a creator by manually trawling back through their feeds. Influencer’s Jeffries said the AI solution had helped speed up the work of clearing influencers for partnerships and monitoring campaign outputs, and had informed the agency’s post-campaign reports.
Automating creator vetting carries larger implications than time savings alone, though. Large language models’ habit of hallucinating responses, or of reproducing the biases of their training data, provides a shaky foundation for decision-making.
Creator marketing agency Props’ solution, called Ollie, occasionally misinterprets the images it’s asked to analyze, said Megan Matera, director of client success. In one example she gave, a creator had posted an image showing them drinking beer at a spa. “Ollie had flagged that as ‘they were bathing in beer,’” Matera said.
In the case of SafeCollab, Drummey said his team was in the process of adding custom filters to the software because its default settings flagged too many posts as risky. Clients had told the company the results were “too alarmist,” he said, adding: “we’re going to have to change the way we present this stuff.”
To guard against such issues, none of the applications built by Lightricks or its agency peers can make the final call on which creators get the green or red light. Instead, the systems flag problematic posts for a human staffer to rule on.
“We always have our human team review all work done via AI,” said Obviously’s Karwowski. “Having experienced team members review any generated insights means we can provide real-time feedback when something seems off,” added Open Influence’s Moffitt.
Still, there’s an uncomfortable parallel between the rise of AI-assisted influencer selection and brand marketers’ overuse of programmatic brand safety filters, which have been blamed for effectively defunding news organizations.
With influencer marketing becoming increasingly “programmatic” — in both concept and execution — automating the task of creator vetting risks repeating that harm: defunding creators whose activity is deemed to fall outside the realm of the acceptable, without their knowledge.
Jeffries suggested that human intervention in AI-powered selection systems would always be necessary.
“Influence marketing is not just a media buy, it’s also a creative buy,” he concluded.