With AI bias still a sticking point for clients, agencies mix human and technical fixes


Since the advent of generative AI roughly two years ago, one concern has persisted among marketers contemplating the advantages of the tech — bias.

Agencies such as DDB, Monks and Huge, as well as generative AI marketing platform Pencil, hope to bolster their clients’ confidence through a mix of human corrections and technical fixes. Pencil, for its part, this week unveiled a set of new features its executives hope will quell client worries and provide a workable fix, if not a complete solution, for the issue.

Pencil, owned by marketing services group Brandtech, has added a “Bias Breaker” feature to its AI tool suite, begun offering advisory services to help clients write bespoke AI ethics and legal policies, and incorporated anti-bias units into its training scheme for clients.

Though the new features can’t remove human bias from creative decisions, said Rebecca Sykes, Brandtech group partner and head of emerging technology, they can give marketers a better “starting point.”

“We’re raising the floor, not the ceiling,” she told Digiday.

Pencil’s software pulls from several different AI models, including Meta’s Llama, Google’s Imagen and Adobe’s Firefly, to fuel its outputs. But given the biases present in the datasets behind those models, it carries a risk of producing marketing assets that exhibit racial, gender or age bias.

For example, if the majority of images of business executives within a model’s training data show white men in suits, a user who inputs a prompt requesting an illustration of a chief operating officer will likely be shown only men of that demographic.

In basic terms, Bias Breaker is a prompt engineering feature that generates an additional line to be appended to an AI prompt, requesting that the model also draw upon diverse characteristics, such as age, gender or race, based on a probabilistic approach.

Sykes uses the metaphor of a set of dice being rolled to decide the additional line, with each die representing an aspect of inclusivity or diversity. She explained: “When you put your simple prompt in, you roll the dice behind the scenes, and between zero, one, and two forms of inclusivity are added to your prompt. That’s zero, so that you don’t always over-correct, one, so that you have representation, and two, so that you can have the possibility of intersectionality.”

Once the “dice roll” has been completed and the adjustment automatically made, the user ends up with a “more sophisticated” prompt. It’s a post-hoc fix for issues inherent to the models Pencil draws upon, but one Sykes believes should allay client concerns around bias. “This is not a solution,” said Sykes. “It’s a first step in moving towards a really responsible deployment of AI with ethics front and center.”
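Pencil hasn’t published Bias Breaker’s internals, but Sykes’ description maps to a simple mechanic. A minimal Python sketch, assuming illustrative trait lists and a uniform dice roll (Pencil’s actual dimensions, wording and weighting are not public):

```python
import random

# Hypothetical sketch of a "Bias Breaker"-style prompt augmenter, reconstructed
# from Sykes' description. The trait lists and the uniform 0/1/2 roll are
# illustrative assumptions, not Pencil's actual implementation.
DIMENSIONS = {
    "age": ["in their 20s", "middle-aged", "in their 70s"],
    "gender": ["a woman", "a man", "a nonbinary person"],
    "ethnicity": ["Black", "East Asian", "South Asian", "Latina/Latino", "white"],
    "disability": ["who uses a wheelchair", "who wears a hearing aid"],
}

def bias_breaker(prompt: str) -> str:
    """Roll the 'dice': append zero, one or two inclusivity traits to the prompt."""
    # Zero avoids constant over-correction; two allows for intersectionality.
    num_traits = random.choice([0, 1, 2])
    if num_traits == 0:
        return prompt
    picked = random.sample(sorted(DIMENSIONS), num_traits)
    extras = ", ".join(random.choice(DIMENSIONS[d]) for d in picked)
    return f"{prompt}. Depict the subject as {extras}."

print(bias_breaker("An illustration of a chief operating officer"))
```

Running a sketch like this a few times shows why the zero branch matters: without it, every single generation would carry an explicit diversity instruction, the over-correction Sykes says the feature is built to avoid.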

Concerns about bias, about the legal status of AI-derived media assets and about the ethics of using them in marketing campaigns have been on marketers’ minds since AI emerged as a viable investment following the launch of ChatGPT in late 2022. Though Sykes said questions about the legality of generative AI tools had faded (clients are “working their way through their own risk tolerance,” she said), agency execs say bias comes up frequently in client discussions.

“Bias is a very real concern,” said George Strakhov, global head of creative technology at DDB, in an email. “We have conversations with clients on this all the time.”

“If the data used to train AI models is biased or one-sided, the AI system can learn and perpetuate those biases. This can lead to skewed results that reinforce existing inequalities or stereotypes,” said Marc Maleh, chief technology officer of Interpublic Group shop Huge.

It’s a worry that extends beyond marketing production concerns. In a Gartner survey of over 100 audit executives conducted in August 2023, 54% recognized diversity, equity and inclusion as a key risk they planned to investigate in 2024; 42% highlighted unreliable outputs from AI tools.

Geert Eichhorn, executive innovation director at Monks, said the agency tackles bias concerns through bespoke approaches. “Typically we mitigate [concerns] by planning and prompting for specifics,” he wrote in an email. “For example, for a brand’s updated stock library we were asked to generate 200 people in 200 different locations (within the same country) and we used census data to split those 200 people in gender, ethnicity and disabilities.”
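Eichhorn didn’t detail the mechanics, but the census-weighted planning he describes amounts to stratified quota sampling. A rough Python sketch, with made-up proportions and categories standing in for the real census data:

```python
# Hypothetical sketch of census-weighted prompt planning as Eichhorn describes
# it. The strata and proportions below are illustrative stand-ins, not Monks'
# actual census data or categories.
CENSUS_SHARE = {  # fraction of the population per (gender, ethnicity) stratum
    ("woman", "white"): 0.38,
    ("man", "white"): 0.37,
    ("woman", "Black"): 0.07,
    ("man", "Black"): 0.06,
    ("woman", "Asian"): 0.06,
    ("man", "Asian"): 0.06,
}
TOTAL_IMAGES = 200
DISABILITY_RATE = 0.15  # illustrative share of prompts specifying a disability

# Turn population shares into per-stratum image quotas.
quotas = {stratum: round(share * TOTAL_IMAGES) for stratum, share in CENSUS_SHARE.items()}

prompts = []
for (gender, ethnicity), count in quotas.items():
    for i in range(count):
        subject = f"a {ethnicity} {gender}"
        if i < round(count * DISABILITY_RATE):  # spread disability across strata
            subject += " with a visible disability"
        prompts.append(f"Stock photo of {subject}, location #{len(prompts) + 1}")

print(len(prompts), "prompts planned, e.g.:", prompts[0])
```

The point of planning the quotas up front, rather than rolling dice per prompt, is that the finished library matches the target population by construction.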

His colleague Anoesjka van Niekerk, a legal director in charge of Monks’ global legal AI framework and part of its AI Core team, added that the company’s legal, privacy and infosec departments each work to vet the tools it uses.

At DDB, Strakhov said a human-first approach to heading off bias would be more effective than relying on more tech.

“There is bias that affects people who are using ‘out of the box’ tools just as they [the tools] are,” he said. “If you invest time to fine tune, augment and build extra safety systems on top of the default models (as we do), then it’s not really a problem any more. You just have to put in the time and effort and then build a process where there is oversight — both from a trained human and from another AI.”
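Strakhov didn’t specify DDB’s tooling, but the process he describes, a second AI screening outputs and a trained human reviewing whatever gets flagged, can be sketched generically (the scorer below is a labeled placeholder, not DDB’s system):

```python
# Generic sketch of layered oversight: a second AI scores each generated asset
# for skewed representation, and flagged assets go to a trained human reviewer.
# ai_bias_score() is a placeholder, not DDB's actual safety system.
def ai_bias_score(asset_description: str) -> float:
    """Stand-in for a judge model (a classifier or an LLM) returning a 0-1
    estimate of how skewed the asset's representation is."""
    return 0.0  # placeholder: a real pipeline would call a judge model here

def review(assets: list[str], threshold: float = 0.5) -> tuple[list[str], list[str]]:
    """Split assets into auto-approved and flagged-for-human-review."""
    approved, flagged = [], []
    for asset in assets:
        (flagged if ai_bias_score(asset) >= threshold else approved).append(asset)
    return approved, flagged
```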

At Faith, the AI unit of agency VCCP, managing partner and head of social and innovation Alex Dalman said it took a similar approach on a recent project for client Cadbury. “We use human oversight, primarily — we don’t believe that GenAI is a replacement for human experience and expertise, it’s a tool that we have to deploy carefully,” she said in an email.

At Huge, Maleh said his team aims to create bespoke training models for clients that mitigate bias via an intensive testing program and technical guardrails, which together form “a continuous feedback loop against bias.” But the agency’s engagements begin with helping clients write guidelines that define what’s acceptable and what’s not.

Despite expressing confidence that Pencil’s Bias Breaker can mitigate risk for clients, Sykes agreed that the human touch is still the most important backstop.

“We don’t believe in 100% automation of anything,” she said. “You will still have a human who says ‘that’s the one for us.’ We still have that filter.”

