‘Everything is AI now’: Amid AI reality check, agencies navigate data security, stability and fairness
The future of the generative AI hype cycle is up in the air, especially after a report from Goldman Sachs questioning the actual value of AI tools. Still, these tools and platforms, whether they’re built on generative AI or glorified machine learning, have flooded the marketplace. In response, agencies are wading through them via sandboxes — safe, isolated and controlled spaces for testing — as well as internal AI task forces and client contracts.
While artificial intelligence itself dates back decades, the industry’s generative AI arms race started last year with the promise that the technology would make marketers’ jobs easier and more efficient. The jury is still out on that promise, as generative AI remains nascent and faces challenges such as hallucinations, biases and data security. (And that’s to say nothing of the energy demands associated with AI.) Additionally, AI companies sit on heaps of data, which could make hacking more of a concern.
“There’s so many ad platforms out there. Everything is AI now. Is it really? Trying to vet that up front and being thoughtful to that, that’s something we spend a lot of time with,” said Tim Lippa, global chief product officer at marketing agency Assembly.
At this point, generative AI has gone beyond large language models like OpenAI’s ChatGPT, seeping into everything from search functionality on Google and social media platforms to image creation. Agencies, too, have rolled out their own AI experiences for internal use as well as client-facing operations. For example, back in April, Digitas rolled out Digitas AI, its own generative AI operating system for clients. (Find a comprehensive timeline of generative AI’s breakout year here.)
For all the hullabaloo around generative AI, everything is still in testing mode, according to agency executives. It’s also worth considering that some AI efforts are aimed more at generating quick headlines or keeping the C-suite happy by quelling fears of missing the boat on generative AI.
“Some of these solutions right now are still struggling when it comes to [intellectual property] and copyrights and how they protect that, and if they can disclose the data sets that they’re using or training,” said Elav Horwitz, evp and global head of applied innovation at McCann Worldgroup. Recall, for example, that OpenAI’s chief technology officer Mira Murati made headlines in March for refusing to offer details around what data was being used to train Sora, OpenAI’s text-to-video generator.
One of the big issues with generative AI is hallucinations, according to Horwitz. It’s something McCann has been in talks with OpenAI about, trying to nail down exactly what the tech company is doing to resolve that issue because it keeps coming up again and again, she said.
McCann has enterprise-level agreements with major players in the space, including ChatGPT, Microsoft Copilot, Claude.ai and Perplexity AI, all of which have been deemed secure environments by the agency’s legal, IT and finance teams. (Financial details of these agreements were not disclosed.) Only after the platforms are deemed secure are the solutions offered to internal stakeholders. Even before inking any deals with AI partners, Horwitz added, the agency builds its own sandbox environment on its own servers to ensure the safety of sensitive information.
McCann is also currently testing Adobe Custom Models, a content production tool from Adobe. “We can actually use our own visual assets as part of it. We know it’s safe and secure because it’s been trained on our own data. This is when we know we can use it commercially as well,” Horwitz said. The data is the agency’s own through research or client information, she added.
It’s a similar story at Razorfish, where the agency has agreements with larger platforms that keep its own and its clients’ data sandboxed. There’s an approved vendor list to ensure the AI platforms the agency partners with have been trained only on licensed or royalty-free assets, according to Cristina Lawrence, Razorfish’s evp of consumer and content experience.
“Or we need to make sure that the confidential data that are used for the tools are not used for training fodder for the LLMs, which we all know is something that they do,” she added.
Taking it a step beyond sandboxes, Razorfish has legal protections in place that require clients to sign off that they’re aware that generative AI is being used for client work. “You have to understand we have multiple levels of check steps because this is very new, and we want to be completely open and transparent,” Lawrence said.
Again, generative AI is still a new space for marketers. Tools like ChatGPT were originally released to the general public, with the platforms learning as the tech progresses and changes, Lawrence said. There’s yet to be a societal consensus on how AI should be regulated. Lawmakers have been mulling AI regulation as of late, concerned about privacy, transparency and copyright protections.
Until that consensus is reached, the onus is on brands and their agency partners to put up guardrails and parameters to ensure data security and scalability, and to navigate AI’s inherent biases, per agency execs.
“My favorite is always to make sure that the images and what’s going to come out of the creative side has got the right number of fingers and toes and all of those things at the core,” said Lippa. “Everything slapped an AI logo over everything they do over the last year. In some cases, it really is. In some cases, it’s really not.”