As AI regulations loom, tech companies move to raise their own standards

With government officials exploring ways to rein in generative AI, tech companies are looking for new ways to raise their own bar before it’s forced on them.

In the past two weeks, several major tech companies focused on AI have added new policies and tools to build trust, avoid risks and improve legal compliance related to generative AI. Meta will require political campaigns to disclose when they use AI in ads. YouTube is adding a similar policy for creators who use AI in uploaded videos. IBM just announced new AI governance tools. Shutterstock recently debuted a new framework for developing and deploying ethical AI.

Those efforts aren’t stopping U.S. lawmakers from moving forward with proposals to mitigate the various risks posed by large language models and other forms of AI. On Wednesday, a group of U.S. senators introduced a bipartisan bill that would create new transparency and accountability standards for AI. The “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” is co-sponsored by three Democrats and three Republicans, including U.S. Senators Amy Klobuchar (D-Minn.) and John Thune (R-S.D.).

“Artificial intelligence comes with the potential for great benefits, but also serious risks, and our laws need to keep up,” Klobuchar said in a statement. “This bipartisan legislation is one important step of many necessary towards addressing potential harms.”

Earlier this week, IBM announced a new tool to help detect AI risks, anticipate future concerns, and monitor models for bias, accuracy, fairness and privacy. Edward Calvesbert, vp of product management for watsonx, described the new watsonx.governance as the “third pillar” of IBM’s watsonx platform. Although it will initially be used with IBM’s own AI models, the plan is to expand the tool next year to integrate with LLMs developed by other companies. Calvesbert said that interoperability will help provide an overview of sorts across various AI models.

“We can collect advanced metrics that are being generated from these other platforms and then centralize that in watsonx.governance,” Calvesbert said. “So you have that kind of control tower view of all your AI activities, any regulatory implications, any monitoring [and] alerting. Because this is not just on the data science side. This also has a significant regulatory compliance side as well.”

At Shutterstock, the goal is likewise to build ethics into the foundation of its AI platform. Last week, the stock image giant announced what it has dubbed the TRUST framework, which stands for “Training, Royalties, Uplift, Safeguards and Transparency.”

The framework is part of a two-year effort to address a range of issues such as bias, transparency, creator compensation and harmful content. The effort will also help raise standards for AI overall, said Alessandra Sala, Shutterstock’s senior director of AI and data science.

“It’s a little bit like the aviation industry,” Sala said. “They come together and share their best practices. It doesn’t matter if you fly American Airlines or Lufthansa. The pilots are exposed to similar training and they have to respect the same guidelines. The industry imposes best standards that are the best of every player that is contributing to that vertical.”

Some AI experts say self-assessment can only go so far. Ashley Casovan, managing director of the AI Governance Center at the International Association of Privacy Professionals, said accountability and transparency are harder to achieve when companies can “create their own tests and then check their own homework.” Creating an external organization to oversee standards could help, she added, but that would require agreed-upon standards, along with ways to audit AI that are both timely and not cost-prohibitive.

“You’re either going to write the test in a way that’s very easy to succeed or leaves things out,” Casovan said. “Or maybe they’ll give themselves an A- to show they’re working to improve things.”

What companies should and shouldn’t do with AI also continues to be a concern for marketers. When hundreds of CMOs met recently at the Association of National Advertisers’ Masters of Marketing summit, the consensus centered on how to keep pace with AI without taking on too much risk.

“If we let this get ahead of us and we’re playing catch up, shame on us,” said Nick Primola, group evp of the ANA Global CMO Growth Council. “And we’re not going to do that as an industry, as a collective. We have to lead, we have so much learning from digital [and] social, with respect to all the things that we have for the past five or six years been frankly just catching up on. We’ve been playing catch up on privacy, catch up on misinformation, catch up on brand safety, catch up forever on transparency.”

Although YouTube and Meta will require disclosures, many experts have pointed out that it’s not always easy to detect what’s AI-generated. However, the moves by Google and Meta are “generally a step in the right direction,” said Alon Yamin, co-founder of Copyleaks, which uses AI to detect AI-generated text.

Detecting AI-generated content is a bit like antivirus software, Yamin said: even with tools in place, they won’t catch everything. However, scanning text-based transcripts of videos could help, along with adding ways to authenticate videos before they’re uploaded.

“It really depends how they’re able to identify people or companies that are not actually stating they are using AI even if they are,” Yamin said. “I think we need to make sure that we have the right tools in place to detect it, and make sure that we’re able to hold people in organizations accountable for spreading generated data without acknowledging it.”
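The transcript-scanning idea Yamin describes lends itself to a simple illustration. The sketch below is a hypothetical, minimal version of that workflow, not Copyleaks’ actual product or API: the detector is a toy heuristic standing in for a trained model, and every function name here is invented for the example.

```python
# A rough sketch of transcript-based disclosure checking: score a video's
# transcript with an AI-text detector and flag uploads that look
# machine-generated but carry no AI disclosure. All names are hypothetical.
import statistics


def score_ai_likelihood(text: str) -> float:
    """Toy stand-in for a real AI-text detector, which would use a trained model.

    This proxy measures how uniform sentence lengths are, returning a value
    from 0.0 (more human-like variation) to 1.0 (more machine-like uniformity).
    """
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.0
    lengths = [len(s.split()) for s in sentences]
    mean = statistics.mean(lengths)
    spread = statistics.pstdev(lengths)
    # Lower length variation relative to the mean scores as more "AI-like".
    return max(0.0, 1.0 - spread / mean)


def flag_undisclosed_ai(transcript: str, disclosed: bool, threshold: float = 0.8) -> bool:
    """Flag a video whose transcript scores as AI-generated but has no disclosure."""
    if disclosed:
        return False  # the creator already labeled the content, nothing to flag
    return score_ai_likelihood(transcript) >= threshold
```

As Yamin notes, a scanner like this is closer to antivirus software than to a guarantee: a real deployment would pair a far stronger detector with upload-time authentication, and would still miss some cases.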
