
How AI regulation differs in the U.S. and EU

This article was first published by Digiday sibling WorkLife

There’s a global artificial intelligence race — and so far the U.S. appears to be in the lead. 

That’s in part because the U.S. is home to huge tech companies like OpenAI, Microsoft, Google and Meta. But it’s also because of the lack of federal legislation. So far, the only U.S. legislation specifically targeting AI is New York City’s Local Law 144, which requires that a bias audit be conducted on automated hiring tools. States like California and New Jersey aren’t far behind in drafting legislation of their own. 

The White House has issued an executive order on safe, secure and trustworthy AI and a Blueprint for an AI Bill of Rights. The Equal Employment Opportunity Commission (EEOC) has also been firm in saying that it will continue to uphold Title VII of the Civil Rights Act, which protects job seekers and workers from discrimination, whether the risk comes from a human or a robot. 

Here’s a look at how the regulatory approaches taken in the U.S. and the European Union compare.

The EU’s precautionary approach

Overall, the U.S. has taken a more decentralized and sector-specific approach to AI regulation. Across the Atlantic, the EU has taken a more comprehensive and precautionary tack, embodied in the EU AI Act, which the European Parliament approved in June 2023 and which is due to be finalized before the European Parliament elections in June 2024. The law would classify AI systems by level of risk and mandate regulations depending on which category they fall into. 

The legislation focuses on five main priorities: AI use should be safe, transparent, traceable, non-discriminatory and environmentally friendly. It also requires that AI systems be overseen by people rather than by automation, establishes a technology-neutral, uniform definition of what constitutes AI, and would apply both to systems that have already been developed and to future AI systems. 

Both the U.S. and the EU hold pivotal positions in the future of global AI governance, setting standards for AI risk management. However, Europe-based tech startups are concerned that the heavier legislation coming out of the EU will hinder innovation, leaving them to fall behind their U.S. counterparts, who face far less red tape.

The issue has led leadership teams at companies like the French AI company Mistral to lobby for diluted regulations, arguing that they make the global AI innovation race unequal.

‘We don’t want to cripple our winner, right?’

The U.K. lands somewhere in the middle. No longer part of the EU, the country is developing its own AI rulebook. And yet, just as with the General Data Protection Regulation, if a U.K. company has customers in the EU and works with partners across its member states, it will need to play by the EU AI Act’s rules.

It’s a rock-and-a-hard-place situation. “Then every government kind of says, ‘well, we don’t want to cripple our winner, right?’” said James Clough, CTO and co-founder of Robin AI, a U.K.-based startup using AI to transform the legal industry. “They might see that they might have a really successful AI company growing in their country and they don’t want to regulate it away. But then it gets harder and harder to come up with meaningful regulations.”

And any regulation, whatever its shape, brings bureaucracy. That burden is heaviest on smaller companies, which can’t match the legal and compliance resources of the tech giants.

“The result of that is it tends to favor established players and big companies,” said Clough. “They [big tech] can handle all of that regulation and it doesn’t stop them from doing what they want to do. Whereas smaller companies might be doing something really innovative, but if they don’t have the big compliance team to write a big report on potential risks, it makes it harder for them to innovate.”

To read the full article, click here.
