
At the Senate’s first AI hearing, lawmakers and OpenAI and IBM execs weigh risk and regulation

This story was first published by WorkLife sibling Digiday.

To illustrate the dangers of generative AI, Sen. Richard Blumenthal opened yesterday’s Congressional hearing on AI oversight with something unexpected from a top government official: a deepfake audio recording of himself talking about the risks of AI and the need to regulate it.

The AI-generated clip was trained on recordings of Blumenthal’s floor speeches, with a script produced by asking ChatGPT how the senator might open a hearing about AI regulation.

“Too often, we have seen what happens when technology outpaces regulation,” Blumenthal’s AI-generated voice said. “The unbridled exploitation of data, the proliferation of disinformation and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want.”

At the U.S. Senate Judiciary Committee’s hearing, lawmakers spent several hours on Tuesday questioning top experts about generative AI, its risks and how to regulate it. The hearing also marked the first appearance before Congress for OpenAI CEO Sam Altman, who testified alongside Christina Montgomery, IBM’s chief privacy and trust officer, and AI expert and NYU professor Gary Marcus.

Lawmakers took turns expressing their concerns about data privacy, election manipulation, copyright infringement and job losses. They also asked the witnesses to weigh in on what they saw as potential threats while seeking answers to what regulators and the companies themselves should do to mitigate them.

The hearing also showed how lawmakers have been experimenting on their own to understand the tech and its risks. Sen. Marsha Blackburn said she used ChatGPT to write lyrics for a song in the style of country musician Garth Brooks and asked about copyright issues such as who owns the rights to AI-generated material. Sen. Amy Klobuchar — who discovered ChatGPT generated a fake address when she asked for a polling location in Minnesota — also expressed concern about how AI will impact news organizations.

Altman said he thinks content creators and content owners “need to benefit from this technology,” adding that OpenAI is still talking with people about what the economic model will be.

“Unless you start compensating for everything from movies, books, but also news content, we’re going to lose any realistic content producers,” Klobuchar said. “Of course, there is an exemption for copyright and section 230, but I think asking little newspapers to go out and sue all the time just can’t be the answer. They’re not going to be able to keep up.”

Altman said he thought users of ChatGPT so far have known that they need to verify what the chatbots create. However, he worries about what will happen as models improve and “users can have less and less of their own discriminating thought process around it.”

“I’m excited for a world where companies publish with the models’ information about how they behave, where the inaccuracies are and independent agencies or companies provide that as well,” Altman said.

Another key topic of discussion was transparency around AI models. Lawmakers asked whether AI-generated content should come with “nutrition labels” or scorecards, created by independent agencies, that explain whether or not the content can be trusted. However, Marcus said some tools for regulating AI don’t even exist yet.

What regulation might look like is still very much unclear. Marcus suggested there should be a U.S. government agency focused full-time on regulating AI, as well as an international agency to govern the technology on a global level. Altman noted there’s already precedent for such oversight, pointing to the International Atomic Energy Agency for nuclear power.

At one point in the hearing, Sen. John Kennedy asked the three AI experts for examples of laws they’d enact if “king or queen for a day.” Montgomery said she’d focus on rules for AI in various contexts. Marcus suggested a safety review process like the Food and Drug Administration’s. Altman mentioned that an independent agency could conduct audits and issue licenses for AI companies.

During the hearing, advertising only came up a handful of times, but it’s something that experts expect will become more of an issue over time. According to NYU’s Marcus, hyper-targeted advertising that uses generative AI is “definitely going to come” — possibly through open-source AI models developed by others beyond OpenAI.

Altman said OpenAI’s decision not to rely on advertising revenue for its business model is why it doesn’t have to try to “get people to use it more and more.” However, when Sen. Cory Booker asked if OpenAI would ever consider ads, Altman said, “I wouldn’t say never.”

“There may be people that we want to offer services to and there’s no other model that works,” Altman said. “But I really like having a subscription-based model.”

Lawmakers also said they want to make sure they don’t repeat past mistakes such as waiting too long to regulate major social networks. For example, Sen. Alex Padilla mentioned how companies have underinvested in content moderation for non-English languages. Others, including Sen. Jon Ossoff, said they want to make sure kids are protected from the technology.

“What we’ve seen repeatedly is that companies whose revenues depend upon the volume of use, screen time, the intensity of use, design these systems in order to maximize the engagement of all users,” Ossoff said. “Including children, with worse results in many cases. And what I would humbly advise you is that you get way ahead of this issue…Others on this subcommittee and I will look very harshly on the deployment of technology that harms children.”

