Biden’s new executive order marks another inflection point for AI and data privacy
Describing artificial intelligence as the “most consequential technology of our time,” President Joe Biden signed a new executive order yesterday to guide responsible AI development and guard against risks.
A milestone for AI policy in the U.S., the new EO covers a wide array of issues with broad implications for government agencies, businesses and consumers. Using the Defense Production Act, the sweeping order addresses a range of concerns including data privacy, cyber attacks, disinformation and national security. It also directs federal agencies to explore AI’s potential implications for civil rights, health care, innovation, competition, education and the economy.
In a speech Monday afternoon, Biden listed a range of ways AI has the potential to both help and harm — and even joked about watching an AI deepfake of himself that made him think, “When the hell did I say that?”
“One thing is clear,” Biden said before signing the executive order. “To realize the promise of AI and avoid the risk, we need to govern this technology. There’s no other way around it, in my view. It must be governed.”
To promote its efforts, the White House also debuted a new website, AI.gov, that will provide information about government AI initiatives, offer educational resources and serve as a job portal for AI experts.
Biden also said the administration will continue pressuring Congress to pass bipartisan legislation to stop tech companies from collecting personal data from children, ban targeted ads directed at them and improve privacy protections for all Americans.
“We face a genuine inflection point in history,” Biden said. “One of those moments where the decisions we make in the very near term are going to set the course for the next decades … There’s no greater change that I can think of in my life that AI presents as a potential [for] exploring the universe, fighting climate change, ending cancer as we know it, and so much more.”
The EO’s influence extends only so far, as it can draw only on current laws and existing authorities. But within the scope the White House does have, tech experts from former White House administrations say the new order still covers a lot of ground.
“By definition, an executive order is somewhat limited,” said Samir Jain, vp of policy at the Center for Democracy & Technology. “There’s certainly room for Congress [to act] and it’s important for Congress to be able to act here.”
As an example, the executive order directs the Department of Commerce to develop guidance for content authentication — such as watermarking for AI-generated content — but it can’t require companies to adhere to any particular standards. When it comes to data privacy, the EO aims to evaluate and improve standards for how government agencies collect and use data — including data obtained from data brokers — but it can’t force companies to do anything differently.
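Watermarking guidance of the kind Commerce is tasked with typically builds on statistical schemes: a generator secretly biases its output toward a keyed subset of tokens, and a detector checks whether text shows that bias. A minimal, illustrative sketch of the detection side, assuming a simple keyed "green list" scheme (the key, threshold and function names here are hypothetical, not anything the EO or Commerce specifies):

```python
import hashlib


def is_green(prev_tok: str, tok: str, key: str = "demo-key", ratio: float = 0.5) -> bool:
    """Keyed pseudorandom test: is `tok` on the 'green list' given its predecessor?

    The hash of (key, previous token, token) acts as a deterministic coin flip,
    so generator and detector agree on the partition without sharing a list.
    """
    h = hashlib.sha256(f"{key}:{prev_tok}:{tok}".encode()).hexdigest()
    return int(h, 16) / float(16 ** len(h)) < ratio


def green_fraction(tokens: list[str], key: str = "demo-key") -> float:
    """Fraction of tokens on the green list; watermarked text skews high."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(p, t, key) for p, t in pairs)
    return hits / len(pairs)


# Ordinary (unwatermarked) text should hover near the 0.5 baseline;
# text generated to favor green tokens would score well above it.
sample = "the quick brown fox jumps over the lazy dog".split()
score = green_fraction(sample)
```

A real deployment would add a significance test over many tokens rather than a raw fraction, and — as the order's critics note — none of this binds companies that simply decline to watermark.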
The full text of the AI order wasn’t published until Monday evening, but the framework has already been praised by AI and privacy experts, nonprofits, think tanks and members of Congress. Some see the scope of the plan as a chance for the U.S. to “lead by example.” Others say the White House is smart to use the power of the purse to advance new standards, which could then benefit the broader market beyond just government technology buyers.
The new order follows another executive order issued in February that addressed algorithmic discrimination. It also comes a year after the White House released its Blueprint for an AI Bill of Rights in October 2022, which state governments like California have recently affirmed as part of their own framework for future AI regulations.
AI regulations are something Americans might welcome. According to a recent survey conducted by Morning Consult, 61% of respondents said they think AI needs to be more highly regulated, 69% said tech companies aren’t developing AI responsibly, and only 32% trust AI companies. The survey also found 70% are concerned about AI’s impact on data privacy and 67% are worried about “foreign power using the tech against U.S. interests.”
The EO also pushes companies to improve transparency and testing when developing and deploying foundation AI models, including evaluating them for safety before deployment. However, experts note it will be challenging to evaluate and regulate large language models like those developed by OpenAI, Google, Microsoft and Meta. Their size and complexity make it impossible even for their own makers to understand all the nuances of how they work. This will require developing a “new notion for how to explain these models,” said W. Russell Neuman, professor of media technology at New York University.
“We’re dealing with a very loosely defined set of industries here,” said Neuman, who also published a new book about AI last month. “It’s almost impossible to distinguish when any computational system does anything that looks like it made a decision to determine whether that is ‘AI’ or not. So it’s hard to regulate something you can’t define.”
Along with addressing various concerns, there’s also a need to highlight more of the potential benefits of AI, said Beth Simone Noveck, director of Northeastern University’s Burnes Center for Social Change. Noveck — who was the country’s first deputy chief technology officer during the Obama administration — said the EO is “laudable” and “fantastic” in terms of its sweeping nature, but “really misses the boat” with a tone focused more on risks than on AI’s potential for innovation. However, she added the White House’s approach makes “absolute sense” in the current climate. She also noted the EO doesn’t address issues such as the impact AI could have on democracy and citizen engagement — key issues ahead of the 2024 election.
“It’s reactive instead of proactive,” Noveck said. “Instead of focusing squarely on what we can do to improve the positive uses of AI, to talk about where the policy is, where our research dollars are going, where our investments are — [all things] that are really focused on encouraging positive uses of AI.”