WTF is Section 230?

This article is a WTF explainer, in which we break down media and marketing’s most confusing terms.

From Supreme Court litigation to federal legislation, a nearly 30-year-old law is increasingly in the spotlight: Section 230.

Passed by Congress in 1996 as part of the Communications Decency Act (CDA), the law has been a linchpin for protecting online platforms from legal challenges. While the CDA aimed to prevent minors from accessing explicit content, Section 230 created a framework for protecting companies like Google and Facebook from being sued over what people post.

Amid growing concern about user-generated content, Section 230 has found itself increasingly under a microscope. In recent years, U.S. lawmakers on both sides of the aisle have introduced legislation to amend Section 230 to curb misinformation and other harmful content, including content produced by generative AI. And just last month, the U.S. Supreme Court heard oral arguments about whether Florida and Texas should be allowed to limit how tech companies moderate user-generated content.

Despite calls for change, some experts say changing Section 230 could lead to a deluge of bad-faith lawsuits against tech companies and online media more broadly. Others say forcing platforms to allow all content without consequences could create a more hazardous environment for both users and advertisers.

So what exactly is Section 230?

While Section 230 is both controversial and complex, many experts point to a 26-word passage that distills the legalese down to its essence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” And yet a key ambiguity remains: Who should be considered a publisher, and who merely a distributor?

There are two key parts to Section 230. The first is a liability provision that gives companies broad immunity from legal complaints related to content on their platforms. The second provides immunity for companies when they take action against content deemed “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”

As the internet grew, online platforms were tasked with parsing through massive amounts of content and deciding what to censor or allow. That required platforms to take a heavier hand in making sure content was both safe and relevant — something that became increasingly important as billions of global users fueled an explosion of user-generated content.

“There’s an argument to be made that these companies are media companies [and] their algorithms are editors,” said Robyn Caplan, a Duke University professor who researches social media and public policy. “And even if it’s being done through automated means, an algorithm isn’t without human input — they are putting in input to figure out what to prioritize with an algorithm.”

Why is Section 230 relevant right now?

The internet’s evolution has prompted many to ask whether Section 230 should be updated for modern times to address a myriad of issues related to online content. Some experts are weighing how Section 230 might intersect with various proposals for protecting kids from harmful content, while others warn rolling back Section 230 could also weaken digital privacy protections. Meanwhile, members of Congress have suggested ways to amend Section 230 to make platforms liable for misinformation about elections and public-health emergencies.

The proliferation of generative AI also poses new questions about whether Section 230 will protect companies from liability for AI-generated content — a question the Supreme Court has also raised. When the high court heard cases against Twitter and Google in February 2023, conservative Justice Neil Gorsuch wondered if algorithms have evolved beyond being protected as a “neutral tool.”

“In a post-algorithm world, artificial intelligence can generate some forms of content, even according to neutral rules,” Gorsuch said. “Artificial intelligence generates poetry, it generates polemics today. That would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”

Debates about Section 230 arise alongside growing interest — and growing concern — around generative AI. Should AI-generated content also be immune from litigation? Some experts argue users are liable based on their prompts. Others say the AI tool should be accountable since it has more control over the content.

“It should be noted that not all GenAI products function the same way, and similar to the legal analysis today under Section 230 it would be a fact-driven determination based on the application and functionality of the GenAI tool,” said Monique Bhargava, a partner at the law firm Reed Smith.

Who wants to change Section 230?

Updating Section 230 is a bipartisan issue, but answers vary when it comes to finding the right approach. Along with various efforts in Congress, President Joe Biden and former President Donald Trump have both called for changes.

Last year, U.S. Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced legislation to strip Section 230 immunity when civil and criminal lawsuits relate to AI-generated content. Another was the Democrat-backed “SAFE TECH Act,” which proposed to address cyber-stalking, discrimination and online harassment while removing Section 230 protections for ads and paid content.

Who wants to leave it in place?

Tech giants — along with various digital and civil-rights groups — are among those advocating against changing Section 230. Some warn updates could harm online privacy, curb online content from LGBTQ users, and harm free speech in other ways.

While TikTok, Reddit and other major platforms depend on user-generated content to thrive, tech companies say the massive amount of content requires them to be able to make decisions about curation and moderation. Experts say social networks understand the importance of maintaining safe platforms, but they also note the difficulty of keeping the internet clear of harmful content. Melinda Sebastian, a senior policy analyst at Data & Society, said social media users are in a sense “renting space” when they post comments or other content.

“Especially if they have marketed themselves to be a place for children, they don’t want people to have the experience of a horrible or dangerous or unsafe place to be,” said Sebastian, who researches ethics and tech. “It’s in their market interest to be moderated and to be a healthier, cleaner space. Most of them want to have the ability to do that.”

Also in favor of upholding Section 230 are its co-authors, U.S. Sen. Ron Wyden (D-Ore.) and former U.S. Rep. Christopher Cox (R-Calif.), who expressed their concerns last year in a submission to the Supreme Court.

“The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike,” Wyden and Cox wrote. “Given the enormous volume of content created by [i]nternet users today, Section 230’s protection is even more important now than when the statute was enacted.”

The origins of Section 230

Section 230 stemmed from two separate lawsuits filed against web 1.0-era online platforms Prodigy and CompuServe in the first half of the 1990s. Although both were sued over defamatory content posted to online bulletin boards hosted on the companies’ servers, the cases reached different outcomes. In 1991, a New York court ruled in favor of CompuServe, determining it was a distributor rather than a publisher and therefore not liable for the content. Four years later, the New York Supreme Court ruled against Prodigy. There was another key difference: CompuServe didn’t moderate its content, but Prodigy tried.

“What they didn’t want was to disincentivize these companies from moderating what was on their platforms,” said Duke University’s Caplan. “And so what they did was include this good-faith provision that allowed companies the ability to moderate content as they see fit.”

One of the first tests for Section 230 was a 1997 case about an ad on AOL. In Zeran v. America Online Inc., Kenneth Zeran claimed defamation after an anonymous user posted Zeran’s name and number in an AOL bulletin board ad for t-shirts celebrating the Oklahoma City bombing. After Zeran received harassing and threatening calls, AOL removed the ad upon request, but a similar ad took its place. The case made its way to the U.S. Court of Appeals for the Fourth Circuit, which ruled in favor of AOL, holding that Section 230 barred treating the company as the publisher of the posts — and the Supreme Court declined to hear Zeran’s appeal.

The stakes

Internet darlings past and present have cited Section 230 as a shield against numerous lawsuits. According to a list compiled by the Electronic Frontier Foundation, companies that used Section 230 immunity to win cases include eBay, MySpace, Yahoo, Google, Village Voice and Craigslist. Just last year, Twitter and Google both cited Section 230 in their arguments in front of the Supreme Court in a pair of cases addressing whether the companies should be liable for terrorist content on their platforms.

Any changes to Section 230 could have major implications for online platforms, content providers and advertisers. In the past year, various companies have noted that changes to Section 230 could have an impact on their business. Some of those include Meta, Roblox, Vimeo, Zoom, Microsoft, Coursera, Roku, Snap and Squarespace. Section 230 also came up in regulatory filings by companies seeking potential IPOs, including Reddit and Rumble.

“The impact of Section 230 on advertising is in general similar to the impact on other kinds of third-party content,” said Scott Wilkens, senior counsel at the Knight First Amendment Institute at Columbia University. “However, I think it’s important that the test that the Supreme Court adopts — if it even rules on the merits here, just as the test that the lower courts, the courts of appeals, have adopted — leaves room for discrimination claims based on, for example, the way that ads are delivered to users.”

What happens next?

It’s too soon to know what the outcome of the various debates might be. Various legislative efforts in Congress have failed to pass and a ruling from the Supreme Court could take months. There’s also a chance the court might choose not to rule on the case at all — just like it declined to reconsider Section 230 in last year’s cases against Twitter and Google. Many experts say the court’s questions during last month’s oral argument suggest it will be wary of delving too deep into the debate — at least for now.

“Many of [the Supreme Court justices] seemed very concerned about how fast technology moves here,” Wilkens said. “And how what they say could have an impact on the sort of development of the internet and the development of the technologies that power the internet — how we use it, and how it might impact our ability for free speech.”

Because innovation usually outpaces legislation, lawmakers are often left trying to capture the spirit of what they want to promote or prevent, according to Sebastian. In effect, they’re trying to guess how an industry will evolve next.

“No one predicted words like ‘algorithmic decision-making’ in 1996,” she said. “They weren’t able to predict what that would look like.”

