Why publishers are questioning the effectiveness of blocking AI web crawlers


A number of publishers — including Bloomberg and The New York Times — were quick to block OpenAI’s web crawler from accessing their sites, to prevent their content from being scraped and used to train the artificial intelligence company’s large language models (LLMs). But whether this tactic is actually effective is debatable, according to conversations with five publishing executives.

“It’s a symbolic gesture,” said a senior tech executive at a media company, who requested anonymity to speak freely.

In August, OpenAI announced that publishers can now block its GPTBot web crawler from accessing their web pages’ content. Since then, 26 of the 100 most-visited sites (and 242 of the top 1,000 sites) have done so, according to Originality.ai.
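The block itself is a standard robots.txt rule. Per OpenAI’s documentation, disallowing GPTBot site-wide looks like this (served from the site root, e.g. example.com/robots.txt):

```text
# Tell OpenAI's GPTBot crawler to stay off the entire site
User-agent: GPTBot
Disallow: /
```

Like all robots.txt directives, this is a request rather than a technical barrier — it relies on the crawler voluntarily honoring it, which is part of why some execs view the measure as symbolic.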

However, publishers’ content distribution models might make the protective strategy moot. One publishing exec told Digiday their company publishes on eight different syndication apps and websites. Because the content is already so discoverable, it feels like the protective measure to block OpenAI’s web crawler was a futile effort, they said.

“I think it was kind of a wasted effort on my part. It’s an inevitability that this stuff is ingested and crawled and learned from,” the exec said during a closed-door session at the Digiday Publishing Summit in Key Biscayne, Fla., last week.

Publishers have struggled to stop generative AI tools like OpenAI’s chatbot ChatGPT from bypassing their paywalls and scraping their content to train the LLMs underneath. Though publishers can now block OpenAI’s crawler, some publishing execs aren’t convinced it’s enough to protect their IP.

“It’s a long-term problem, and there isn’t a short-term solution,” said Matt Rogerson, director of public policy at Guardian Media Group. “It’s a sign that publishers are taking back a bit more control and are going to start demanding more control over other folks that are scraping for different purposes.”

Google and Microsoft are listening

OpenAI is just one of the tech companies using web crawlers to feed their LLMs for AI tools and systems. Google and Microsoft’s web crawlers are essential for publishers’ content to get indexed and surfaced in search results on Google Search and Bing — but those crawlers also scrape content to train those tech companies’ LLMs and AI chatbots. The Guardian’s Rogerson called these “bundled scrapers.”

“They treat it all as one big search product,” the first tech exec said. “They’re like, ‘No, you don’t get the granularity choice. We give you the opportunity to opt out.’ But obviously, we don’t want to opt out of all web crawling.”

Those tech companies are listening to publishers’ concerns. In July, Google announced it was exploring alternatives to its robots.txt protocol — the file that tells search engine crawlers which URLs they can access — to give publishers more control over how their IP is used in different contexts. And just Thursday, Google released a new tool called Google-Extended that gives website owners the ability to opt out of having their sites crawled for data used to train Google’s AI systems and its generative AI chatbot Bard. (The execs interviewed for this story spoke to Digiday before that announcement.)
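Google-Extended works through the same robots.txt mechanism. It is a control token rather than a separate crawler, so disallowing it opts a site out of AI training use without affecting how Googlebot indexes pages for Search:

```text
# Opt out of content being used for Google's AI models (e.g. Bard),
# while leaving normal Googlebot search indexing untouched
User-agent: Google-Extended
Disallow: /
```

This separation is exactly the granularity the “bundled scrapers” complaint asks for: search indexing and AI training can be controlled independently.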

Microsoft has chosen to go another route. Last week, the company announced that publishers can add a piece of code to their web pages to communicate that the content should not be used for LLMs (a bit like a copyright tag). Microsoft is giving website owners two options: a “NOCACHE” tag that allows only titles, snippets and URLs to appear in the Bing chatbot or to train its AI models, or a “NOARCHIVE” tag, which prevents any usage in its chatbot or AI training.
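Microsoft’s approach uses page-level markup rather than robots.txt. A sketch of what that looks like, assuming the tags are applied via the standard robots meta tag as described in Microsoft’s announcement:

```html
<!-- Allow only title, snippet and URL in Bing Chat / AI training -->
<meta name="robots" content="nocache">

<!-- Or: prevent any use of this page in Bing Chat or AI training -->
<meta name="robots" content="noarchive">
```

Because the tag lives in each page’s HTML, publishers can scope the restriction per article rather than site-wide.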

“They are signaling that they will add more granularity,” Rogerson said. “We’re examining that in detail.”

The New York Times took matters into its own hands, adding language to its Terms of Service last month prohibiting the use of its content to train machine learning or AI systems — giving the Times grounds to pursue legal action against companies using its data.

A negotiation tactic

So why are publishers blocking OpenAI’s web crawler at all, if the move doesn’t ensure protection of their content?

Execs told Digiday it’s a negotiation tactic.

“Putting the blocker in place is at least one… starting point for the inevitable negotiations that we’ll have as publishers with OpenAI and other companies. We’ll be able to have that as a point of leverage and say, we’ll take it off if we can reach a deal or an agreement,” said the publishing exec at the Digiday Publishing Summit.

Publishers’ protective actions are creating a “market for licenses for data mining,” with a potential for compensation for sharing their data, Rogerson said. OpenAI struck a licensing partnership with the Associated Press in July, wherein OpenAI is paying to license part of the AP’s text archive to train its models.

But not all publishers feel like they’re powerful enough to negotiate the use of their content with these large tech companies.

“We’re not big enough to flex our muscles and block it,” said a second publishing executive who asked to remain anonymous. The exec was also unsure if blocking OpenAI’s web crawler would affect their use of GPT, the AI technology ChatGPT is built on that OpenAI has made available for outside developers to license.

“If you start blocking the crawler, do they cut you off from using the tool? Does the tool stop working as well? It’s really unclear,” the publishing exec said. “There probably is a way to eventually figure it out, but not without a ton of detective work,” they added.
