Why publishers are questioning the effectiveness of blocking AI web crawlers
A number of publishers — including Bloomberg and The New York Times — were quick to block OpenAI’s web crawler from accessing their sites, to protect their content from getting scraped and used to feed the artificial intelligence tech company’s large language models (LLMs). But whether this tactic is actually effective is debatable, according to conversations with five publishing executives.
“It’s a symbolic gesture,” said a senior tech executive at a media company, who requested anonymity to speak freely.
In August, OpenAI announced that publishers can now block its GPTBot web crawler from accessing their web pages’ content. Since then, 26 of the 100 most-visited sites (and 242 of the top 1,000 sites) have done so, according to Originality.ai.
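Mechanically, the block is an ordinary robots.txt rule. Per OpenAI’s documentation, disallowing GPTBot from an entire site takes two lines (narrower, directory-level rules are also possible):

```txt
# robots.txt: block OpenAI's training crawler site-wide
User-agent: GPTBot
Disallow: /
```

Compliance with robots.txt is voluntary on the crawler’s side, which is part of why some execs view the move as more symbolic than airtight.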
However, publishers’ content distribution models might make the protective strategy moot. One publishing exec told Digiday their company publishes on eight different syndication apps and websites. With the content already that widely distributed, blocking OpenAI’s web crawler felt like a futile effort, they said.
“I think it was kind of a wasted effort on my part. It’s an inevitability that this stuff is ingested and crawled and learned from,” the exec said during a closed-door session at the Digiday Publishing Summit in Key Biscayne, Fla. last week.
Publishers have struggled to prevent generative AI tools like OpenAI’s chatbot ChatGPT from bypassing their paywalls and scraping their content for LLM training. Though publishers can now block OpenAI’s crawler, some publishing execs aren’t convinced that’s enough to protect their IP.
“It’s a long-term problem, and there isn’t a short-term solution,” said Matt Rogerson, director of public policy at Guardian Media Group. “It’s a sign that publishers are taking back a bit more control and are going to start demanding more control over other folks that are scraping for different purposes.”
Google and Microsoft are listening
OpenAI is just one of the tech companies using web crawlers to feed their LLMs for AI tools and systems. Google and Microsoft’s web crawlers are essential for publishers’ content to get indexed and surfaced in search results on Google Search and Bing — but those crawlers also scrape content to train those tech companies’ LLMs and AI chatbots. The Guardian’s Rogerson called these “bundled scrapers.”
“They treat it all as one big search product,” the first tech exec said. “They’re like, ‘No, you don’t get the granularity choice. We give you the opportunity to opt out.’ But obviously, we don’t want to opt out of all web crawling.”
Those tech companies are listening to publishers’ concerns. In July, Google announced it was exploring alternatives to the robots.txt protocol — the file that tells search engine crawlers which URLs they can access — to give publishers more control over how their IP is used in different contexts. And just Thursday, Google released a new control called Google-Extended that lets website owners opt out of having their sites crawled for data used to train Google’s AI systems and its generative AI chatbot Bard. (The execs interviewed for this story spoke to Digiday before that announcement.)
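Google-Extended works the same way as the GPTBot block: it is a user agent token honored in robots.txt, so a site can keep Googlebot’s search indexing while opting out of AI training. A minimal sketch:

```txt
# Keep normal search crawling (this is the default anyway)
User-agent: Googlebot
Allow: /

# Opt out of content use for Bard and Vertex AI training
User-agent: Google-Extended
Disallow: /
```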
Microsoft has chosen to go another route. Last week, the company announced that publishers can add a tag to their web pages to signal that the content should not be used for LLMs (a bit like a copyright tag). Microsoft is giving website owners two options: a “NOCACHE” tag, which limits Bing’s chatbot to showing only titles, snippets and URLs and restricts AI training to that same content, or a “NOARCHIVE” tag, which prevents any use of the content in its chatbot or AI training.
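In practice these are page-level robots directives rather than scripts; assuming the standard robots meta tag syntax (check Bing’s documentation for the exact form, since an equivalent X-Robots-Tag HTTP header can also carry the values), the markup looks roughly like this:

```html
<!-- Let Bing's chatbot show only title, snippet and URL,
     and train only on that limited content -->
<meta name="robots" content="nocache">

<!-- Or: keep the page out of Bing's chatbot and AI training entirely -->
<meta name="robots" content="noarchive">
```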
“They are signaling that they will add more granularity,” Rogerson said. “We’re examining that in detail.”
The New York Times took matters into its own hands, adding language to its Terms of Service last month that prohibits the use of its content to train machine learning or AI systems and gives the Times grounds to pursue legal action against companies that use its data.
A negotiation tactic
So why are publishers blocking OpenAI’s web crawler at all, if the move doesn’t ensure protection of their content?
Execs told Digiday it’s a negotiation tactic.
“Putting the blocker in place is at least one… starting point for the inevitable negotiations that we’ll have as publishers with OpenAI and other companies. We’ll be able to have that as a point of leverage and say, we’ll take it off if we can reach a deal or an agreement,” said the publishing exec at the Digiday Publishing Summit.
Publishers’ protective actions are creating a “market for licenses for data mining,” with a potential for compensation for sharing their data, Rogerson said. OpenAI struck a licensing partnership with the Associated Press in July, wherein OpenAI is paying to license part of the AP’s text archive to train its models.
But not all publishers feel like they’re powerful enough to negotiate the use of their content with these large tech companies.
“We’re not big enough to flex our muscles and block it,” said a second publishing executive who asked to remain anonymous. The exec was also unsure whether blocking OpenAI’s web crawler would affect their company’s use of GPT, the AI technology underlying ChatGPT, which OpenAI licenses to outside developers.
“If you start blocking the crawler, do they cut you off from using the tool? Does the tool stop working as well? It’s really unclear,” the publishing exec said. “There probably is a way to eventually figure it out, but not without a ton of detective work,” they added.