How publishers like The Marshall Project and The Markup are testing generative AI in their newsrooms
Publishers including The Marshall Project and The Markup shared how their reporters are using generative artificial intelligence in their reporting processes, after some failed tests along the way.
The presentations were held at this year’s four-day Online News Association conference, which took place in Philadelphia from Aug. 23-26. The event had more than half a dozen sessions dedicated to the emerging technology.
Andrew Rodriguez Calderón, a computational journalist at The Marshall Project, and Mark Hansen, a professor at the Columbia School of Journalism, outlined ways they experimented with ChatGPT for journalism — and how they had to tweak their prompts to get what they wanted from OpenAI’s generative AI chatbot tool.
Calderón tried to use ChatGPT to generate summaries of banned book policies in different states, to save time on manually extracting that information from reporters’ notes. He asked ChatGPT to create summaries of those notes, which resulted in lackluster paragraphs. So his team iterated on the ChatGPT prompts to create descriptions with subheads of those notes, and then asked ChatGPT to group the relevant parts of the policies under those specific subheads. Two people fact-checked those descriptions.
Calderón said he believed this process saved him time on the cumbersome task of manually extracting information from those notes, letting him focus on fact-checking and formatting the information instead. He stressed the importance of documenting ChatGPT prompts in a log of which iterations worked best, sometimes referred to as a “prompt library,” so they can serve as templates for future projects.
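The Marshall Project hasn’t published its tooling, but the “prompt library” idea is straightforward to sketch. Below is a minimal, hypothetical version: each prompt iteration is appended to a JSON file along with notes on what worked, so a team can later reuse the best-performing version. All names here (`PromptRecord`, `log_prompt`) are illustrative assumptions, not anything described at the panel.

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class PromptRecord:
    project: str   # e.g. "banned-book-policies"
    version: int   # iteration number
    prompt: str    # the exact text sent to the chatbot
    notes: str     # what worked or failed in this iteration

def log_prompt(library_path: Path, record: PromptRecord) -> list[dict]:
    """Append one prompt iteration to a JSON 'prompt library' file
    and return the full history for that file."""
    entries = json.loads(library_path.read_text()) if library_path.exists() else []
    entries.append(asdict(record))
    library_path.write_text(json.dumps(entries, indent=2))
    return entries
```

A reporter would log each attempt as they refine it, e.g. version 1 (“summarize these notes”, too vague) and version 2 (“group the policies under these subheads”, usable), then skim the notes field when starting the next project.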
Columbia School of Journalism’s Hansen used ChatGPT to extract figures from the daily narrative paragraphs New York state published on monkeypox cases, in order to find trends and spikes. It took some tweaking to get ChatGPT to understand he was looking for the biggest changes in the data, and then to help create a template for a story on those findings.
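Hansen’s actual prompts aren’t reproduced in the session, but the underlying task — pull a number out of each day’s paragraph, then flag the largest day-over-day jump — is also doable deterministically, which is one way to fact-check what a chatbot returns. A rough sketch, with invented sample text and a regex that assumes the case count is the first comma-grouped integer in each update:

```python
import re

def extract_count(paragraph: str) -> int:
    """Pull the first integer (with optional thousands commas) from a
    daily update. Real reports would need more careful parsing."""
    m = re.search(r"\b(\d{1,3}(?:,\d{3})+|\d+)\b", paragraph)
    if m is None:
        raise ValueError("no number found in: " + paragraph)
    return int(m.group(1).replace(",", ""))

def biggest_change(paragraphs: list[str]) -> tuple[int, int]:
    """Return (day_index, delta) for the largest day-over-day change,
    where day_index is the position of the later day."""
    counts = [extract_count(p) for p in paragraphs]
    deltas = [counts[i] - counts[i - 1] for i in range(1, len(counts))]
    i = max(range(len(deltas)), key=lambda j: abs(deltas[j]))
    return i + 1, deltas[i]
```

Running `biggest_change` over a week of updates points a reporter at the day worth writing about; the extracted counts can also be compared against whatever the chatbot produced.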
“These are examples of how you have to get quite specific and granular to figure out which tasks [ChatGPT] is actually usable for and saves you time,” said Gideon Lichfield, former editor in chief of Wired, who moderated the panel. “There is effectively a whole programming language and a programming culture emerging around GPT. But unlike traditional coding languages, it’s imprecise. What makes a prompt more likely to lead to reliable results is a bit of an art.”
Sisi Wei, editor in chief at The Markup, said her newsroom’s policy is that journalists are not allowed to input unpublished drafts into ChatGPT, out of concern that the information could be fed into the large language model without any control over where it goes or how it’s used.
But Wei does input published headlines into ChatGPT to see if it can generate better alternatives. So far, it’s been “affirming,” she said, because all of the headlines it’s generated have been worse than the published ones.