What The Guardian has learned from chatbots

For the past two months, the Guardian has been testing a Facebook Messenger bot, “Sous-chef,” that suggests recipes based on what people have in their fridges. Now, it’s applying those lessons to its main news bot, which launched a week ago.

It’s too early to get data on how many people are actively using the news bot, though all of the Guardian’s 6.2 million Facebook followers have access to it.

We spoke to Martin Belam, the Guardian’s social and new formats editor, and Chris Wilk, group product manager for off-platform, about what they’ve learned. Here are the takeaways:

Keep it simple
The bot has a simple setup: to get started, users enter specific ingredients like “salmon,” types of cuisine, dietary requirements or specific dishes. The bot responds with recipes from the Guardian’s archive and witty messages like: “Did I crack it for you?” or “Ok, results are on the boil.”

One sticking point with bots is that they aren’t sophisticated enough to handle natural language and informal replies like “yeh” and “yum.” So the Guardian switched to Wit.ai, a natural-language processing tool that can be trained to handle more types of answers.
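For a sense of what that hand-off looks like in practice, here is a minimal sketch of a bot passing a free-text Messenger reply to Wit.ai’s message endpoint and pulling out whatever the trained model recognized. The token handling, function names and response parsing are illustrative assumptions, not the Guardian’s own code.

```python
# Sketch: send raw user text ("yeh", "salmon pls") to Wit.ai for interpretation.
# Assumes a Wit.ai app has been trained with the relevant intents/entities and
# that a server access token is available in the environment.
import os
import requests

WIT_TOKEN = os.environ["WIT_AI_TOKEN"]  # server access token from the Wit.ai app settings


def interpret(message: str) -> dict:
    """Call Wit.ai's /message endpoint and return its parsed JSON response."""
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": message},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()


def first_entity_value(wit_response: dict):
    """Return the first entity value the model found, if any (e.g. an ingredient)."""
    for values in wit_response.get("entities", {}).values():
        if values:
            return values[0].get("value")
    return None
```

The point is less the specific calls than the pattern: the bot itself stays dumb and forwards messy human input to a model that has been trained on the kinds of answers real users actually type.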

A lot of users responded as they would to a human, and when they got non-human responses, they’d stop using it, said Wilk. So the Guardian went in the opposite direction with its news bot and aimed for utter simplicity. The lesson, according to Wilk, was: “Don’t build people’s expectations too much of what’s possible, just keep it simple.”

Alerts based on time rather than breaking news
A number of publishers, including CNN, have sent out bot alerts based on what’s breaking. The Guardian chose instead to send alerts based on time of day, to see whether sending them at the same time every day would help make the bot habit-forming. Users can choose whether they want news updates sent at 6, 7 or 8 a.m.

Once the time is chosen, the bot sends out the five main stories leading the Guardian’s home page. The bot is only being tested in the U.K., but if a user flies to a different time zone, the alerts adjust to the new local time.
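As a rough illustration of the “same local hour every day” idea, the sketch below computes the next send time from a user’s chosen hour and current time zone. The function and the way the time zone is stored are assumptions for the example, not a description of the Guardian’s system.

```python
# Sketch: schedule a daily alert at a fixed local hour, following the user's time zone.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo


def next_alert(chosen_hour: int, user_tz: str) -> datetime:
    """Return the next occurrence of the chosen hour (e.g. 6, 7 or 8) in the user's time zone."""
    tz = ZoneInfo(user_tz)
    now = datetime.now(tz)
    alert = now.replace(hour=chosen_hour, minute=0, second=0, microsecond=0)
    if alert <= now:
        alert += timedelta(days=1)  # today's slot has passed; schedule tomorrow's
    return alert


# A reader who picked 7 a.m. in London keeps getting 7 a.m. alerts after flying to New York,
# simply because the stored time zone is updated.
print(next_alert(7, "Europe/London"))
print(next_alert(7, "America/New_York"))
```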

Tone matters 
Publishers have to strike the right tone when creating products that live on other companies’ platforms, so users still know who’s bringing them the information. That balance is hard to get right with an automated bot: people can be turned off if the experience is too robot-like or too casual, so the Guardian opted for polite.

People still want bots to be bots
When testing Sous-chef internally, The Guardian wanted to see how people would react to very human-like responses. The tech isn’t there to do this yet, so the Guardian had in-house engineers and UX experts manually respond to questions posed by staffers who’d been recruited to test the bot. Their reactions showed that people may not be ready for bots that are too human-like. “You don’t want to be too formal and robotic in the responses because it can be boring and not engage people, but if you try and be too human people get freaked out more by that,” Wilk said.

