by Adam Singolda, founder and CEO, Taboola
While the term AI can be found in virtually every investor’s deck and on every company’s web page, it’s actually quite rare to witness real artificial intelligence at work, because it’s a very complicated thing to do. There is a world of difference between machine learning (ML), deep learning (DL) and … BS.
That said, even true AI can go wrong. Recently, there was a trend on TikTok where people used the phrase “I had pasta tonight” not to talk about what they’d had for dinner, but as a code word to signal a suicidal call for help. It wasn’t TikTok’s fault that the algorithm didn’t catch the trend quickly enough to stop promoting these posts as if they were really about food, risking a particularly tone-deaf look for the platform. Artificial intelligence requires ample historical data in order to work; in computer science, this is known as “garbage in, garbage out.” It’s why AI can beat humans at chess or mahjong but could never have invented either game.
This is something Harvard professor Steven Pinker discussed last year when he referred to the “art of asking questions,” something that’s still reserved for humans. While AI will get better and better at computing things, it will likely never fall in love or ask a question out of curiosity.
When it comes to media and marketing, AI is really important, and as of now, it plays a big part in content moderation online. It decides what’s OK for us to see and what’s not, what’s harmful, what’s hateful, what’s fake, what gets boosted, what goes viral and what gets buried. But as we’ve seen from the big tech platforms over the past few years, and from the examples above, it has fundamental limitations and, even more than that, poses a fundamental question: Is AI the right tool to moderate content and ads, or do we need humans as well?
The stakes in play and the AI–human mix
AI is an incredible and revolutionary tool, probably as significant as the invention of electricity or the internet, and it will be a huge part of our lives forever. But there are two important things to know about AI.
- AI only works when there’s sufficient data to train the model. For example, AI failed to predict the spread and impact of COVID-19 because there was no existing data from which to model the scale of its actual impact. And when Face ID was first introduced as a way to unlock your iPhone, it didn’t account for people’s “morning face,” so the iPhone wouldn’t unlock. There was not enough data to suggest that people might look different when they first wake up than they do the rest of the day.
- Some mistakes are too big to bear. If Alexa made a mistake and suggested that a consumer buy coffee beans they don’t really want based on their behavior, it’s annoying, but not a big deal. If YouTube tagged a video as a pet video because it thought there were dogs in it, but there weren’t, it’s not a big deal. On the other hand, putting AI to use in more serious matters, such as how to respond to a health emergency or questions related to democracy, depression, racism and human rights, raises a bigger question: Is AI good enough? Are these matters we’d want an ethical human mind to consider as well?
Aside from mistakes on a global and societal scale, when it comes to serious matters of media and marketing, such as moderating content, publishers and platforms must recognize the limitations of humans as well. People get fatigued, whereas a computer has endless stamina, whether it’s reviewing 100 articles or 1,000. People have biases; they have good days and bad days, and so forth. So if the goal is a more human approach to moderating content, it’s important that content review teams are incredibly diverse and well supported.
Still, when it came to finally realizing that “eating pasta” was not about eating pasta but a code word for suicide, it was humans who caught it. When COVID-19 happened, humans saw it spreading, not machines. And when an image-recognition AI horrifically misidentified Black people as gorillas, it was humans who picked it up, not AI.
Together, humans and machines make better decisions
The future will over-index on machines that help people live better lives across many daily interactions. However, in serious matters, whether existential or editorial, there are human problems that require humans to solve them, with AI in a supporting role.
It is important for every tech platform with meaningful distribution to take responsibility for the content on its platform: while it addresses the limitations of human review with AI support, it must also compensate for the judgment calls AI may never be able to make.
For marketers, publishers and all stakeholders working to reach audiences with content, the only acceptable vote is to build a future of humans working with AI.