Nate Carter is managing director of eEffective, a digital trading desk.
The other week I found out that my algorithm is a racist.
Don’t get me wrong, it wasn’t birthed this way. In fact, we can be sure that in this case the racism is a product of nurture, not nature. You see, I was running two creative sets. Both were pictures of children, their mere image beckoning the web browser to click on them. Click on them people did. The problem is that, over time, they clicked on one creative more than the other, and when they converted on the landing page, they converted on that same creative with higher frequency. Doing what it was designed to do, my algorithm jumped in, optimizing the campaign to the better-performing creative: the one with the white child, not the black child.
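To make the mechanism concrete, here is a minimal sketch of the kind of feedback loop described above. This is not eEffective's actual system; the creative names, conversion rates and re-allocation rule are hypothetical. It only illustrates how a greedy optimizer chasing the higher observed conversion rate will steadily shift traffic toward one creative, turning a small initial gap into near-total exclusion of the other.

```python
import random

# Hypothetical conversion rates per creative. The small gap is the only
# "signal" the optimizer sees -- it has no notion of why one converts more.
TRUE_CONVERSION_RATE = {"creative_white_child": 0.030, "creative_black_child": 0.027}

# Start by splitting traffic evenly between the two creatives.
impression_share = {name: 0.5 for name in TRUE_CONVERSION_RATE}
impressions = {name: 0 for name in TRUE_CONVERSION_RATE}
conversions = {name: 0 for name in TRUE_CONVERSION_RATE}

for day in range(30):
    # Serve 10,000 impressions per day according to the current share.
    for name, share in impression_share.items():
        served = int(10_000 * share)
        impressions[name] += served
        conversions[name] += sum(
            random.random() < TRUE_CONVERSION_RATE[name] for _ in range(served)
        )

    # Greedy re-allocation: shift share toward the creative with the higher
    # observed conversion rate. This is the feedback loop -- the "winner"
    # gets more traffic, which entrenches its lead.
    observed_rate = {
        name: conversions[name] / impressions[name] if impressions[name] else 0.0
        for name in TRUE_CONVERSION_RATE
    }
    leader = max(observed_rate, key=observed_rate.get)
    for name in impression_share:
        step = 0.05 if name == leader else -0.05
        impression_share[name] = min(1.0, max(0.0, impression_share[name] + step))

# After a month, nearly all impressions go to a single creative.
print(impression_share)
```

Nothing in that loop is malicious; it is simply doing what it was told. That is the point of what follows.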
An awkward moment arose. What do we do? After all, this is a results business, and the Caucasian creative was bringing in the goods. Still, something didn’t feel quite right. It also made me wonder: are we racist? Had our racism poisoned my algorithm and turned it into a monster?
These were difficult ethical questions. On the surface, it appeared that I might have uncovered statistical proof of underlying racism. But what if the motives of the audience clicking were less devious? What if the creative with the Caucasian child was simply more appealing, without regard to skin color? Then there was the question of what to do next. Do I reprogram my algorithm? Do we take the learnings and run with the better-performing creative? What are the ethical ramifications of the latter?
Overall, it was a healthy conversation to have. It also showed that in an age when it is easy to let the machine make all of the decisions, there are things worth debating, considering and pondering that go beyond simple numerical analysis. You see, there is a danger that our algorithms can end up racist or bigoted, for they are, by their very function, prejudiced. If we allow them to optimize unencumbered, they become a reflection of us, all of our best and all of our worst.
As we continue to make strides in the customization and individualization of our messaging, it is important that we look at what we are telling people, give clients insight into campaign bias and consider the ethical ramifications.