This article appears in the latest issue of Digiday magazine, a quarterly publication that is part of Digiday+.
Pro-Trump propaganda from Russian operatives spread through Facebook events, Twitter bots and Instagram memes. Nearly two years after the 2016 presidential election, we’ve come to know the downside of social networks all too well. But media scholar Jonathan Albright had been detecting the influence — good and bad — of these sites for years prior. Formerly of Harvard’s Berkman Klein Center and now at Columbia’s Tow Center for Digital Journalism, Albright has been using data to power his, and our, understanding of truth.
What’s your process?
It depends on the question. When you're trying to promote something, or to disrupt, you want to use each platform, service or product as it's intended, to reach the target audience and the right kind of people. If it's a large effort, there will be other components beyond just Facebook or just Twitter. Some platforms tend to be ignored, things like Reddit or Pinterest, which are used to push out text embedded in images. I tend to think of things as a larger ecosystem.
We’ve had the leaders of tech platforms speak in front of Congress. What did they miss?
That’s a difficult one to answer. [Zuckerberg] has testified as a spokesperson, trying to shape opinions about the ways companies as big as Facebook deal with information, but he’s a celebrity, so he has certain characteristics that detract from the more technical matters. Zuckerberg tends to defer to his entrepreneurial focus. He’s built a whole company that provides jobs and is iconic in terms of Silicon Valley and technology, and that’s fine, but for this type of investigation, I think they need other people, specifically people who work a couple steps down.
How could the platforms better help you do your job?
Sometimes the best thing platforms can do is stay out of the way and not delete content. One of the biggest mistakes is that when things are found, there’s often no way for other people to learn from them or analyze them because they get wiped or taken down. It’s good for transparency efforts to share more data, but it’s always up to researchers to audit it. Even if platforms are sharing data, are they providing the right data? Often it can be incomplete, or only a small part of the larger equation.
What should marketers be doing in light of all of this?
It’s good that it’s now more commonly understood that people use bots and propaganda. Marketers have been [using bots] for quite some time. I feel kind of sorry for genuine people who want to promote a product. I don’t know exactly what I would recommend, other than to observe that their tactics and methods are being replicated for propaganda.
What story do you think journalists are missing in the fight against misinformation?
It’s imperative that journalists, not just business and technology journalists, use data, or better data sources, to inform stories. It’s hard for researchers to get that data, much less journalists, so partnerships among journalists, researchers and non-profits need to happen more often, especially with local news and smaller newspapers, which are some of the least resourced.
What’s next for you?
As soon as I get started on something, some new revelation happens. I’m stepping back and asking what this all means: how do we look at emerging media ecosystems, and at how news reaches us, so that we’re more critical and more aware of the information we receive? I’m trying to approach these problems less reactively, sharing data here and there, and look at them more from a cultural perspective.