by Tom Shields, chief strategy officer, AppNexus
Two weeks ago, Mark Zuckerberg went before Congress to explain himself. The immediate catalyst for the hearings was recent news that a political consulting firm called Cambridge Analytica harvested private data from as many as 87 million Facebook users. But make no mistake – these hearings are the culmination of a long build-up of anger. Facebook has been facing intense scrutiny for a number of controversies: its enablement of the spread of fake news, degradation of political discourse, and role in the decline of journalism.
I would describe Congress’ performance in the hearings as hit or miss. There were cringe-worthy questions – one senator asked how Facebook makes money on a service it offers users for free, to which Zuckerberg replied, “Senator, we run ads.” Other, more promising moments included lawmakers asking about Facebook’s specific policies around user data and Zuckerberg’s opinion on regulation of the company. Those are the kinds of questions Americans need answered.
But as the chief strategy officer of an advertising technology company and someone who understands how user data translates to ad targeting, I think that these hearings have only scratched the surface. Below are a few of the questions I wish Congress had asked Zuckerberg.
- Why does Facebook need to store our personal data forever?
The New York Times’ Brian Chen wrote an eye-opening article last week about what he found when he downloaded his personal file from Facebook – that is, all the data the company has on him. What he saw was distressing. Facebook was holding onto a list of everyone Chen had ever unfriended, the contact information for the 700+ people in his phonebook, metadata (session length, location, device) from every time he’d ever logged on to Facebook, and much more.
I’m not against collecting some personal data from internet users – far from it. Personal data is what enables us to serve relevant ads rather than spam users with a bunch of random stuff they’re not interested in.
But there’s no reason to store data from browsing sessions that happened ten years ago. Again, with online ads, the goal is relevancy. If you looked at the product page for a fridge on a retailer’s website last week, serving an ad for that fridge to you today could conceivably result in a purchase. But pages you looked at in 2009? Not so much. There’s no logical basis for holding onto personal data for that long and leaving it vulnerable to companies like Cambridge Analytica. That’s why AppNexus deletes most of a user’s personal data after 90 days.
I’d love to hear Zuckerberg’s explanation for why Facebook doesn’t have a similar policy. If he doesn’t have a good rationale, then I’d say it’s time for him to enact one.
- What responsibility does Facebook take for the damage it does to its users?
Facebook has more than 2 billion active users, who spend about 35 minutes a day on the platform. Facebook is one of the most popular products ever built.
However, there’s reason to believe that too much Facebook usage is harmful to users’ mental health. Research has linked Facebook usage to increased depression and feelings of alienation among teenagers. A study summarized in the Harvard Business Review found that liking others’ content and clicking links on Facebook strongly correlates with a reduction in users’ self-reported physical and mental health. Facebook itself has admitted that these claims have validity. (For its part, the company cites studies showing that people who use Facebook to actively interact with others improve their mental wellbeing, while those who use the platform to passively consume content experience the opposite.)
There’s also cause for concern when you move beyond individuals and look at Facebook’s effect on society at large. It’s no secret that partisan “fake news” articles – some of which have been placed by foreign agents for propaganda purposes – were shared millions of times on Facebook in the lead-up to the 2016 U.S. election. More disturbingly, Facebook also appears to have been exploited in Myanmar in the country’s ongoing genocide of the Rohingya people. Facebook is the primary destination for Myanmar’s internet users, and activists say the platform facilitated the spread of radical, anti-Rohingya content.
Facebook’s propensity to spread inflammatory, extreme content seems to be baked into the platform itself. Above all else, Facebook is designed to prompt users to click and share content. Nothing gets people to hit the share button more than content that evokes an emotional response, and research shows that negative emotions like anger move the needle most.
A recently leaked memo from inside Facebook suggests the company recognized these concerns but plowed on anyway to fulfill its mission of connecting people. I don’t think that Zuckerberg ever intended for Facebook to damage users’ mental health or help spread destructively false news stories. But I’d like to ask him if he feels the company is nonetheless responsible for these negative effects, and what Facebook can do to mitigate them.
- Does Facebook value high-quality journalism?
When Facebook first launched, many saw it as a potential savior for journalism. Finally, after years of uncertainty as digital replaced print, publishers had a new audience acquisition channel where their content could spread faster than ever before.
Fast-forward to today, and journalistic publishers feel very differently. Facebook now accounts for such a significant percentage of publishers’ traffic that their businesses are at the mercy of the platform’s algorithms. Whatever Facebook decides News Feed should be surfacing, publishers need to supply. That’s one of the reasons we see so many sites flooding the internet with low-effort viral content, including fake news.
Facebook’s grip on journalism is only tightening. The platform now takes in over a fifth of U.S. digital advertising revenue and is getting a bigger slice of the pie every year. That leaves less money for the journalists and publishers who create the content we need to stay informed. That’s why we’ve seen so many media outlets – especially local ones – go out of business in the last couple of years.
The decline of journalism leaves our society less knowledgeable and more vulnerable to threats like fake news. I’d like to ask Zuckerberg if he agrees, and if he does, how he intends to remedy the role Facebook has played in that decline.
Beyond Facebook’s business model
All of these questions call attention to negative consequences that flow directly from Facebook’s business model. Facebook monetizes our attention; the more it gets, the more revenue it brings in. That’s why the platform is designed to be as engaging as possible, and to prioritize content we’ll click and share, regardless of whether that content is good for us. Facebook also monetizes the information it collects on us, which is why it holds onto our personal data indefinitely, occasionally letting it fall into the wrong hands.
I don’t think that Zuckerberg or anyone else at Facebook ever intended these negative consequences. Nevertheless, the company’s business model brought them into being, and we’re all paying for it. I hope that in future conversations with Zuckerberg, senators, representatives, journalists, and Facebook users will push him on what he intends to do about it.