Mark Zuckerberg’s Apology Tour

Illustration by Tom Bachtell

Last week, on a conference call with reporters, Mark Zuckerberg, the C.E.O. of Facebook, began, uncharacteristically, with an apology. “For the first decade, we really focussed on all the good that connecting people brings,” he said. “But it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse.” He added, “That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake.” Taken alone, any of the incidents he alluded to—the exploitation of Facebook data by the political consultancy Cambridge Analytica, Russian meddling in the 2016 election, the uptick in viral hoaxes and propaganda—might, eventually, have been forgiven. Taken together, though, they’ve caused a profound shift in public perception, leading people to wonder why they ever thought of social media as a force for good. This week, for the first time, Zuckerberg will testify before Congress about Facebook’s mistakes. The extent to which the public finds him credible, or at least sympathetic, will affect the company’s stock price, the velocity of the #DeleteFacebook movement, and, possibly, the company’s long-term survival.

Facebook is now the biggest social-media company—and advertising platform and data tracker—in the world, with more than two billion users. In 2004, when Zuckerberg built the company, and for years afterward, he was hailed as a behoodied innovator. His motto, “Move fast and break things,” was regarded as youthful insouciance. Anyone who expressed concern about the role of social media in our society, and particularly in our politics, was treated as a cut-rate Andy Rooney, too curmudgeonly to learn to stop worrying and love the selfies.

It’s now clear that the problem wasn’t the selfies; it was the business model. For years, tech critics warned, “You’re not the customer, you’re the product.” “We could make a ton of money if we monetized our customers,” Tim Cook, the C.E.O. of Apple, recently told the journalists Kara Swisher and Chris Hayes. His point was that Apple’s model—charging for goods and services—is healthier than that of Google and Facebook. Those companies give consumers free things, such as birthday reminders and quick bursts of quantifiable attention, in exchange for their private data, which digital marketers then use to sell them products, ideologies, or candidates.

For a long time, this trade-off, if people thought about it at all, apparently seemed worth it. Any potential harm seemed distant and abstract. Then came the Trump campaign, Brexit, a resurgence of far-right extremism across Europe and the United States, and the widespread inability to distinguish information from disinformation. Social media didn’t cause these developments, but it certainly facilitated them.

Cambridge Analytica’s executives claim to have converted Facebook data into “psychographic” profiles, which political propagandists then used to microtarget users, sending them ads tailored to their biases and anxieties. (Initially, the firm was said to have harvested fifty million profiles; last week, Facebook revised that number to eighty-seven million.) The executives may have inflated their power—depending on your biases and anxieties, they seem either like crafty Bond villains or like bumbling paper-pushers in an Armando Iannucci satire. Still, whether or not they could sway people’s moods, their beliefs, and, ultimately, their votes, Facebook surely can.

Since its inception, Facebook has delivered two contradictory sales pitches. To the public, it insisted that it is not an editor or a gatekeeper but merely an open platform, neutrally reflecting the world. But no platform is neutral; its algorithms must, by definition, prioritize some things over others. Facebook was designed to maximize attention, so its algorithms prioritize the posts that spur the most comments, clicks, and controversy, creating a feedback loop in which buzzy topics generate yet more buzz. (Time headline from June, 2015: “Donald Trump’s Presidential Announcement Sparks Huge Facebook Reaction.”) Meanwhile, Facebook’s pitch to advertisers sounded not unlike Cambridge Analytica’s: With our sophisticated tools, any advertiser can deliver any message to any microsegment of the market. Now that the market in question is the democratic marketplace of ideas, Facebook is again professing neutrality. But this time the public doesn’t seem to be buying it.

Two days after Trump was elected, Zuckerberg was asked whether Facebook had “distorted the way that people perceived the information during the course of the campaign.” He replied, “Voters make decisions based on their lived experience.” But online experience and lived experience become more inseparable every day. If what people see online is supposed to have no impact on what they do in the world, what is the point of social media? A decade ago, the upstart entrepreneurs of Silicon Valley promised to topple the gatekeepers in journalism, business, and politics. They have succeeded. Now, although they go to great lengths to deny it, the former upstarts have become gatekeepers themselves.

For almost a week after the Cambridge Analytica scandal broke, Zuckerberg remained silent, while his company lost nearly fifty billion dollars in stock value. Then he embarked on an apology tour, which included last week’s conference call. Alex Kantrowitz, of BuzzFeed, asked whether Facebook would consider making less profit in order to protect users’ privacy. Zuckerberg proceeded to answer a question that he hadn’t been asked, about ad relevance. If Kantrowitz had a follow-up, no one heard it—reporters’ phones were muted after their initial question. When Zuckerberg testifies before Congress, he won’t have the luxury of muting his interrogators.

If Zuckerberg wants to regain the public’s trust, he can start by dropping the pretense of neutrality. Facebook guides what billions of people see, hear, and know about the world. If this doesn’t make it a media company, then the distinction is semantic enough to be meaningless. In addition to apologizing and making reassuring noises about the sanctity of user privacy, Zuckerberg should make some clear commitments: to protect Facebook’s users from microtargeted propaganda; to use his algorithms to promote truth over reckless sensationalism; to prevent bad actors from using his tools to sow discord and bigotry. After more than a decade of moving fast and breaking things, it’s time to slow down and clean up the mess. ♦


By Andrew Marantz
