Milton Wolf Seminar on Media and Diplomacy

Forget “Too Big to Fail.” Facebook is Too Big to Succeed

By Nathalie Maréchal
July 7, 2018

The theme of this year’s Milton Wolf Seminar, “Public Diplomacy in Moments of Geopolitical Transformation,” engages with the increasingly turbulent information environment and the challenges this poses for the various actors (nation-states, but also private companies, interest groups, and civil society organizations) seeking to shape global narratives and conversations. Political communication today does not work as it did as recently as ten years ago, and it is safe to say that its rapid evolution will continue unabated for the foreseeable future, with troubling implications for democracy and human rights.

One of the key drivers of this shift is, of course, Facebook. The 14-year-old company was famously started in a Harvard dorm room as a tool for gauging students’ relative attractiveness, and is now a social media behemoth with over 2 billion active users, $40 billion (USD) in annual revenue (2017), and 20% of the global online advertising market. It has been used by pro-democracy movements and social movements dedicated to human rights, but also by regressive radicals of all stripes and by influence campaigns intent on shifting public opinion in liberal democracies toward conservative, xenophobic, and bigoted political parties whose leaders’ worldviews align with Vladimir Putin’s. In this essay, I argue that Facebook’s size and business model make it inherently unfixable: it has become too big to succeed.

Facebook has become the unavoidable platform for ordinary people, for news publishers, and for advertisers, even though misgivings about the company are growing. The current structure of global communication is such that quitting Facebook often comes with a high cost, and does next to nothing to improve the situation. Reversing our current descent into networked authoritarianism is going to take the hard, frustrating work of politics. It will also require putting the best interests of the world’s citizens ahead of corporate profits.

After years of treating the internet as somehow separate from the “real world,” and thus exempt from governance, governments and the public alike have a growing appetite for regulating Facebook. This has prompted the company to attempt to fix itself and stay ahead of regulation, but many of the measures being proposed offer only superficial fixes while exacerbating other harmful aspects of Facebook’s impact.

Take, for example, the company’s current efforts to screen political advertising, rooted in a desire to prevent bad actors from intentionally manipulating public discourse in ways that undermine democracy, one of many factors that have brought us Brexit, Trump, and a global resurgence of right-wing populism. But differentiating between issue-based political ads, opinion writing, and news reporting isn’t always straightforward, especially at scale. Facebook seems to have decided to err on the side of politicizing news content: its list of “national issues of public importance” includes the budget, civil rights, the economy, foreign policy, health, immigration, poverty, and values (you have to wonder what’s left to write about that actually matters). These “top-level issues” are “considered to require advertiser authorization and labeling for ads targeting the US,” making it harder for news outlets to advertise their stories and reach online audiences, and further blurring the line between news, opinion, and electioneering. At the moment, Facebook is focusing only on the U.S. context, but it plans to expand the scheme globally before long. If the company can’t get it right here, in the country it and its employees presumably know best, how is it going to fare elsewhere? Moreover, should private companies really be in this position?

These new rules affect advertisers (Facebook’s real customers) rather than ordinary users, who have long been subject to content moderation. But whether the company is policing its users or its customers, it stubbornly pleads apolitical neutrality by appealing to the wishes and sensibilities of an ill-defined “community.” This is surely linked, at least in part, to wanting to avoid regulation as a media organization with (depending on the jurisdiction) increased responsibility for content generated by both users and advertisers. But it is also dictated by the imperative to avoid alienating potential users, from whom data can be extracted, and potential advertisers, from whom profits flow. Under these conditions, it’s no wonder that Facebook’s content moderation guidelines include hair-splitting distinctions between white supremacy and white nationalism. Rather than having a single arbiter of discourse that is doomed to fail in its quest for neutrality, we need a real diversity of writers, editors, and publishers making their cases to the public, with the backbone to exile hate speech from the political mainstream. We need a true public sphere to replace the private salon that Facebook has become.

After 14 years of trying to be all things to all people while making as much money as possible, Facebook is collapsing under the weight of its own internal contradictions. As a result, the PR spin that Facebook has offered to different audiences over the years is internally incoherent. CEO Mark Zuckerberg infamously claimed that Facebook had no impact on the 2016 U.S. election, even though the Facebook Elections team had spent months telling campaigns that it had the power to hand them elections. Zuckerberg often claims that Silicon Valley, and therefore Facebook, is full of liberals, yet the company bends over backwards to appease conservative critics’ unfounded claims of “liberal bias.” For example, Facebook recently did away with its “Trending” news section after years of trying to assuage conservative critics who alleged that the feature had a liberal bias.

In other cases, the spin is not only incoherent but actively misleading. When questioned by Congress about the data Facebook collects on its users, and how it uses that data, Zuckerberg parroted talking points about the control users have over other Facebook users’ access to their posts, photographs, and other content. Facebook’s granular visibility controls are actually one of its better features, but they are wholly unrelated to the question that was asked. The governance challenge at hand has nothing to do with privacy from other human users, and everything to do with Facebook’s advertising-based business model.

In 2015, Harvard scholar Shoshana Zuboff published a widely cited article in which she further developed a concept first coined by John Bellamy Foster and Robert W. McChesney: surveillance capitalism. Both pieces merit careful reading, but in short, surveillance capitalism refers to an increasingly data-intensive business model predicated on monitoring the online and offline behavior of the world’s population in order to devise targeted advertising campaigns that influence that behavior. Its defenders argue that serving internet users personalized advertisements helps connect consumers with the products and services most relevant to them, but the same methods that encourage us to purchase one brand of clothing over another are also being used to manipulate public debate and steer electorates toward increasingly overt fascism.

Mark Zuckerberg keeps insisting that Facebook isn’t an advertising company, and maybe he actually believes that. He is reportedly much more interested in user-facing products than in his company’s business operations, claiming that Facebook is a mission-driven company. But Facebook’s vaguely stated mission — “building community” — is an impossible one. Even if Facebook had a working definition of “community” (which it doesn’t), such a project obscures the dizzying diversity of human communities, plural. Facebook desperately wants to portray itself as a neutral platform for all ideas, but its content moderation policies are inevitably grounded in a certain interpretation of U.S. values. And in addition to the platform’s own policies, many users’ experience of Facebook is also subject to national regulation. Indeed, Facebook works with governments around the world to block or remove content that violates local laws. Some restrictions on free expression are arguably compatible with human rights (bans on Nazi-related content in Germany and France, for instance), but many are not.

So what should be done? David Kaye, the U.N. Special Rapporteur on the right to freedom of expression, argues that social media companies should involve local communities (plural) in content moderation in order to be more attuned to local contexts. In some cases, this may require breaking up global behemoths, which raises thorny questions about interoperability. Kaye further stresses that companies operating at a global scale should adhere to international human rights law and hold themselves accountable to independent expert bodies similar to the press councils that govern traditional media in many countries. They should also be far more transparent than they currently are, not only about what governments ask or compel them to do, but also about how they interpret and apply government demands as well as their own terms of service.

But, as Zuboff has pointed out, expecting companies whose existence depends on surveillance capitalism to adequately regulate themselves is not enough. At most, they will rein themselves in just enough to avoid regulation. Silicon Valley’s reaction to the European Union’s General Data Protection Regulation (GDPR) is telling, with Facebook notably going to great lengths to avoid implementing the GDPR worldwide. Austrian activist Max Schrems filed suit against Facebook and Google in May, contending that neither company complies with the regulation’s new informed consent requirements.

The governance challenge lies in regulating surveillance capitalism firms like Facebook and Google while respecting human rights, including freedom of expression online. Privacy-focused measures like the GDPR and stepped-up enforcement of antitrust rules are only two possible ways to do so. Expert convenings like the Milton Wolf Seminar are well suited to brainstorming and developing novel proposals to rein in surveillance capitalism, undermine resurgent fascism, and bolster liberal democracy, human rights, and the rule of law. Nothing less than the future is at stake.