Disinformation and Democracy
08-06-2020
Roger Berkowitz
Hannah Arendt argues that the distinction between truth and lie can be eroded, over time, by "continual lying." When political leaders, institutions, the press, and respected figures habitually and continually state alternative facts, their lies—even if they are neither intended to be believed nor actually believed—attack the very foundations of what Arendt calls the common world.
"The result of a consistent and total substitution of lies for factual truth is not that the lies will now be accepted as truth, and the truth be defamed as lies, but that the sense by which we take our bearings in the real world—and the category of truth vs. falsehood is among the mental means to this end—is being destroyed."
For Arendt, "consistent lying, metaphorically speaking, pulls the ground from under our feet and provides no other ground on which to stand." The result of consistent lying is that we come to experience "a trembling wobbling motion of everything we rely on for our sense of direction and reality."
The lesson Arendt draws is that politics has vanquished facts. Arendt worries that the loss of facts and factual truths deprives the shared world of the pillars on which it stands; facts "constitute the very texture of the political realm." The loss of facts, therefore, is the loss of our shared political reality.
In this context, Jonathan Freedland reviews a series of books that seek to understand the defactualized world in which disinformation, like a viral pandemic, corrupts our political institutions.
The most vivid example remains the intervention by Russian intelligence in the US presidential election of 2016, in which 126 million Americans saw Facebook material generated and paid for by the Kremlin. But the phenomenon goes far wider. According to Philip N. Howard, professor of Internet studies at Oxford, no fewer than seventy governments have at their disposal dedicated social media misinformation teams, committed to the task of spreading lies or concealing truth. Sometimes these involve human beings, churning out tweets and posts aimed at a mainly domestic audience: China employs some two million people to write 448 million messages a year, while Vietnam has trained 10,000 students to pump out a pro-government line. Sometimes, it is automated accounts—bots—that are corralled into service. The previous Mexican president had 75,000 such accounts providing online applause for him and his policies (a tactic described by Thomas Rid in Active Measures as “the online equivalent of the laugh track in a studio-taped TV show”). In Russia itself, almost half of all conversation on Twitter is conducted by bots. Young activists for Britain’s Labour Party devised a bot that could talk leftist politics with strangers on Tinder.
Still, Howard writes in Lie Machines that the place where disinformation has spread widest and deepest is the US. He and his team at Oxford studied dozens of countries and concluded that the US had the “highest level of junk news circulation,” to the point that “during the presidential election of 2016 in the United States, there was a one-to-one ratio of junk news to professional news shared by voters over Twitter.”
Freedland shows that political disinformation is not new; it continues a nearly century-long tradition of disinformation warfare. And yet, Freedland suggests, the rise of big data and social media means that disinformation today is both quantitatively and qualitatively different, and more disruptive, than at any other time. He writes:
And yet it would not be right to conclude that today’s disinformation efforts are simply a high-tech version of those of the past. The differences are more substantial than that. Today’s active measures are simultaneously more personal and much broader in reach than before. While KGB operatives in the 1950s might have placed a forged pamphlet or bogus magazine in front of a few thousand readers, their heirs can now microtarget millions of individuals at once, each one receiving bespoke messaging, designed to press their most intimately neuralgic spots. Those engaged in what Howard calls “computational propaganda” don’t merely mine the attitudes you’ve expressed on social media; they can also draw conclusions from your behavior, as recorded by your credit card data. What’s more, think of all the data gathered by the connected objects around you—the Internet of things—monitoring your sleep, your meals, your habits, your every move. This reveals more about you than your browsers ever could, says Howard, adding, arrestingly, that we’ve been “focusing on the wrong internet.”
It’s this blend of “massive distribution, combined with sophisticated targeting” that is new. The work is so much easier too, requiring little of the fine, almost artistic skill demanded of the master forgers and tricksters of yore. In the earlier era, only governments, through their intelligence agencies, had the money and muscle to attempt such work. Now the cost of production is low, and so is the bar to entry.
What’s more, technological advances promise to make disinformation easier still and more effective. It’s already possible to create fake audio and video; it can’t be long before fake fact-checking sites follow. Chatbots are in their infancy, but they are growing more sophisticated. The future may see not only your Twitter feed dotted with AI bots but even your WhatsApp messages filled with “digital personalities” engineered to look and sound like people you know.