“All warfare is based on deception. Hence, when we are able to attack, we must seem unable; when using our forces, we must appear inactive; when we are near, we must make the enemy believe we are far away; when far away, we must make him believe we are near.” – Sun Tzu, The Art of War
In recent years, prominent national security officials and media outlets have raised alarm about the unprecedented effects of foreign disinformation in democratic countries. In practice, what they mean is that democratic governments have fallen behind in their command of the methods of information warfare. As outlined herein, while information warfare is a real and serious issue facing democratic governments in the 21st century, the war on disinformation, as currently practiced, has backfired spectacularly and done far more harm than good, as evidenced most clearly by the response to COVID-19.
We begin with the definitions and history of a few key terms: censorship, free speech, misinformation, disinformation, and bots.
Censorship and Free Speech
Censorship is any deliberate suppression or prohibition of speech, whether for good or ill. In the United States and countries that have adopted its model, censorship imposed by governments and their appendages is constitutionally prohibited except in the narrow category of “illegal speech”—e.g., obscenity, child exploitation, speech abetting criminal conduct, and speech that incites imminent violence.
Because censorship involves the exercise of power to silence another individual, censorship is inherently hierarchical. A person who lacks the power to silence another cannot censor them. For this reason, censorship inherently reinforces existing power structures, whether rightly or wrongly.
Though the United States may have been the first country to enshrine the right to free speech in its constitution, the right developed over centuries and predates the western Enlightenment. For example, the right to speak freely was inherent to the democratic practices of the political classes of ancient Greece and Rome, even if it was not enshrined in words. This is only logical: because these systems treated all members of the political class as equals, no member had the power to censor another except with the consent of the body politic.
The right to free speech developed and receded in fits and starts over the ensuing centuries for a number of reasons; but in accordance with George Orwell’s view of institutional evolution, free speech advanced primarily because it afforded an evolutionary advantage to the societies in which it was practiced. For example, the political equality among medieval English lords in their early parliamentary system necessitated free speech among them; by the 19th century, the cumulative benefits of this evolutionary advantage would help make Britain the world’s preeminent power. The United States arguably went a step further by enshrining free speech in its constitution and extending it to all adults, affording itself a still greater evolutionary advantage.
By contrast, because censorship depends on and reinforces existing power structures, censors tend especially to target those who seek to hold power to account. And, because the advancement of human civilization is essentially one unending struggle to hold power to account, this censorship is inherently incompatible with human progress. Civilizations that engage in widespread censorship therefore tend to stagnate.
Misinformation
Misinformation is any information that is not completely true, regardless of the intent behind it. A flawed scientific study is one form of misinformation. An imperfect recollection of past events is another.
Technically, under the broadest definition of “misinformation,” all human thoughts and statements other than absolute mathematical axioms are misinformation, because all human thoughts and statements are generalizations based on subjective beliefs and experiences, none of which can be considered perfectly true. Moreover, no particular levels or “degrees” of misinformation can be readily defined; the relative truth or falsity of any information exists on a continuum with infinite degrees.
Accordingly, because virtually all human thoughts and statements can be defined as misinformation, a prerogative to identify and censor misinformation is extraordinarily broad, depending entirely on the breadth of the definition of “misinformation” employed by the censor in any given instance. Because no particular “degrees” of misinformation can be defined, an official with a license to censor misinformation could censor virtually any statement at any time and justify their action, correctly, as having censored misinformation. In practice, because no man is an angel, this discretion inherently comes down to the biases, beliefs, loyalties, and self-interests of the censor.
Disinformation
Disinformation is any information shared by a person who knows it to be false. Disinformation is synonymous with lying.
Disinformation goes back centuries and is far from limited to the Internet. For example, according to Virgil, toward the end of the Trojan War, the Greek warrior Sinon presented the Trojans with a wooden horse that the Greeks had supposedly left behind as they fled—without informing the hapless Trojans that the horse was, in fact, filled with the Greeks’ finest warriors. Sinon’s ruse could rightly be considered one of history’s first recorded acts of foreign disinformation.
In a more modern example of disinformation, Adolf Hitler convinced western leaders to cede the Sudetenland by making the false promise, “We want no Czechs.” But just a few months later, Hitler took all of Czechoslovakia without a fight. As it turned out, Hitler did want Czechs, and much more besides.
Technically, disinformation can come just as easily from foreign or domestic sources, though how such disinformation should be treated—from a legal perspective—depends heavily on whether it had a foreign or domestic source. Because the greatest challenge in distinguishing simple misinformation from deliberate disinformation is the intent of the speaker or writer, identifying disinformation presents all the same challenges that people have faced, since time immemorial, in identifying lies.
Is a statement more likely to be a lie, or disinformation, if someone has been paid or otherwise incentivized or coerced to say it? What if they’ve wrongly convinced themselves that the statement is true? Is it enough that they merely should have known the statement is untrue, even if they didn’t have actual knowledge? If so, how far should an ordinary person be expected to go to find out the truth for themselves?
Just like lying, disinformation is generally considered negative. But in certain circumstances, disinformation can be heroic. For example, during the Second World War, some German citizens hid their Jewish friends for years while telling Nazi officials that they did not know of their whereabouts. Because of circumstances like these, the right to lie, except when under oath or in furtherance of a crime, is inherent to the right to free speech—at least for domestic purposes.
Defining “foreign disinformation” further complicates the analysis. Is a statement “foreign disinformation” if a foreign entity invented the lie, but it was shared by a domestic citizen who was paid to repeat it, or who knew it was a lie? What if the lie was invented by a foreign entity, but the domestic citizen who shared it did not know it was a lie? All these factors must be considered in correctly defining foreign and domestic disinformation and separating it from mere misinformation.
Bots
The traditional definition of an online bot is a software application that posts automatically. However, in common usage, “bot” more often describes any anonymous online identity that is secretly incentivized to post according to specific narratives on behalf of an outside interest, such as a regime or organization.
This modern definition of “bot” can be difficult to pin down. For example, platforms like Twitter permit users to have several accounts, and these accounts are allowed to be anonymous. Are all of these anonymous accounts bots? Is an anonymous user a “bot” solely by virtue of the fact that they’re beholden to a regime? What if they’re merely beholden to a corporation or small business? What level of independence separates a “bot” from an ordinary anonymous user? What if they have two accounts? Four accounts?
The most sophisticated regimes, such as China’s, maintain vast social media armies: hundreds of thousands of employees who post to social media daily using VPNs, allowing the regime to conduct sweeping disinformation campaigns involving hundreds of thousands of posts in a very short timespan without ever resorting to automated bots in the traditional sense. Chinese disinformation campaigns are thus impossible to stop algorithmically, and difficult even to identify with absolute certainty. Perhaps for this reason, whistleblowers have reported that social media companies like Twitter have effectively given up on trying to police foreign bots—even while pretending, for public relations purposes, to have the issue under control.
Information Warfare in the Present Day
Owing to the seriousness with which they’ve studied the methods of information warfare, and perhaps to their long mastery of propaganda and linguistics for purposes of exercising domestic control, authoritarian regimes such as China’s appear to have mastered disinformation in the early 21st century to a degree that western national security officials can’t match—similar to how the Nazis mastered the methods of 20th-century disinformation before their democratic rivals did.
The magnitude and effects of these foreign disinformation campaigns in the present day are difficult to measure. On the one hand, some argue that foreign disinformation is so ubiquitous as to be largely responsible for the unprecedented political polarization that we see today. Others approach these claims with skepticism, arguing that the specter of “foreign disinformation” is being used primarily as a pretext to justify western officials’ suppression of free speech in their own countries. Both arguments have merit, and each is true to varying degrees in various instances.
The best evidence that national security officials’ alarm about foreign disinformation is justified is, ironically, an example so egregious that they have yet to acknowledge it happened, seemingly out of embarrassment and fear of the political fallout: the lockdowns of spring 2020. These lockdowns weren’t part of any democratic country’s pandemic plan and had no precedent in the modern western world. They appear to have been instigated by officials with strange connections to China, based solely on China’s false claim that its lockdown had effectively controlled COVID in Wuhan, and assisted in no small part by a vast propaganda campaign across legacy and social media platforms. It’s therefore essentially axiomatic that the lockdowns of spring 2020 were a form of foreign disinformation. The catastrophic harms that resulted from these lockdowns prove just how high the stakes of 21st-century information warfare can be.
That said, western officials’ astonishing failure to acknowledge the catastrophe of lockdowns speaks to their unseriousness about actually winning the 21st-century information war, lending weight to skeptics’ arguments that these officials are merely using foreign disinformation as a pretext to suppress free speech at home.
For example, after the catastrophic lockdowns of spring 2020, national security officials never acknowledged foreign influence on the lockdowns; on the contrary, a small army of them engaged in domestic censorship of well-credentialed citizens who were skeptical of the response to COVID—effectively exacerbating the effects of the lockdown disinformation campaign and, conspicuously, making their own countries even more like China.
The Orwellian pretext for this vast domestic censorship apparatus is that, because there is no way to properly identify or control foreign social media bots, foreign disinformation has become so ubiquitous within western discourse that federal officials can only combat it by surreptitiously censoring citizens for what the officials deem to be “misinformation,” regardless of the citizens’ motivations. These officials have thus deemed well-qualified citizens who oppose the response to COVID-19 to be spreading “misinformation,” a term which can encompass virtually any human thought or statement. Depending on their underlying motivations and loyalties, the actions of these officials in surreptitiously censoring “misinformation” may have even been an intentional part of the lockdown disinformation campaign; if so, this speaks to the multi-level complexity and sophistication of information warfare in the 21st century.
There are signs that some of the primary actors in this vast censorship apparatus were not, in fact, acting in good faith. For example, Vijaya Gadde, who previously oversaw censorship operations at Twitter and worked closely with federal officials to censor legal and factual speech, was paid over $10 million per year in this role. While the dynamics and definitions of misinformation and disinformation are philosophically complex, and Gadde may genuinely not have understood them, it’s also possible that $10 million per year was sufficient to buy her “ignorance.”
These problems are exacerbated by the fact that honest institutional leaders in western countries, typically of an older generation, often don’t fully appreciate or understand the dynamics of information warfare in the present day, seeing it as primarily a “Millennial” problem and delegating the task of monitoring social media disinformation to younger people. This has opened up a promising path for young career opportunists, many of whom have no particular legal or philosophical expertise on the nuances of misinformation, disinformation, and free speech, but who make lucrative careers out of simply telling institutional leaders what they want to hear. As a result, throughout the response to COVID-19, we saw the horrifying effects of disinformation effectively being laundered into our most venerated institutions as policy.
Winning the 21st Century Information War
While the dynamics of information warfare in the early 21st century are complex, the solutions need not be. The idea that online platforms have to be open to users of all countries largely harkens back to a kind of “kumbaya” early-Internet ideal that engagement between peoples of all nations would render their differences irrelevant—similar to late-19th century arguments that the Industrial Revolution had made war a thing of the past. Regardless of how widespread foreign disinformation may actually be, the fact that national security officials have secretly constructed a vast apparatus to censor western citizens for legal speech, supposedly due to the ubiquity of foreign disinformation, lays bare the farcical notion that online engagement would resolve differences between nations.
It’s morally, legally, and intellectually repugnant that federal officials in the United States have constructed a vast apparatus for censoring legal speech, bypassing the First Amendment without informing the public, on the pretext that the activities of foreign regimes, which have been deliberately permitted on our online platforms, have gotten out of control. If foreign disinformation is anywhere near that ubiquitous in our online discourse, then the only solution is to ban access to online platforms from China, Russia, and other hostile countries known to engage in organized disinformation operations.
Because the effects of foreign disinformation can’t be accurately measured, the actual impact of banning access to our online platforms from hostile countries isn’t clear. If disinformation alarmists are correct, then banning access from hostile countries could have a significant ameliorative effect on political discourse in democratic nations. If skeptics are correct, then banning access from hostile countries might not have much effect at all. Regardless, if federal officials really don’t think there’s any way to allow users in hostile countries to access our online platforms without circumventing the United States Constitution, then the choice is clear. Any marginal benefit gained from interactions between western citizens and users in hostile countries is vastly outweighed by the need to uphold the Constitution and the principles of the Enlightenment.
This article was first published on the author’s own site. It has been reprinted with permission.