Forget Counterterrorism, the United States Needs a Counter-Disinformation Strategy

Brian Raymond

If the U.S. government wants to win the information wars, Cold War-era tactics won’t cut it anymore.

A “misinformation newsstand” aiming to educate voters about disinformation ahead of the 2018 U.S. midterm elections, seen in Manhattan on Oct. 30, 2018. (Angela Weiss/AFP via Getty Images)

On Oct. 14, Facebook and Twitter removed a dubious New York Post story from their platforms—provoking heated debate in the internet’s various echo chambers. The article in question purportedly revealed influence peddling by Democratic presidential nominee Joe Biden’s son Hunter Biden, and the social media giants suspected that the uncorroborated claims were based on hacked or fabricated correspondence. Coming just weeks before the U.S. presidential election, Silicon Valley’s swift and decisive response to suspected disinformation stood in stark contrast to its handling of hacked emails from Hillary Clinton’s presidential campaign four years ago.

A week prior, on Oct. 7, the U.S. Justice Department announced that it had seized nearly 100 websites linked to Iran’s Islamic Revolutionary Guard Corps (IRGC). These sites had been engaged in a global disinformation campaign, targeting audiences from the United States to Southeast Asia with pro-Iranian propaganda. But it wasn’t just the government engaged in countering adversaries online: One day later, Facebook and Twitter reported that they had taken down more than a dozen disinformation networks used by political and state-backed groups in Iran, Russia, Cuba, Saudi Arabia, and Thailand.

In the grand scheme of things, the events of Oct. 7 and 14 were hardly noteworthy. In recent years, private and public actors alike have had to ramp up their efforts against botnets, troll farms, and artificial intelligence systems that seek to manipulate the online information environment and advance certain strategic objectives. Such operations came under unprecedented scrutiny in the aftermath of the 2016 U.S. presidential election.

But while cyberspace may be a new front in the fight against disinformation, disinformation itself—and the societal discord it can sow—has been a national security concern for decades; the Cold War was largely waged by propagating competing versions of the truth. And just as the threat of “fake news” is nothing new, neither is the way policymakers deal with it—or try to.

Therein lies the real problem. In countering disinformation emanating from the Kremlin, the Chinese Communist Party (CCP), and the IRGC, among others, the United States continues to rely on the same dated playbook that led to success against Soviet propaganda operations, known as “active measures,” in the 1980s. But this anti-disinformation strategy, like much else developed in the 1980s, has been rendered largely obsolete by an evolving media landscape and emerging technology.

Now, if the United States is going to have any hope of getting back on its front foot—and put a stop to adversaries’ attempts to sow confusion and cynicism domestically—it’s going to have to seriously reconceive its old playbook. But that can’t be done without Big Tech companies, which are the linchpin in the fight against disinformation.

Granted, some state-citizen reconciliation is needed to mend the fraught ties of the post-Snowden era. In 2013, the whistleblower Edward Snowden leaked documents exposing widespread cooperation between U.S. technology companies and the National Security Agency, triggering a backlash from technology companies and the public, who lamented the lack of personal privacy protections on the internet.

Since then, the chasm between Silicon Valley and the U.S. national security community has only widened—but there are signs that the tide may be shifting: Companies like Facebook, Twitter, and Google are increasingly working with U.S. defense agencies to educate future software engineers, cybersecurity experts, and scientists. Eventually, once public-private trust is restored, the U.S. government and Silicon Valley can forge a united front to take on fake news effectively.

Disinformation crept onto the national security radar just as Ronald Reagan assumed the presidency in early 1981. After the CIA was publicly disgraced during the Church Committee hearings—which exposed the CIA’s controversial (and in some cases illegal) intelligence gathering and covert action against foreign leaders and U.S. citizens alike—Reagan recruited William Casey to revamp the agency. On moving into his seventh-floor office at Langley, Casey, known to be a hawk, was dismayed to learn that the CIA was collecting almost no information on Soviet active measures—and doing even less to counter them.

Casey reorganized key offices within the CIA’s Directorate of Intelligence to focus on better understanding Soviet active measures and instructed the Directorate of Operations to ramp up its collection of classified intelligence on Soviet propaganda. By mid-1981, the scale of the Soviets’ efforts became clear. In an August 1981 speech on Soviet disinformation campaigns against NATO, Reagan revealed that the Soviet Union had spent around $100 million to sow confusion in Western Europe after NATO developed the neutron warhead in 1979.

Of Moscow’s latest efforts, Reagan said he didn’t “know how much they’re spending now, but they’re starting the same kind of propaganda drive,” which included funding front groups, manipulating media, engaging in forgery, and buying agents of influence. In 1983, for example, Patriot, a pro-Soviet Indian newspaper, published a story claiming that the U.S. military had created HIV and released it as a biological weapon. Over the next four years, the story was republished dozens of times and rebroadcast in over 80 countries and 30 languages.

By 1982, the CIA estimated that Moscow was spending $3 billion to $4 billion annually on global propaganda efforts. The Soviet Politburo and Secretariat of the Communist Party, which directed the active measures, made no major distinction between covert action and diplomacy; to the Kremlin, disinformation was a tool to advance the strategic goals of the Soviet Union in its competition with the West.

With the nation fixated on Soviet propaganda, senior leaders from across the Reagan administration came together to form what came to be called the Active Measures Working Group. Led by the State Department—and including representatives from the CIA, FBI, Defense Intelligence Agency, and Defense and Justice departments—the national security bureaucracy quickly went on the offensive. Through the end of the Cold War, the group was effective not only in raising global awareness of Soviet propaganda efforts but also in undermining their efficacy. In fact, U.S. anti-disinformation campaigns were so successful that Soviet leader Mikhail Gorbachev in 1987 instructed the KGB to scale back its propaganda operations.

Clearly, those days are long gone. In stark contrast to the triumphs of the 1980s, the United States since the turn of the century has largely failed to counter disinformation campaigns by geostrategic competitors like Russia, China, and Iran.

The opening salvo of a new, digitized phase of state-level competition for influence occurred in 2014, when Russia seized Crimea from Ukraine. As he moved troops to the strategic Black Sea outpost, Russian President Vladimir Putin publicly claimed that the forces occupying Crimea could not possibly be Russian special forces—lying outright to the global community. In the years since, the Kremlin’s disinformation campaigns have increased in volume, velocity, and variety. Today, state-level actors such as Russia, China, Cuba, Saudi Arabia, and North Korea employ armies of trolls and bots to flood the internet with false, misleading, or conspiratorial content aimed at undermining Western democracy.

If Washington is still fighting the same enemy, then what went wrong?

The United States’ counter-disinformation playbook has been predicated on two unspoken assumptions, neither of which is valid today: first, that shining light on lies and disinformation through official government communications is an effective tactic; and second, that Washington can keep up with the speed and scale of disinformation campaigns. In fact, debunking efforts by government officials do little to discredit propaganda, and the volume of threats vastly exceeds the U.S. government’s ability to identify and counter them. Both assumptions rest on U.S. credibility—and technological prowess—that can no longer be taken for granted.

Broadly speaking, three factors have changed the disinformation game since the 1980s—and rendered the assumptions that formed the bedrock of the United States’ campaign against Soviet active measures obsolete. First, the global media environment has become far more complex. Whereas in the 1980s most citizens consumed their news from a handful of print and broadcast news outlets, today, world events are covered instantaneously by a tapestry of outlets—including social media, cable news, and traditional news channels and publications.

Second, U.S. adversaries have relied on bots to amplify fringe content and employed trolls to generate fake content to advance their strategic objectives. Finally, rising political polarization has accelerated consumers’ drive toward partisan echo chambers while increasing their suspicion of government leaders and expert voices. Against such a backdrop, the Active Measures Working Group—a relic of simpler times—can no longer be successful.

Indeed, in the early days of the coronavirus pandemic, U.S. efforts to stem Chinese disinformation about COVID-19 backfired; Beijing’s disinformation campaigns accelerated between March and May. By June, Twitter reported that it had removed 23,750 accounts created by the Chinese government to criticize protests in Hong Kong and to extol the CCP’s response to COVID-19.

To complicate matters further, the one anti-disinformation campaign where the United States has been successful in recent years is hardly a generalizable case. The U.S.-led Operation Gallant Phoenix, fighting the Islamic State, was able to steadily erode the group’s legitimacy by undermining its propaganda machine. From a multinational headquarters in Jordan, the coalition flooded the internet with anti-Islamic State content and hobbled the group’s ability to broadcast its message globally.

But a campaign against the Islamic State is far from a viable blueprint for countering Russian, Chinese, and Iranian disinformation campaigns. There is broad consensus across the international community—private sector tech firms included—that the Islamic State must be defeated. No such political harmony exists, for example, on how, or whether, to forcefully counter Chinese-led disinformation efforts related to COVID-19.

It’s clear that the United States is losing the information wars, in part due to a lack of innovation among the key stakeholders in the executive branch.

But not all is lost. The next administration can make the United States a viable competitor in the global information wars by developing a comprehensive counter-disinformation strategy built on three pillars.

Before any decisive counter-disinformation strategy can be formulated, key constituencies will need to come to some sort of consensus about data ethics. A commission staffed by leaders from the executive branch and media organizations must first draft a set of first principles for how data should be treated in an open and fair society; philosophical rifts like those between Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg over the role of speech need to be overcome. Any effective campaign in pursuit of the truth requires a set of guiding principles to inform what types of speech should be permitted in digital town squares and when speech should be fact-checked—or, in extreme cases, removed entirely.

Once first principles are established, the White House can erect a policy framework to guide defensive actions and appropriate resources to counter foreign disinformation campaigns. In the spirit of the Active Measures Working Group, an effective counter-disinformation strategy will require a whole-of-government approach, likely anchored by the State Department and supported by the Pentagon, the intelligence community, and other key stakeholders.

Finally, though the U.S. government can and should do much more to counter disinformation campaigns, it should be clear-eyed about the fact that its ability to shape the information environment has eroded since the 1980s. A comprehensive counter-disinformation strategy would be smart to recognize the limits of government action given the speed and scale with which information moves across social media today.

Thus, it’s important to nest government-led counter-disinformation activities within a broader set of actions driven by the private sector. Playing the role of coordinator, the United States should encourage the creation of a fact-checking clearinghouse among social media platforms to rapidly counter suspected disinformation. Indeed, Facebook and Twitter have already begun adding fact-check labels to potentially false or misleading posts—to the ire of Donald Trump. This should be encouraged and expanded to operate at the speed and scale with which content is generated and disseminated across social media.
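To make the clearinghouse idea concrete, here is a minimal sketch in Python of one way participating platforms could share fact-check labels keyed to a content fingerprint. The class names, record fields, and hashing choice (a plain SHA-256 of normalized text) are illustrative assumptions for this article, not an existing standard or any platform’s actual API; a production system would need perceptual hashing for images and video, authentication, and a process for handling disputed verdicts.

```python
"""Minimal sketch of a cross-platform fact-check clearinghouse.

Hypothetical illustration only: the record fields, hashing scheme, and
in-memory store are assumptions, not an existing API or standard.
"""
from __future__ import annotations

import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class FactCheckRecord:
    """A single fact-check label contributed by a participating platform."""
    content_hash: str   # fingerprint of the flagged text or media
    verdict: str        # e.g. "false", "misleading", "unverified"
    source: str         # platform or fact-checking organization
    evidence_url: str   # link to the underlying fact-check article
    checked_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class Clearinghouse:
    """Shared registry that platforms can query before content spreads."""

    def __init__(self) -> None:
        self._records: dict[str, list[FactCheckRecord]] = {}

    @staticmethod
    def fingerprint(content: str) -> str:
        # Real systems would use perceptual hashing for images and video;
        # a SHA-256 of normalized text keeps this sketch simple.
        return hashlib.sha256(content.strip().lower().encode("utf-8")).hexdigest()

    def submit(self, content: str, verdict: str, source: str, evidence_url: str) -> None:
        """Record a fact-check label so other platforms can reuse it."""
        key = self.fingerprint(content)
        self._records.setdefault(key, []).append(
            FactCheckRecord(key, verdict, source, evidence_url)
        )

    def lookup(self, content: str) -> list[FactCheckRecord]:
        """Return any labels already attached to this content by any platform."""
        return self._records.get(self.fingerprint(content), [])


if __name__ == "__main__":
    hub = Clearinghouse()
    hub.submit(
        content="Candidate X secretly funded by a foreign government",
        verdict="false",
        source="ExamplePlatform",
        evidence_url="https://example.org/fact-check/123",
    )
    # A second platform checks the same claim before it trends.
    for label in hub.lookup("Candidate X secretly funded by a foreign government"):
        print(label.verdict, label.source, label.evidence_url)
```

The point of the sketch is the shape of the exchange: a shared fingerprint, a verdict, and a link to evidence that any participating platform could apply at the moment a piece of content begins to spread, rather than after it has gone viral.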

The government could also use innovative investment pathways such as the Defense Innovation Unit or the Joint Artificial Intelligence Center to incubate new AI technologies that media platforms could use to spot deepfakes—fabricated videos, images, and synthetic text—at work. Deepfakes are rapidly becoming an inexpensive, fast, and effective means by which actors can wage irregular warfare against their adversaries.

Regardless of the precise form it takes, the future incarnation of the Active Measures Working Group should seek out Silicon Valley leaders not only to help co-lead the initiative but also to staff other key posts across the executive branch. In the end, the pathway to U.S. preeminence requires mobilizing the country’s unique assets: its ability to innovate, to marshal resources at scale, and to come together in times of distress—as it did after 9/11. Only a response marked by bipartisanship within government—as well as strong partnerships with actors outside of it—can give the United States the reality check it desperately needs.

Brian Raymond is a vice president at Primer.ai. Previously, he served on the U.S. National Security Council and with the CIA.