What the U.S. Should Have Learned From the 2016 Election

Illustration: Jim Cooke/Gizmodo

Years after the first revelations of Russian interference in the 2016 election came to light, the tiresome odes to “Soviet Russian tradecraft” ought to leave us wondering what’s actually changed.

At this point, it is well documented that there were multiple Russian-led campaigns to sow disinformation around the 2016 election. All took advantage of social media, at least in some capacity, and contributed to a climate of uncertainty and anxiety in the years to come. Still, even though some in Silicon Valley have taken steps to prevent a recurrence of the mess that was 2016, these same platforms have remained a haven for new conspiratorial communities that are far more domestic in origin. Few learned from the experience. Instead, we’ve continued to outsource the management of our societal disarray to platforms that were, as Zeynep Tufekci noted in 2018, designed to amplify sensational content.

The news cycle around Russia’s involvement in the 2016 election spurred a wave of obnoxious social media personalities who put Glenn Beck’s infamous chalkboard scribblings to shame. One claimed that Gizmodo was a Russian front. Another leveraged his unreadable, hundred-plus-tweet threads to transform himself from a mere assistant professor of English and abhorrent poet into a mainstream political analyst and author of three books on Trump. Others spent many a night pondering whether the “pee tape” was real and, in turn, the nature of reality itself.

Russia did not “hack” the election. For all the revelations about activity surrounding the already circus-like 2016 race, none has established a proven impact on the outcome. Still, in popular culture, it marked a breaking point for social media’s role in American life. Russia’s “influence campaign,” as a January 2017 report from the Director of National Intelligence dubbed it, blended “covert intelligence operations” (e.g., intelligence gathering and/or meetings with the Sopranos-style parade of Trump minions) and “overt efforts by Russian government agencies, state-funded media, third-party intermediaries, and paid social media users or ‘trolls’.” More importantly, it encouraged Americans to accelerate their own crisis, though it’s not like we needed much encouragement.

There were plenty of lessons to be drawn from the 2016 election cycle—not just for lawmakers and the social media companies whose platforms opened themselves up to manipulation, but also for the media and the average information consumer or social media user. Foremost among them was the need for preparedness. Social media companies, as numerous researchers have argued, were caught entirely off guard and are, in some respects, still catching up. Even as platforms built up policies around “bots” and other forms of inauthentic activity, they have continued to lag behind on content moderation. The fact that these same companies struggle to sustain a backbone when it comes to groups like QAnon—a far-right conspiracy movement whose material was only recently banned from Facebook, Twitter, and YouTube despite having entrenched itself for years on those platforms—reveals that, when it comes to managing America’s epistemic crisis, there is a long way to go.

By most accounts, Russian electoral interference in 2016 constituted a number of different hacking and social media disinformation operations, spanning numerous platforms. While the Internet Research Agency—a so-called “troll farm” operating out of St. Petersburg with ties to the Kremlin—became the face of the operation, in reality the campaigns were carried out by a variety of state actors and groups affiliated with the Russian government. Some remain unknown.

In a joint report published in late 2018, researchers from the University of Oxford and the data analytics firm Graphika noted that accounts associated with the IRA began targeting a U.S. audience as early as 2013 on Twitter. As the report notes, the IRA’s U.S.-focused activity continued at a “low level” at first, before ramping up “dramatically at the end of 2014” and roping in a number of different platforms, including Facebook, Instagram, and YouTube, as well as less prominent platforms like Tumblr. Leaked IRA material illustrated how the organization identified certain fault lines within American society.

Some of the material was goofy. One of the IRA ads presented to the House Intelligence Committee in 2017 featured an image of a colorful and muscular Bernie Sanders in a speedo, alongside text promoting a coloring book called “Buff Bernie: A Coloring Book for Berniacs.” Another post, from a page called “Army of Jesus,” included an image of a jacked, glowing Satan arm wrestling Jesus Christ.

But Nina Jankowicz, author of How to Lose the Information War, told Gizmodo in an interview that these oddities were only part of the package.

“If you look at what they did, they really built trust in communities over time. That’s why they shared positive content at the beginning,” she said, referring to IRA accounts’ tendency to share seemingly innocuous memes in thousands of Facebook groups.

The IRA’s activities online took place concurrently with a Russian military intelligence-led hack into the digital infrastructure of Hillary Clinton’s campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee. According to the 2019 Mueller report, the GRU used a spearphishing campaign to target the work and personal emails of Clinton campaign employees and volunteers in mid-March 2016. By April, the GRU had gained access to DCCC and, later, DNC networks and begun extracting material. In late May and early June, officers used their access to the DNC’s mail server to steal thousands of emails and documents.

These emails were, per the Mueller report, disseminated initially through two “fronts”: a persona named “Guccifer 2.0” and a website called DCLeaks. Unlike the IRA, as a 2019 report from the Stanford Internet Observatory noted, the GRU’s success relied largely on networking and “direct outreach.” Both fronts were in contact with WikiLeaks, as well as Trump affiliates such as Roger Stone and Gen. Michael Flynn. Alexander Nix, the former head of the creepy data-analytics company Cambridge Analytica, said in a 2018 email that he, too, had approached WikiLeaks about the stolen Clinton emails.

Despite social media’s outsized role in spreading disinformation related to the election, some of the most prominent platforms—which had served as a home not only for Russian-linked “fake” accounts but also for hard-right and racist disinformation—were caught off guard.

“In all seriousness, I cannot overstate how unprepared Silicon Valley was in the face of this threat in 2016 and how much progress has been done, and quickly, since then,” Camille François, the chief innovation officer at Graphika, told Gizmodo in an interview.

Throughout 2017 and 2018, Facebook and Twitter fumbled to get a grip on the widespread proliferation of disinformation on their platforms. Facebook published its first report touching on Russian information operations in spring of 2017. Twitter followed, releasing a list on January 31, 2018, of the 3,841 IRA-linked accounts that it had identified and alerting users who interacted with them. Of these accounts, around 120 had over 10,000 followers. Several, such as @Ten_GOP—which posed as an “unofficial” account for the Tennessee GOP—were boosted by prominent members of the Trump campaign, including Donald Trump, Jr.

Others were less forthcoming. Google’s October 2017 report amounted to a few pages of data summarizing its findings, saying the company had found fewer than 20 IRA accounts on YouTube in particular. However, subsequent research has identified YouTube as the second-most-linked-to site in IRA tweets, with most of the links pointing to explicitly conservative content.

It’s odd enough that a subsection of the social media-using population was duped by a cadre of poorly paid 20-somethings in St. Petersburg watching “House of Cards.” But some took the fruits of these efforts and turned them into a mess that was thoroughly American—and even more difficult to control.


On December 4, 2016, Edgar Welch walked into Comet Ping Pong, a pizzeria in northwest D.C. Armed with a loaded AR-15 assault rifle and a .38-caliber revolver, he began working his way through the restaurant. He fired a handful of shots as he maneuvered toward a basement labyrinth of child torture chambers that didn’t exist.

Welch had driven to Washington, D.C., from his home in North Carolina, after consuming hours upon hours of content on YouTube and other sites claiming that Comet was home to a pedophile sex trafficking ring—a narrative that lay at the heart of a conspiracy theory called Pizzagate. Even though Welch told officers that he had come to “investigate” Comet Ping Pong to determine if these allegations were true, he appeared to be well aware that his actions could result in violence, even death. In a text sent to a friend on December 2, 2016, Welch justified his actions as “[r]aiding a pedo ring, possibly sacraficing [sic] the lives of a few for the lives of many.”

Pizzagate wasn’t birthed from the mess of Russian disinformation per se. However, the self-proclaimed internet sleuths masquerading as Pizzagate “researchers” used Clinton campaign chairman John Podesta’s emails, which had been snagged by the GRU, as a resource. But QAnon, a successor to Pizzagate that has taken root in parts of the Republican Party, showed that the lessons of Russia’s online disinformation operations cannot be separated from those of similar domestic campaigns and conspiratorial thinking. At the very least, it makes Mark Zuckerberg’s post-election comment that “fake news” couldn’t influence voting patterns look rather daft.

François proposes seeing disinformation as a composite. In a 2019 paper, she suggests viewing “viral deception campaigns” through the lens of three “vectors,” dubbed the ABCs, where “A” stands for “manipulative actors” (e.g., trolls), “B” for “deceptive behaviors,” and “C” for “harmful content.” In addition to providing guidance for regulators, presenting these efforts as multifaceted encourages a better understanding of how disinformation peddlers operate across platforms.

Both Aric Toler, a researcher at Bellingcat, and Jankowicz stressed that in the rush to concoct policies after the fallout of 2016, social media companies focused on behavior, not content.

“The spotlight of 2016 all went to disinfo campaigns via bots, astroturfed pages/sites...which is relatively easy to stop algorithmically or through visible takedown efforts,” Toler told Gizmodo in an email. As for the GRU’s hack-and-dump efforts, he noted “there are no social media guidelines...to really stop that.”

Recent bans on coronavirus disinformation and QAnon communities on Twitter, YouTube, and Facebook do show a growing willingness to regulate content. (Whether they do so well is a different question.) But, as Jankowicz noted in her book, How to Lose the Information War, companies are locked in a game of “Whack-a-Troll.”

“Like the carnival game of Whack-a-Mole, Whack-a-Troll is all but unwinnable; neither tech platforms nor governments nor journalists can fact-check their way out of the crisis of truth and trust Western democracy currently faces,” she observed.

There’s no real solution to our political hell. But there are, as Yochai Benkler, Robert Faris, and Hal Roberts wrote in their 2018 book, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics, a few takeaways from the 2016 election that we can use to manage future crises. Companies, lawmakers, users, and the media need to be careful in assessing the actual danger posed by foreign disinformation campaigns. In the same vein, the authors encouraged people to refrain from overstating the impact of disinformation operations; after all, there is still no evidence that any Russian action changed the election’s outcome. The IRA itself has seized upon American lawmakers’ post-2016 alarmism: one IRA campaign in 2018, for instance, appeared to poke fun at the portrayal of Russian trolls as master manipulators by claiming to run a network of accounts that didn’t exist.

The authors also pointed to a “competitive dynamic” among right-wing media outlets, where sites would compete for traffic by using increasingly incendiary rhetoric. This dynamic, the researchers argued, put right-wing sites at greater risk of manipulation. It also extends far beyond Russian disinformation. As the same researchers noted in an October 2020 study on rhetoric around mail-in ballots, the conspiracies being pushed in right-wing circles about voter fraud were tied to an “elite-driven, mass media communicated information disorder.” Social media companies fact-checking Trump, for instance, would do little; right-wing media provided enough of an echo chamber to render such efforts fruitless.

Still, it’s worth wondering if we’d all be better off had Facebook stuck to its original mission: a place to discover “whether Frank puked on his frat brother last night.”
