A Murder Plot, a Twitter Mob, and the Strange Unmasking of a Pro-Kremlin Troll

A recent case shows how social media continues to give great cover for disinformation.



For several days in March, British Prime Minister Theresa May was the focus of an all-out assault on Twitter after she blamed the Kremlin for the poisoning of a former Russian spy and his daughter on British soil. One account in the melee stood out, racking up hundreds of retweets and claiming May was lying about the nerve-agent attack on Sergei and Yulia Skripal:

“The #Skripal Case: It Looks Like Theresa May Has Some Explaining to Do!” declared one of many broadsides from @ian56789, who called the attempted murder a “#falseflag” operation.

To experts who research disinformation, the troll appeared to be working on behalf of Vladimir Putin’s regime, part of a longer-term pro-Kremlin campaign. The British government reported that the “Ian” account—whose avatar featured the chiseled face of British male model David Gandy—sent 100 posts a day during a 12-day period in April, reaching 23 million users. Atlantic Council analyst Ben Nimmo examined tens of thousands of tweets around #Skripal and concluded Ian was likely part of a Kremlin troll operation, based on multiple characteristics seen across Ian’s posts going back six years. The account vigorously backed Russia’s 2014 annexation of Crimea and pushed Moscow’s spin on chemical weapons attacks in Syria and the downing of Malaysia Airlines Flight MH17 over Ukraine. The most important clue, according to Nimmo, was Ian’s extensive posting about the assassination of Boris Nemtsov in the 24 hours after the Russian opposition leader was murdered in Moscow on February 27, 2015. Ian let loose those tweets—including the suggestion that the CIA was involved—just as a social media campaign about Nemtsov was launched by the Internet Research Agency, the infamous Kremlin troll farm in St. Petersburg that targeted the 2016 US elections.
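
To make the kind of open-source analysis Nimmo describes more concrete, here is a minimal Python sketch of two such signals: an account’s average posting rate, and a burst of activity in the 24 hours after a known event. Everything here is an illustrative assumption; the thresholds, sample timestamps, and flagging logic are invented for demonstration, not drawn from Nimmo’s actual method or any real account.

```python
# Illustrative troll-hunting heuristics. The thresholds and sample timestamps
# are invented for demonstration; they are not Nimmo's method or real data.
from datetime import datetime, timedelta

def daily_rate(timestamps):
    """Average posts per day across the span of an account's tweets."""
    span_days = (max(timestamps) - min(timestamps)).days or 1
    return len(timestamps) / span_days

def burst_after(timestamps, event, window_hours=24):
    """Count posts that land inside a window following a known event."""
    end = event + timedelta(hours=window_hours)
    return sum(1 for t in timestamps if event <= t <= end)

# Hypothetical account: one tweet every 15 minutes, starting just after the event.
nemtsov_murder = datetime(2015, 2, 27, 21, 30)
tweets = [datetime(2015, 2, 27, 22, 0) + timedelta(minutes=15 * i) for i in range(96)]

rate = daily_rate(tweets)
burst = burst_after(tweets, nemtsov_murder)
# Each signal is weak on its own; they become suggestive only when they
# co-occur with content cues such as consistent pro-Kremlin messaging.
if rate > 90 or burst > 50:
    print(f"flag for review: {rate:.0f} posts/day, {burst} posts within 24 hours")
```

Real investigations, as Nimmo and Watts both stress in this story, layer many such signals over years of observation before drawing conclusions.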

But it turned out the Ian account was not necessarily what it seemed. In April, British media reports, citing UK government sources, misidentified Ian as a Russian “bot,” and the account was temporarily suspended by Twitter. Then, a retired British IT project manager named Ian Shilling came forward as its owner, defiantly stating he had no connection to the Russian government.

The case underscores how daunting it remains to track the sources of disinformation on social media, especially on platforms like Twitter that allow account owners to be anonymous. Outside researchers can’t see an account’s IP address, phone number, or other indicators of its origins; such data is visible only to the tech companies, or to law enforcement agencies that request it with a subpoena or court order. Even Facebook, with its “real-name policy,” has been under siege from fake accounts spreading malicious content: the company recently disclosed that it purged more than 580 million fake accounts in the first three months of 2018.

The case of the Ian account “is an important reminder that even when all of the signs are there, it’s impossible to be 100 percent certain” about the identity of trolls, Nimmo told Mother Jones. (For his part, Nimmo never labeled Ian a “bot,” which refers to an automated account rather than a human-controlled one.) Yet Nimmo believes it is critical for researchers to continue trying to expose trolls—especially with the next US election looming. “The more you can understand the behavior and identify the key actors, the more you can put together the bigger picture and keep an eye out for what’s coming next,” he says. “Are there early warnings that there’s another troll factory effort in the works?”

Twitter revealed this year that thousands of Kremlin-linked accounts operated on its platform in 2016, and intelligence experts suspect many others remain active. As the tech companies face increasing pressure to address proliferating disinformation, they are also being pushed to protect users’ information from being mined and misused by political data firms like the now-defunct Cambridge Analytica, which harvested private information from more than 50 million Facebook profiles.

Kremlin-backed trolls and other propagandists are simply exploiting social media networks as they are designed and currently run, says Joshua Geltzer, a national security expert and constitutional law professor at Georgetown University. “It’s not like they hacked Twitter—they’re using Twitter,” he says. The platform is a dream for spreading disinformation: “The ability to share ideas very quickly, impassion people across national borders, with anonymity and while manufacturing momentum—these are integral features to how Twitter works.”

But given the stakes now, Geltzer argues, tech companies need to experiment with bolder tactics to limit manipulative content. Because malicious users are taking advantage of the platforms’ core features, “it suggests that mere technical tweaks aren’t enough to address the problem,” Geltzer says. “You need the companies to make value judgments. And they try to shy away from doing that.”

Anonymity as a force for good and evil

Twitter, which declined to comment on the record for this story about its policies, has also faced sharp criticism over neo-Nazis, misogynists, and other harassers, and has defended users’ right to tweet anonymously. Many users leverage anonymity for nobler purposes: Pro-democracy activists in Arab countries and LGBTQ people who aren’t out in their communities take cover this way, as do government whistleblowers. Last year, Twitter sued the Department of Homeland Security to block the unmasking of the owner of @ALT_USCIS, an account critical of Trump administration policies. The move preserved the user’s ability “to speak freely and without the fear of negative consequences that may flow from being identified as the source of controversial views,” the company wrote. DHS dropped its demand for the account holder’s identity.

Nimmo is sympathetic to the social-media companies caught between supporting free speech and privacy and harboring trolls. “On any platform that values free speech, there’s going to be abuse,” he says. “There’s a necessity to have a high level of tolerance.” But that tolerance shouldn’t protect a mob of anonymous accounts spewing vitriol and disinformation, Nimmo adds. “Some of this looks like orchestrated hate-mobbing. That’s desperately bad for the platforms, for the users, for all involved.”

Even though Shilling has denied any links to the Russian government, Nimmo says the Ian account has a clear history as “a systematic pro-Kremlin troll.” The account also has a long trail of promoting political conspiracy theories about 9/11, the Iraq War, last year’s white supremacist violence in Charlottesville, Virginia, and attacks on “traitor to America” John McCain, who is also a big target of Kremlin trolls.

After outing himself, Shilling gave on-camera interviews to Sky News and the BBC to show he wasn’t a “bot.” When asked by the BBC why his tweets often match Kremlin propaganda from state-media outlets like RT and Sputnik, Shilling said, “If I’m telling the truth and the Russian government also decides to tell the truth about what’s going on, we can agree.” He emphasized: “I am not controlled by anybody.”

Clint Watts, a former FBI special agent whose new book, Messing With the Enemy, dissects Russian information warfare on social media, says the account could be an unwitting participant in a Kremlin influence network. (Watts says he hasn’t studied the Ian case specifically.) One Kremlin tactic, Watts says, is to identify sympathetic “fellow travelers” and use its networks to amplify those voices to a target audience. “It’s harder to detect because you can rent bots, or you can put them in networks of trolls who amplify them, and they don’t know that they’re trolls,” he says.

“Some people are paid for. Some are coerced. Some are influenced. Some agree. Some don’t know what they’re doing,” Watts explains. “Where they fall on that spectrum may not matter ultimately.” What matters most, he says, is the message they’re carrying and whether its reach is growing.

Twitter’s choice to value user anonymity over authenticity frustrates Watts, who told Mother Jones he’s disappointed the company paused its system of verifying accounts: “This would clear up a lot of the problem if everybody goes to verified accounts and ignores all the anonymous ones,” he says.

Disinformation campaigns on social media are designed to create confusion and erode trust in democratic institutions, and Nimmo believes it’s critical to root out the Kremlin-linked trolls. “We know a lot about the main Kremlin operation from 2014 to 2017, and the operators are still out there,” he says. “What have they moved on to? The incentive for Russia to keep doing this is greater than ever.”

“They swung the election to a Trump win”

It is hard to know what impact Kremlin cyber operations ultimately had on the 2016 election, but former Director of National Intelligence James Clapper doesn’t hold back in his new book, Facts and Fears: Hard Truths from a Life in Intelligence. “Of course the Russian efforts affected the outcome,” he writes. “Surprising even themselves, they swung the election to a Trump win. To conclude otherwise stretches logic, common sense, and credulity to the breaking point. Less than eighty thousand votes in three key states swung the election. I have no doubt that more votes than that were influenced by this massive effort by the Russians.”

Last October, after seeing evidence of political ads bought by Kremlin accounts posing as Americans, Democratic US Sens. Mark Warner and Amy Klobuchar, along with the GOP’s McCain, proposed the Honest Ads Act, which would require social-media companies to disclose the sources and funding behind political advertising. Twitter and Facebook have said they support the bill. Warner says that’s encouraging, but that it was a mistake for the companies to blow off the issue for months. “For the most part, their responses have been a day late and a dollar short,” Warner told Mother Jones. “The days of feeling like it’s the Wild West in online political advertising are over,” he added. “Whether it’s allowing Russians to purchase political ads, or extensive micro-targeting based on ill-gotten user data, we now know that, if left unregulated, these platforms will continue to be prone to deception.”

But the bill is stuck in committee, and Warner acknowledged lawmakers are still playing catch-up: “We need these companies to step up and work with us to get this right.”

Separately, Twitter and Facebook have announced efforts to make advertising sources more transparent on their platforms, and the companies say the policies will extend beyond political ads. Twitter CEO Jack Dorsey has also acknowledged the company’s responsibility to address abuse on the network. In May, the company revealed one step it has taken against trolling: tracking users’ behavior to identify and downrank tweets that “distort and detract” from the conversation. Downgraded tweets remain online but are harder to find; users can see them by choosing to “Show more replies” or adjusting their search settings. Twitter says early results show an 8 percent drop in some abuse reports since the change. The company also plans to fund academic research and software projects measuring Twitter’s “conversational health.”
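
Twitter has not published the model behind that ranking change, but the basic mechanics can be sketched in a few lines of Python. The signals and weights below are invented for illustration; the one detail faithful to Twitter’s description is that low-scoring replies are folded away rather than deleted.

```python
# A toy model of behavior-based downranking. The signals and weights are
# invented for illustration; Twitter has not disclosed its actual model.
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    account_age_days: int   # brand-new accounts are a weak negative signal
    blocks_received: int    # how often other users block the author
    reports_received: int   # abuse reports filed against the author

def health_score(reply):
    score = 1.0
    score -= 0.05 * reply.blocks_received
    score -= 0.10 * reply.reports_received
    if reply.account_age_days < 30:
        score -= 0.2
    return score

replies = [
    Reply("Thoughtful disagreement", 900, 0, 0),
    Reply("Copy-pasted conspiracy link", 12, 14, 6),
]
shown = [r.text for r in replies if health_score(r) >= 0.5]
folded = [r.text for r in replies if health_score(r) < 0.5]
print("shown:", shown)
print('behind "Show more replies":', folded)
```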

A civic mission or censorship?

Geltzer, who served as senior director for counterterrorism at the National Security Council from 2015 to 2017, says the tech companies are making some progress in the fight against disinformation, but not enough. For example, Twitter typically takes an after-the-fact approach to moderating content, he notes. Current policy allows users to flag posted content for review; if the company finds the content violates its terms of service, it may be taken down. But in the meantime, it is publicly available and can spread far and wide.

Geltzer suggests experimenting with automated screening that flags content containing terms associated with known Kremlin (or other) troll accounts. Flagged content could then be vetted by Twitter employees to make sure it didn’t violate the terms of service before it was posted. “You’re not talking about suppressing speech, just delaying certain speech,” Geltzer says. The company’s move to downrank trolling tweets is a good step, he adds, but “I think it’s fair game to nudge them to do more.”
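
A toy version of Geltzer’s proposal might look like the following Python sketch, in which a draft post matching terms drawn from known troll activity is held in a review queue instead of publishing instantly. The term list and queue are hypothetical stand-ins; what the sketch tries to capture is his distinction between delaying speech and suppressing it.

```python
# A hypothetical pre-publication screen in the spirit of Geltzer's suggestion.
# The term list is a stand-in; a real system would derive its terms from the
# observed behavior of known troll accounts.
from collections import deque

TROLL_ASSOCIATED_TERMS = {"#falseflag", "crisis actor"}

review_queue = deque()  # posts held for human vetting before publication

def submit(post):
    """Publish immediately unless the post matches a troll-associated term."""
    text = post.lower()
    if any(term in text for term in TROLL_ASSOCIATED_TERMS):
        review_queue.append(post)  # delayed for a human check, not removed
        return "queued for review"
    return "published"

print(submit("The #Skripal case: a #FalseFlag operation!"))      # queued for review
print(submit("Here is the foreign ministry's full statement."))  # published
```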

Twitter does have a precedent for systematically identifying and removing content that violates its terms of service—with tweets promoting terrorism. The company said it suspended 274,460 accounts in the last six months of 2017 for terrorist content, and reported that 93 percent of those accounts were identified using its own internal proprietary tools.

Other content prohibited by the Twitter Rules includes hate speech, posts promoting violence, harassing tweets, automated spam, and threats to reveal personal information (known as “doxing”). But disinformation and state-sponsored propaganda aren’t on the list.

Although none of that speech is illegal, First Amendment scholars contend there is no right to free speech on social media, because the platforms are owned by private companies and the Constitution limits only government restrictions on speech. For its part, Twitter encourages open debate, telling users the company supports “speech that presents facts to correct misstatements or misperceptions, points out hypocrisy or contradictions, warns of offline or online consequences, denounces hateful or dangerous speech, or helps change minds and disarm.” Facebook has tried more aggressive tactics against hate speech—and faced criticism last year after a ProPublica investigation revealed its rules favored elites and governments over racial minorities and activists. Facebook also recently removed its “Trending” section after years of complaints that it was suppressing conservative news sources. This spring, Facebook revealed the internal guidelines its teams around the world use to evaluate whether posts violate its rules against hate speech and disinformation.

Geltzer urges tech companies to make algorithms more transparent so outside experts can better observe how malicious actors use the platforms. Further crowdsourcing the disinformation problem, he says, would lead to results more valuable for Twitter and Facebook than keeping their proprietary “secret sauce” to themselves. “Even if it costs you a little, it’s worth it,” he says. “That’s what the big boys do—they sometimes give a little to be better corporate actors.”

Geltzer also would like to see the companies collaborate on a civic campaign teaching users how to vet information before sharing it. “All of us own the responsibility to not contribute to the spread,” he says.

Jay Stanley, a policy analyst for the American Civil Liberties Union, acknowledges that “free expression has always come with nasty side effects,” but is wary of hardwired solutions like creating screening systems for content. “We have to be very cautious when you build an infrastructure for censorship—especially when a platform is so central to our political discourse,” he says. Instead, Stanley urges the companies to give users more filters to control what they see. “The more the companies can stay out of the role of censor, the better,” he says. “But the pressure is enormous.” 
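
The user-side approach Stanley favors can be sketched just as simply: the filtering lives in the reader’s own settings, and the platform removes nothing. The function below is a simplified, hypothetical stand-in for features like muted-word lists, not any platform’s real implementation.

```python
# A user-controlled filter: posts are hidden by the reader's own mute list,
# not deleted by the platform. Purely illustrative.
def apply_user_filters(timeline, muted_terms):
    return [post for post in timeline
            if not any(term in post.lower() for term in muted_terms)]

timeline = ["Election results roundup", "#falseflag thread, must read"]
print(apply_user_filters(timeline, {"#falseflag"}))
# -> ['Election results roundup']
```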

Cybersecurity expert Renee DiResta says companies and outside researchers should see working together as mutually beneficial. “Platforms have a lot of signals outsiders can’t see,” she says, and researchers spot signals the companies miss. Meanwhile, “malicious actors are always evolving. If researchers or companies look only for accounts that fit one formula, they are fighting the last war.”

“A small group of people who scream very loudly”

The pro-Kremlin network Nimmo observes on Twitter appears to come from all over the world. Even though many, like Ian, may not be directly part of a Kremlin-controlled troll farm, Nimmo suspects some coordinate their messaging. “It’s a small group of people who scream very loudly,” he says. “They’re quite aggressive and shrill.”

Shilling acknowledged to the BBC, “I do talk to other people privately, and there’s a couple of other people I talk to. They might do a tweet and they ask me to retweet it.” But he insisted, “All my tweets are me and me alone.”

It takes an enormous investment of time watching troll networks to confidently identify covert Kremlin-backed accounts, says Watts, the former FBI agent. “You have to see signatures that are foul-ups, and you have to see them run failed operations that don’t take off and get overtaken by organic media,” he says. “That’s how we gained confidence, and it took us three years. If they hadn’t come trolling at me, we might have never even noticed it.”

Watts says he’s more concerned at this point with the force multipliers than with the identities of individual operators working directly for the Kremlin. “You let the FBI figure out if it’s an agent,” he says. “I’m more worried about the American audiences who are catching onto [the message] and then showing up in Charlottesville chanting, ‘Russia is our friend.’
