Moscow Thanks You for Sharing Its Cute Cat Pics

Want to defeat fake news and online propaganda? Pay attention.

Illustration: Javier Jaén

I learned of Alec Baldwin’s death on the subway, deep beneath Manhattan. I’d started reading a New York Times editorial about the proliferation of fake news in the age of Donald Trump. Scrolling down on my phone, I saw what looked like a related Times headline: “Baldwin: Gone at 58.”

Of course, Baldwin and his Trump impersonation were very much alive. The fake news of his death had appeared, via third-party advertising code, within the very piece damning this phenomenon. It felt ironic, but I knew the problem was algorithmic: Keywords in the article had sent signals to an ad server, which performed its functions exactly as planned. In the digital world, attention is currency. That false but enticing Baldwin headline drew lots of clicks, and so it proliferated.
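To see just how mechanical this matchmaking is, consider a toy sketch of keyword-driven ad selection. Every name, headline, and data structure here is invented for illustration; no real ad server's interface is shown. The point is that the code measures only relevance, never truth.

```python
# Hypothetical sketch: extract keywords from an article, then serve the
# candidate ad whose own keywords overlap most. Nothing in this logic
# checks whether the ad's claim is true.

def extract_keywords(text, vocabulary):
    """Return the vocabulary terms that appear in the article text."""
    words = set(text.lower().split())
    return words & vocabulary

def select_ad(article_text, ads, vocabulary):
    """Pick the ad with the largest keyword overlap -- relevance only."""
    signals = extract_keywords(article_text, vocabulary)
    return max(ads, key=lambda ad: len(signals & ad["keywords"]))

vocabulary = {"baldwin", "trump", "fake", "news"}
ads = [
    {"headline": "Baldwin: Gone at 58", "keywords": {"baldwin", "trump"}},
    {"headline": "Cheap flights to Miami", "keywords": {"flights"}},
]
article = "A New York Times editorial on fake news and the Baldwin impersonation"
print(select_ad(article, ads, vocabulary)["headline"])
```

An article about fake news and Baldwin triggers the false Baldwin ad, exactly as planned.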


In other corners of the internet, bogus claims were circulating about the Earth cooling and a Russian flag hanging in the Texas Statehouse. Both had elements of truth: The climate story originated from the Daily Mail, which butchered data from government science agencies to support its preferred headline. The Russian flag was real, but it had been placed in the Statehouse by protesters, a fact misrepresented by the people posting about it on Twitter and Facebook.

We’ve created a fine and tangled mess for ourselves. Spreading falsehoods for personal gain and amusement is nothing new, of course. Humans have been doing that since our ancestors grunted at each other in caves. But the democratization of the internet back in the 1990s meant everyone would get to participate regardless of their agendas: political activists, foreign propagandists, hackers, spin doctors, publicity hounds, and news outlets desperate for ad dollars. We didn’t plan ahead as the internet matured. That’s what makes the proliferation of fake news so acute right now, and why there is no easy way to stop the threat it poses to our nation.

These days it is code, not human arbiters of facts, that dictates what most of us read and watch online. If it seems like you see nothing but Trump headlines, that’s because an algorithm decided you were likely to click, read, and share those stories—and then, right on cue, you did. Algorithms aren’t partisan. They’re designed to execute commands. They don’t care whether an unhinged guy with an assault rifle shoots up a pizza parlor in his quest to “self-investigate” bogus stories claiming the place was Hillary Clinton’s secret child-trafficking headquarters.

Police arrest Edgar Maddison Welch after the December 2016 “Pizzagate” incident. AP/Sathi Soma

It would be easy to blame the recent explosion of fake news on the alt-right—or Macedonian teenagers. But you’re an accomplice, too. For example, you may remember some funny photos of cats wearing helmets that were going around, accompanied by an online piece about the pets of Viking warriors. That story, which I saw shared on Facebook by too many friends to count, was published by RT, a.k.a. Russia Today, a large, Kremlin-funded propaganda operation. On the back end, your innocent click-and-share of silly cat pics is rendered as a bait-and-switch command to the algorithms, ensuring that you will be served up more RT content on Facebook and elsewhere—stories that often rely on fringe experts and dubious evidence.

One RT piece from December slapped together quotes from former National Security Agency technical director and whistleblower William Binney to make the case that the CIA had failed to prove Russia hacked Democratic National Committee servers prior to the election. It bore the standard disclaimer: “The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.” The piece wasn’t fake news, strictly speaking. Binney is real, and so were his comments—but they were framed to support a position clearly held by the Russian government.

This makes our problem even more vexing. RT publishes stories that elevate fringe voices left and right—Michael T. Flynn, Trump’s national security adviser, has been an RT commentator—with the apparent goal of sowing distrust in the American government. It’s classic propaganda that proliferates with 21st-century speed as we click and repost without considering the story’s source and its agenda.


Should we happen to share that Binney story on Twitter, we compound the problem by enlisting the help of bots and fake accounts posing as real people. Vast networks of Twitter accounts are programmed to monitor keywords and links, and they’ll automatically promote a link or amplify a hashtag when triggered. This helps a piece go viral, or at the very least be seen by hundreds of thousands of people until a verified account—say that of Flynn or Trump—retweets it with commentary. Then, in what’s called a “handoff,” the fake accounts delete their original posts. Voilà! The story looks even more credible because it appears to have originated from a verified account.
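The handoff can be simulated in a few lines. This is an illustrative toy model with made-up account names and pseudodata, not Twitter's actual API, but it captures the trick: the bots do the amplifying, then erase themselves once a verified account takes over.

```python
# Toy simulation of the "handoff": bot accounts amplify a link, a
# verified account retweets it, and the bots then delete their posts so
# the story appears to originate with the verified account.

timeline = []

def bot_push(link, n_bots):
    """Hypothetical bot network flooding a timeline with one link."""
    for i in range(n_bots):
        timeline.append({"account": f"bot_{i}", "verified": False, "link": link})

def verified_retweet(link, account):
    timeline.append({"account": account, "verified": True, "link": link})
    # Handoff: every unverified post of this link is deleted.
    timeline[:] = [p for p in timeline if p["verified"] or p["link"] != link]

bot_push("rt.example/binney-story", n_bots=500)
verified_retweet("rt.example/binney-story", "big_verified_account")
print(len(timeline), timeline[0]["account"])
```

Five hundred bot posts vanish, and the only trace left is the verified retweet.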

Facebook and Twitter algorithms prioritize posts with high “engagement”—popular ones—and links that their customization code predicts you will click on. All over the web, your past digital behavior results in targeted ads, some of which resemble news stories. Content recommendation companies like Outbrain and Taboola place sponsored links on publishers’ websites for a fee but are only marginally effective in policing fake news and propaganda. On the contrary: All these companies make money off clicks, and they’ve got mountains of data proving we’ll choose provocative headlines over serious ones. Facebook recently introduced a third-party verification system for questionable articles, and Google has started labeling some stories with a “Fact Check” tag. Yet neither measure is sophisticated enough to solve the problem.
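Stripped to its essentials, engagement-first ranking is a one-line sort. The headlines, click rates, and share counts below are invented for illustration; real feed-ranking systems are far more elaborate, but the incentive they encode is the same.

```python
# A toy version of engagement-based feed ranking: score each post by
# predicted click probability times past shares, with no term for
# accuracy anywhere in the formula.

def rank_feed(posts):
    return sorted(posts,
                  key=lambda p: p["predicted_click_rate"] * p["shares"],
                  reverse=True)

posts = [
    {"headline": "Senate passes appropriations bill",
     "predicted_click_rate": 0.02, "shares": 300},
    {"headline": "You won't BELIEVE this celebrity scandal",
     "predicted_click_rate": 0.15, "shares": 9000},
]
print(rank_feed(posts)[0]["headline"])
```

The provocative headline wins the top slot every time, because that is all the objective function rewards.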

At present, artificial intelligence is being used to determine your general interests so that suitable content can be directed your way. But what if I told you that within a decade you will be targeted with content created using details from your personal life and crafted specifically for you? That’s where we’re headed: AI and machine learning algorithms will analyze your online habits, personal data (calendar, relationships, work history, places traveled), and communication preferences (say, that you like strong narratives over short informational pieces). They will then merge your information with current news, package it in the way most likely to grab your attention, and deliver it in a format optimized for you.
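A speculative sketch of that pipeline: profile the reader, match the profile against the day's stories, and pick the format the profile predicts will grab them. Everything here—field names, topics, format labels—is hypothetical; no real system's interface is shown.

```python
# Hypothetical personalization pipeline: filter today's stories by a
# reader's interests, then package each match in the reader's
# preferred format.

profile = {
    "interests": {"climate", "russia"},
    "prefers": "strong_narrative",  # vs. "short_informational"
}

stories = [
    {"topic": "climate",
     "formats": {"strong_narrative": "longread",
                 "short_informational": "bullet_brief"}},
    {"topic": "sports",
     "formats": {"strong_narrative": "longread",
                 "short_informational": "bullet_brief"}},
]

def personalize(profile, stories):
    matched = [s for s in stories if s["topic"] in profile["interests"]]
    return [(s["topic"], s["formats"][profile["prefers"]]) for s in matched]

print(personalize(profile, stories))
```

The reader who likes strong narratives about climate gets a climate longread—and never sees the sports brief at all.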

This means our opinions will be reflected right back at us, making it ever more difficult to confront contrary beliefs and ideologies. Extreme viewpoints will feel like the norm, not the outliers they actually are. And by the time we finally accept that our democratic internet is fatally hobbled by clever code, Big Data, and our own fallible instincts, it may be too late: Our appetite for nonpoliticized, explanatory news will have eroded, leaving a clear opening for newslike stories created by foreign governments, hucksters, jaded politicians—anyone who stands to benefit from a divided and distracted America.


There’s still time to fix this. Facebook and Google could be way more aggressive than they have been in ferreting out false content and demoting websites that promote fake news. Twitter’s troll problem could be tackled using variables that analyze tweet language, hashtag timing, and the origin of links. It’s also high time for news organizations and other content distributors to join forces and build an international, nonpartisan verification body for credible sources. Just as the United Nations imposes sanctions on members who violate the common good, this body could withhold its seal of approval from the disseminators of misinformation or systematic propaganda. Stories from verified sources would carry more weight in searches, in social-media feeds, and on third-party ad servers—and would therefore be viewed more widely.
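Those three variables—tweet language, hashtag timing, and link origin—can be combined into a simple bot-likelihood score. The thresholds, weights, and field names below are invented for illustration, a sketch of the approach rather than any platform's actual detector.

```python
# Hedged sketch of a troll/bot heuristic built from the signals named
# above. Weights sum to 100; cutoffs are made up for illustration.

def bot_score(account):
    score = 0
    # Near-identical wording across many tweets suggests automation.
    if len(set(account["tweets"])) < len(account["tweets"]) * 0.5:
        score += 40
    # Hashtags fired within seconds of a trigger suggest scripting.
    if account["median_seconds_to_hashtag"] < 5:
        score += 30
    # Links dominated by a handful of flagged domains.
    if account["share_of_links_from_flagged_domains"] > 0.8:
        score += 30
    return score

suspect = {
    "tweets": ["MAGA! #news"] * 4 + ["breaking #news"],
    "median_seconds_to_hashtag": 2,
    "share_of_links_from_flagged_domains": 0.95,
}
print(bot_score(suspect))
```

An account that trips all three signals scores 100 on this made-up scale; a real system would weigh many more features, but the logic is no deeper than this.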

This is a controversial idea. Some people object that it would impede the free flow of ideas or exclude independent voices. And tech companies will argue that they’re not in the business of censoring their users. But look around: Machines already decide what gets published, where, when, and to whom. Because our attention is increasingly sucked up by fake news and salacious headlines, we’re excluding credible independent voices now. And while each country has its own journalistic standards, no reputable news outlet on the planet would snuff out Alec Baldwin.

While we wait for Silicon Valley and the media conglomerates to get their acts together, there’s something we can do ourselves. It’s strikingly obvious. Simply slow down, read, and actually think before passing a story along. Because if there’s one thing all of us share, it’s responsibility for this fiasco. The best way to break the scurrilous cycle of fake news, oddly enough, may be to withdraw from it.


