How Twitter’s Disappearing Fleets Could Be a Disinformation Disaster

Twitter CEO Jack Dorsey testifies remotely during a November Senate hearing. Hannah McKay/Pool/Getty Images


Last week, Twitter unveiled its latest piece of purported Silicon Valley innovation, Fleets—a feature that it copied from Instagram Stories, which the Facebook-owned company originally lifted from Snapchat.

The rollout allows Twitter users in the United States to share pictures and video that automatically delete after 24 hours. Almost immediately, social media researchers pointed out the format’s potential to serve as a vector for spreading disinformation and extremist content. 

Yet the gaps in moderation that Argentino and others pointed out, which involve content Twitter's terms of service already explicitly ban, aren't Fleets' biggest vulnerability. That lies in the ephemeral structure of the format itself.

Unlike Instagram Stories, Fleets aren't designed to be shared beyond an individual account's followers. A user can still screenshot or screen-record a Fleet and repost it, but that extra step adds a friction to sharing that Instagram Stories' one-tap resharing doesn't have. While that friction may slow the spread of bad information, repeated posting could still carry it to enough people to be damaging. It also means the messages will mostly stay inside closed circles, much as they do in private Facebook groups.

Closed, private Facebook groups and Instagram stories have already helped spread dangerous disinformation. In Oregon this past summer, false rumors about Antifa starting wildfires in the state spread quickly, almost certainly coming from within private Facebook groups. The stories inspired vigilantes with assault rifles to mount neighborhood patrols and set up military-style checkpoints. While no one ended up hurt, it’s not hard to imagine how a situation like that could turn deadly.

In April, as I was reporting on QAnon’s growing appeal in alternative health and wellness influencer communities, I noticed that the conspiracy was being trafficked largely through Instagram Stories. Influencers would post lengthy video monologues discussing false claims about how blood was being harvested from children kept in underground tunnels for elite liberal pedophiles. As I flipped through wellness Instagram accounts, I could watch QAnon content gain traction with other influencers, as they reposted Stories pushing Q. While the Stories were available to tens or hundreds of thousands of followers, the posts deleted themselves within 24 hours, making them difficult to document or debunk. And it was impossible to gauge how often they were being shared via private direct messages. 

By spotting and bringing to light false or dangerous posts, journalists have become a de facto free content moderation service for social media platforms. That dynamic is already problematic, but journalists can't perform even that role when the posts in question are made in ephemeral formats. Tweets, for all of their issues, are searchable and remain on the platform by default. Fleets will not.

Peter W. Singer, a senior fellow at the think tank New America and the author of LikeWar, a book on how social media has been weaponized in politics, says he's been worried about the disinformation potential of Fleets since Twitter started testing the feature in Brazil in March. "So much of their system in actuality relies not on their own AI and content moderators, but on fellow users and researchers to flag violators. With Fleets, researchers won't be able to see and track as much," he told me via a Twitter direct message.

These aren't problems that hiring oceans of underpaid, overworked contract moderators will ever solve; there will never be enough moderators to get ahead of offending content. With Fleets, the company has introduced a structural problem: a space where people can post misinformation and extremist content faster than moderators can ever track it, and where it won't be easily visible to concerned users. Twitter will always be at least two steps behind.

