"Falsehood flies, and the truth comes limping after it", Jonathan Swift wrote in 1710. Not every falsehood is a lie, though: if a tweet is labeled "false", that doesn't mean the person who wrote it was trying to pull a fast one.
Some of Twitter's rumors are true. Others, of course, are false and far more pernicious, such as conspiracy theories about the Parkland high school shooting in Florida.
The scholars pointed to the Boston Marathon bombing in 2013 as an early example: users turned to Twitter for up-to-the-minute updates, and a rumor of a second bomber spread rapidly across the platform. They wanted to know why, and how.
It turns out ordinary users may be as guilty as the Russian bots in spreading fake news. "The central concept of this paper is veracity", Aral said. The six fact-checking sites agreed on which reports were true about 95 percent of the time, the researchers said.
Vosoughi, Roy and Aral used this framework to map the spread of information on Twitter from its creation in 2006 through last year.
And fact-checking can backfire, they noted.
"Then we looked for footprints of those stories on Twitter, including links to stories we investigated that were embedded in tweets, tweets about those stories without links, and photo memes related to those stories", Vosoughi says.
By nearly all metrics, false cascades outpaced true ones. "Falsehood reached more people at every depth of a cascade than the truth, meaning that many more people retweeted falsehood than they did the truth", the paper said.
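The paper's cascade measures (depth, and how many users are reached at each depth) can be illustrated with a small sketch. This is not the authors' code; the tree representation and function name here are hypothetical, and a retweet cascade is simply modeled as a map from each user to the users who retweeted them.

```python
from collections import defaultdict, deque

def users_reached_by_depth(edges, root):
    """Breadth-first walk of a retweet cascade.

    `edges` maps each user to the list of users who retweeted them;
    `root` is the original poster. Returns {depth: number of users
    first reached at that depth}, with the root at depth 0.
    """
    counts = defaultdict(int)
    queue = deque([(root, 0)])
    seen = {root}
    while queue:
        user, depth = queue.popleft()
        counts[depth] += 1
        for retweeter in edges.get(user, []):
            if retweeter not in seen:
                seen.add(retweeter)
                queue.append((retweeter, depth + 1))
    return dict(counts)

# A toy cascade: A posts; B and C retweet A; D retweets B.
toy = {"A": ["B", "C"], "B": ["D"]}
print(users_reached_by_depth(toy, "A"))  # {0: 1, 1: 2, 2: 1}
```

On this reading, the paper's finding is that false cascades show larger counts than true ones at every depth.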
The study's authors agree with that point. The underlying assumption has been that if platform leaders can curb bots, they can bring false stories more under control. But the difference between how false and true news spread was obvious. One rumor in the data set, rated "False" by Snopes, took just 4.2 hours to reach 200 retweets; Aral declined to identify it, citing the conditions that Twitter imposed when sharing the data set. Applying standard text-analysis tools, the researchers found that false claims were significantly more novel than true ones - perhaps not a surprise, since falsehoods are made up.
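The paper measured novelty by comparing each rumor tweet against tweets the user had recently been exposed to, using topic-model distances; the sketch below is a deliberately simplified stand-in that scores a tweet's novelty as one minus its highest bag-of-words cosine similarity to any earlier tweet. The function names and the toy history are illustrative assumptions, not the study's method.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty(tweet, prior_tweets):
    """1 minus the highest similarity to any earlier tweet (1.0 = all new)."""
    bag = Counter(tweet.lower().split())
    priors = [Counter(t.lower().split()) for t in prior_tweets]
    if not priors:
        return 1.0
    return 1.0 - max(cosine(bag, p) for p in priors)

history = ["the game ended in a draw", "rain expected all weekend"]
print(novelty("rain expected all weekend", history))    # 0.0 - already seen
print(novelty("aliens landed downtown today", history)) # 1.0 - nothing shared
```

Under a scoring like this, a fabricated claim that shares no words with anything a user has seen before would register as maximally novel, which matches the intuition the researchers describe.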
The academics at MIT also found fake news was 70 percent more likely to be retweeted. The study authors hypothesized that falsehoods contain more novelty than truth. Rather than relying on the politically loaded term "fake news", they categorized stories simply as either "true" or "false".
"These findings shed new light on fundamental aspects of our online communication ecosystem", Roy said in the statement, adding what they found left them feeling "somewhere between surprised and stunned".
Their study, published today in the journal Science, is one of the largest long-term investigations of fake news on social media ever conducted.
"We're barely starting to scratch the surface on the scientific evidence about false news, its consequences and its potential solutions", said Sinan Aral, an expert on information diffusion in social networks at MIT and co-author of the study. That insight ultimately led to the current study of false news.
Unfortunately, we can't just blame the robots.
The three MIT scholars released a series of videos discussing their research; the first installment and the rest are available on YouTube. Even when bots were removed from the analysis, the results didn't change: false news still spread at roughly the same rate and to the same number of people.
And the reason we do it, according to the researchers, is to avoid being boring.
He said he was unsure whether bots would have been more prominent if the study had focused exclusively on political news.