Four years after Facebook decided it needed to do something to fix its fake news problem, we now know those efforts only made things worse.
As the world was still coming to terms with President Donald Trump’s victory in the 2016 election, Facebook rolled out a program that December for independent fact-checkers to flag questionable content as “disputed.” But a new study out of MIT found that people assume that if some articles have warnings, those that don’t must be accurate.
“Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified,” David Rand, one of the authors of the report and a professor at the MIT Sloan School of Management, said in a statement.
Rand and his co-authors call this the "implied truth effect." Given the sheer scale of Facebook's platform, which has 1.25 billion daily users, and the volume of content posted to it, fact-checkers simply can't keep up.
“There’s no way the fact-checkers can keep up with the stream of misinformation, so even if the warnings do really reduce belief in the tagged stories, you still have a problem, because of the implied truth effect,” Rand adds.
Facebook did not respond to questions about the study’s findings.
The MIT team conducted studies with more than 6,000 participants, who were shown a variety of real and fake news headlines as they would look on Facebook.
One half of the group was shown stories without fact-checking tags of any sort. The other half was shown a typical Facebook feed comprising a mixture of marked and unmarked posts.
Participants were then asked whether they believed the headlines were accurate.