Most people who share and spread false news and misinformation on social media, for example about politics or health, do not do it on purpose but simply out of carelessness, haste, and instinctive reaction, without thinking much about what they are doing.
This is the conclusion of a new study by researchers at the Massachusetts Institute of Technology (MIT), which proposes as a “remedy” encouraging users not to rush to yet another “like” but to be more careful about what they pass along.
The researchers, led by Professor David Rand, whose study was published in the journal “Nature”, conducted a series of experiments with thousands of social media users, who were asked to read and share a range of news stories, half of which were true and half false.
The initial finding is that, of the people who shared and spread the misinformation, about 50% did so mainly out of inattention and the hurried way most people use social media. 33% (one in three) did so because they mistakenly thought the news was accurate, while the remaining 16% did so knowing it was false. It is precisely this last minority of users, often highly active, that is the most difficult to deal with.
Other scientists have suggested that online misinformation is fueled either by the desire of many people to promote their personal, political or other ideological views at all costs, or simply by the wish to draw attention to themselves and thus feel more important. The new research highlights another factor: users’ lack of adequate attention and their failure to stop and think about exactly what they are relaying.
To improve the situation, at least in part, users should stop automatically sharing a “news story” simply because they were impressed by its headline or because it seemed in line with their political and other beliefs, without the slightest effort to consider whether it is true. An experiment the researchers conducted with 5,379 Twitter users showed that when users were sent a message asking them to evaluate the accuracy of a news headline, their willingness to share misinformation decreased.
In 2020, “shares” and “likes” of false or misleading news are estimated to have doubled, to 17% of the total, resulting in a visible increase in polarization, extremism (often violent), racism, sexism, vaccine refusal, and so on. Attempts by Facebook, Twitter and other social media platforms to curb the phenomenon, often with the help of artificial intelligence and special algorithms, are so far considered insufficient.
To date, users have no clear incentive to pursue credibility; most care only about increasing their own “likes” (which can sometimes bring them revenue). Research has shown that fake news generates more “shares”, “likes” and “retweets” than real news and posts, with the result that misinformation spreads six to 20 times faster than reliable information on the internet. One of MIT’s experiments showed that 40% of users are willing to promote fake news aligned with their political ideology, even though only 20% (incorrectly) believe it to be accurate.
So far, users are rewarded when their posts are “liked” by other users, even when those posts contain inaccuracies. The big challenge is a new system that would reward accuracy and reliability, something no social media platform has done yet.
One possibility would be to add a “trust” button, showing how many “trusts” (and not just “likes”) a post receives. While this would not solve so-called confirmation bias (the tendency to trust more whatever confirms one’s already established views), it would be a step forward. An alternative would be automatic “smart” evaluation of posts by artificial intelligence systems.