A number of studies have defined fake news as everything from satire to outright lying. We’re going to focus on premeditated lies, because this is the most disturbing area.
The issue is twofold: first, who can you trust? And second, the influence that social media algorithms exert over huge audiences is becoming divisive.
Twenty years ago, most people wouldn’t have been too alarmed by a dodgy tabloid headline like ‘Aliens land on the moon’. But today, how do you know that what appears on your social media feed isn’t state-sponsored misinformation?
Isn’t trust the most important thing in any relationship?
We know it’s naive to think that governments and big businesses don’t spread ‘biased’ information to shape the outcomes of everything from election results to mergers and acquisitions. But ultimately, don’t we all want to believe in institutions that want to leave the world a better place? Yeah, I know it’s naive!
What follows are some facts about ‘Fake News’ from 2018 that might make for a very interesting 2019.
Unfortunately, fake news is shared more
An 11-year study by MIT found that ‘Fake News’ was 70% more likely to be retweeted than true stories. True stories also took six times longer than fake news to reach 1,500 people. True stories were rarely shared by more than 1,000 people, while the most popular fake news items could reach up to 100,000 shares.
Major institutions are trusted less today than 20 years ago
For almost two decades, trust has been declining across all major institutions, according to the Edelman Trust Barometer. Trust in social media platforms has also been suffering recently, with 40% of respondents saying they have deleted at least one social media platform in the last year.
A whopping 60% of people don’t trust social media platforms to behave responsibly with their data.
In the US, less than a third of people (30%) trust social media, whilst well over half (58%) trust traditional media.
Do social media sites need to take responsibility?
In June 2018, Facebook announced that it would expand its fact-checking programme, including removing fake accounts and partnering with independent fact-checkers. This includes fact-checking photos and videos, which are the most difficult to identify as fake. Imagine checking the 350 million images uploaded every day by Facebook’s two billion monthly users. One piece of good news: according to a study by Stanford and New York University, Facebook user interactions with known fake news sites have declined by 50 per cent since the 2016 election.
Do big brands need to be part of the solution?
Big brands want the user data that social media platforms provide, but they don’t want to be associated with fake news (it’s not great for business). So if the big advertisers review their social media strategies, the platforms may find it more attractive to self-regulate. In February 2018, Unilever CMO Keith Weed urged the digital media industry to clean up its act, saying that Unilever won’t invest in social media platforms that create division in society.
So what can be done?
Governments and big brands can shape the behaviour they want to see. If social media companies won’t regulate themselves, then governments need to. Big brands could simply cut spending, and then you’d see change. If all else fails, it’s down to us… our advice is: don’t feed the animals.