Social Media Platforms Promote Fake News and Spread Unreliable Content
In the aftermath of the Las Vegas shooting, the deadliest mass shooting in US history, questions arose and speculation about the shooter and his motives mounted.
Very quickly, trending news outlets reported that the man behind the mass shooting in Las Vegas was an anti-Trump liberal whom the F.B.I. had linked to the Islamic State, and who had recently converted to Islam.
The New York Times revealed, “They were shocking, gruesome revelations. They were also entirely false — and widely spread by Google and Facebook.”
This was one more instance where ‘fake news’ had penetrated mainstream venues, underscoring how fake news can flourish via social media platforms. In fact, since the 2016 presidential election, awareness of fake news has soared.
Detecting and preventing the spread of unreliable media content is a difficult problem, especially given the rate at which news can spread online. Google and Facebook blamed algorithm errors for these events.
Overwhelming Amounts of Data Challenge Social Media to Take Action on Fake News
The reach and speed of social media networks (Facebook alone has nearly two billion users) make it easy for such stories to spread before they can be debunked. Part of the challenge lies in how Facebook and Google rely on algorithms, especially when it comes to making complex news decisions.
For example, according to a Buzzfeed investigation, during the last three months of the presidential election, the top 20 fake election news stories on Facebook generated more shares, reactions, and comments than the top 20 pieces from major news outlets, such as The New York Times, The Washington Post, and others.
Further examples illustrate the impact of fake news on both mainstream media and the public’s mind share:
- “Google algorithm gamed again into spreading dangerous fake news by online trolls”
- “Was There A Second Shooter In Vegas?”
- Dow Jones posts fake story claiming Google was buying Apple
- Fake news posts blame Puerto Rico’s truck drivers for refusing to ship relief supplies
- Fake news story about an NFL player supposedly burning an American flag in the team’s locker room
- Fake article about Megyn Kelly in Trending Topics
While the algorithms are geared to support the social media giants’ business model for generating traffic and engagement, they’re largely run by engineers who rely on data to choose which content will trend.
Are Machine Learning Algorithms Reliable, or Are More Human Editors the Answer?
While computer programs may be cheaper than real-life human editors, Fortune asserts, “The reality is that Facebook needs to hire humans to edit and review the content it promotes as news—and it needs to hire a lot of them.”
Facebook previously used human editors. Last year, Facebook dismissed those editors and moved to algorithms after Gizmodo reported that the editorial team was routinely suppressing conservative news stories from trending topics.
Buzzfeed explains, “For Trending to become a reliable, global product, it will need to account for the biases, bad actors, and other challenges that are endemic to Facebook and the news media. Put another way, in order to succeed, the Trending algorithm needs to be better than the very platform that spawned it. That’s because fake news is already polluting the platform’s News Feed organically.”
With the sheer volume of data and the speed at which it appears, MIT has suggested that artificial intelligence tools could help. In a recent campus challenge, MIT Lincoln Laboratory staff developed algorithms that use text, images, and HTML metadata to determine the reliability of a news article. “To solve the fake media challenge problem, the team must develop tools that consider all aspects of the data,” said Lin Li, a staff member in the Human Language Technology Group.
AI-powered Analytics Using Anomaly Detection Can Hold Back the Spread of Fake News
The problem lies with the trending algorithms that the social media platforms use: these are machine learning algorithms that have no context, and therefore make these errors. In light of the recent South Park-motivated Alexa mishap, we suggested that systems need to be in place to detect when something out of place happens and to alert the right people.
AI-powered analytics tools would include stance classification to determine whether a headline agrees with the article body, text processing to analyze the author’s writing style, and image forensics to detect Photoshop use. To determine the reliability of an article, algorithms could extract even relatively simple data features, such as image size, readability level, and the ratio of reactions to shares on Facebook.
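To make the idea of “relatively simple data features” concrete, here is a minimal sketch of what such an extraction step could look like. The article fields (`body`, `image_width`, `reactions`, `shares`) and the readability proxy are illustrative assumptions, not a description of any platform’s actual pipeline:

```python
# Hypothetical sketch: pull simple reliability signals from an article record.
# A real system would add stance classification, stylometry, and image forensics.

def extract_features(article):
    body = article["body"]
    # naive sentence split; real code would use a proper tokenizer
    sentences = [s for s in body.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = body.split()
    return {
        # crude readability proxies: longer sentences and words -> harder text
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        # image size, from hypothetical metadata fields
        "image_area": article.get("image_width", 0) * article.get("image_height", 0),
        # fake stories often show unusual engagement mixes
        "reaction_share_ratio": article.get("reactions", 0) / max(article.get("shares", 0), 1),
    }
```

Features like these would then feed a downstream classifier rather than decide reliability on their own.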
Fake news can also be detected by focusing on anomalies. When a social media algorithm starts pushing a trending post or article to the top, AI-powered analytics that track the sudden surge of a new topic, and correlate that data with the source site or Facebook page, would flag it as an obvious anomaly. The item could then be paused from gaining further momentum until a human at Facebook or Google validates it, rather than requiring human review of all topics.
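The surge-detection step described above can be sketched with a simple rolling z-score over a topic’s recent engagement counts. The threshold and field meanings here are illustrative assumptions, not a production configuration:

```python
# Hypothetical sketch: flag a topic whose latest hourly count surges far
# above its recent baseline. Flagged topics would be held for human
# validation instead of being promoted automatically.
import statistics

def is_anomalous_surge(history, latest, threshold=3.0):
    """Return True if `latest` sits more than `threshold` standard
    deviations above the mean of the recent `history` window."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest > mean  # flat baseline: any rise stands out
    return (latest - mean) / stdev > threshold
```

For example, against a baseline of roughly 110–130 shares per hour, a jump to 900 would be flagged, while a drift to 135 would not. A real deployment would also correlate the surge with the source site or page, as described above.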
You can’t prevent anyone from writing fake news, but by applying AI-powered analytics that employ anomaly detection, we can prevent the “simple-AI” algorithms from spreading and promoting fake news stories.
This application of AI-powered analytics can spot anomalies far faster than humans could, even when working with thousands or millions of metrics. Real-time anomaly detection can catch even the most subtle, yet important, deviations in the data.