Social Media Platforms Promote Fake News and Spread Unreliable Content

Picture this: You’re at work and you’ve been given an assignment by your boss to research a possible new product. So you go out and do some googling, you find several blog posts, including a very intriguing one with several quotes from industry leaders. You go fetch yourself a cup of coffee and settle in to read.

There’s one very big problem with this post, however: It’s completely fake.

According to a recent post in the Wall Street Journal, “[r]eal-sounding but made-up news articles have become much easier to produce thanks to a handful of new tools powered by artificial intelligence.”

This is one more way "fake news" can penetrate mainstream venues, underscoring how easily unreliable content flourishes online. Since the 2016 presidential election, public awareness of fake news has soared.

Detecting and preventing the spread of unreliable media content is a difficult problem, especially given the rate at which news can spread online. When fake stories have trended in the past, Google and Facebook have blamed algorithm errors.

Overwhelming Amounts of Data Challenge Social Media to Take Action on Fake News

The reach and speed of social media networks (Facebook alone has nearly two billion users) make it easy for such stories to spread before they can be debunked. Part of the challenge lies in how Facebook and Google rely on algorithms, especially when it comes to making complex news decisions.

Already in the 2020 presidential campaign, we’ve seen disinformation spread, including manufactured sex scandals against former Mayor Pete Buttigieg of South Bend, Ind., and Sen. Elizabeth Warren (D-Mass.), and a smear campaign claiming Sen. Kamala Harris is “not an American black” because of her mixed-race heritage.

Examples like these illustrate the impact of fake news on both mainstream media and the public's mind share.

While the algorithms are geared to support the social media giants’ business model for generating traffic and engagement, they’re largely run by engineers who rely on data to choose which content will trend.

Are Machine Learning Algorithms Reliable, or Are More Human Editors the Answer?

While computer programs may be cheaper than real-life human editors, Fortune asserts, “The reality is that Facebook needs to hire humans to edit and review the content it promotes as news—and it needs to hire a lot of them.”

Facebook was using human editors, but then in 2016 the company fired them after it was reported that they routinely suppressed conservative news stories from trending topics. Now, however, Facebook has brought back human editors to curate certain news content.

Appeasing all audiences won’t be easy, though. As New York magazine explains, “the algorithms are biased, and if Facebook hires editors and moderators to double-check decisions made by algorithms, those editors will be denounced as biased too.” 

With the sheer volume of data and the speed at which it appears, MIT has suggested that artificial intelligence tools could help. But artificial intelligence alone isn't the answer, writes Samuel Woolley, who argues that the future will involve "some combination of human labor and AI that eventually succeeds in combating computational propaganda, but how this will happen is simply not clear. AI-enhanced fact-checking is only one route forward."

AI-powered Analytics Using Anomaly Detection Can Hold Back the Spread of Fake News

The problem lies in the trending algorithms the social media platforms use: they are machine learning algorithms with no sense of context, so they make these errors. In light of the recent South Park-triggered Alexa mishap, we suggested that systems should be in place to detect when something out of the ordinary happens and alert the right people.

AI-powered analytics tools could include stance classification to determine whether a headline agrees with the article body, text processing to analyze the author's writing style, and image forensics to detect Photoshop use. To gauge an article's reliability, algorithms could extract even relatively simple data features, such as image size, readability level, and the ratio of reactions to shares on Facebook.
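To make the "simple features" idea concrete, here is a minimal sketch of extracting such signals from an article. The field names (`body`, `reactions`, `shares`, `image_width`, `image_height`) and the readability proxy are illustrative assumptions, not a real platform's schema:

```python
import re

def extract_features(article):
    """Compute simple reliability signals from a hypothetical article dict."""
    body = article["body"]
    words = re.findall(r"[A-Za-z']+", body)
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    # Crude readability proxy: average words per sentence.
    avg_sentence_len = len(words) / max(len(sentences), 1)
    # Engagement pattern: ratio of reactions to shares.
    reaction_share_ratio = article["reactions"] / max(article["shares"], 1)
    # Image size in pixels (tiny images can hint at low-effort re-posts).
    image_pixels = article["image_width"] * article["image_height"]
    return {
        "avg_sentence_len": avg_sentence_len,
        "reaction_share_ratio": reaction_share_ratio,
        "image_pixels": image_pixels,
    }

sample = {
    "body": "Industry leaders agree. The product is real. Trust us!",
    "reactions": 900,
    "shares": 30,
    "image_width": 320,
    "image_height": 200,
}
print(extract_features(sample))
```

Each feature is cheap to compute at scale; a downstream classifier, not shown here, would weigh them together.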

Fake news can also be detected by focusing on anomalies. When a social media algorithm starts pushing a post or article toward the top, AI-powered analytics that track the sudden surge of a new topic, and correlate that data with the source site or Facebook page, would surface it as an obvious anomaly. The item could then be paused from gaining further momentum until a human at Facebook or Google validates it, rather than requiring human review of every topic.

You can’t prevent anyone from writing fake news, but by applying AI-powered analytics that employ anomaly detection, we can stop the “simple-AI” trending algorithms from promoting and spreading fake news stories.

This application of AI-powered analytics can spot anomalies far faster than humans could, even when working with thousands or millions of metrics. Real-time anomaly detection can catch even the most subtle, yet important, deviations in the data.
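Tracking millions of metrics in real time rules out storing full history per metric. One common approach, shown here as a hedged sketch rather than any vendor's production design, is an online detector that keeps only an exponentially weighted mean and variance per metric:

```python
class StreamingDetector:
    """Constant-memory online anomaly detector, one running
    (mean, variance) pair per metric (illustrative sketch)."""

    def __init__(self, alpha=0.1, threshold=4.0):
        self.alpha = alpha          # smoothing factor
        self.threshold = threshold  # deviations-from-mean cutoff
        self.state = {}             # metric name -> (mean, variance)

    def update(self, metric, value):
        """Ingest one data point; return True if it looks anomalous."""
        if metric not in self.state:
            self.state[metric] = (value, 0.0)
            return False
        mean, var = self.state[metric]
        deviation = value - mean
        anomalous = var > 0 and abs(deviation) > self.threshold * var ** 0.5
        # Update the running estimates so the model adapts over time.
        mean += self.alpha * deviation
        var = (1 - self.alpha) * (var + self.alpha * deviation ** 2)
        self.state[metric] = (mean, var)
        return anomalous

detector = StreamingDetector()
for v in [10, 11, 10, 12, 11, 10, 11]:
    detector.update("shares_per_minute", v)
print(detector.update("shares_per_minute", 500))  # True: sharp spike
```

Because each metric costs only two floats of state, the same loop scales to millions of time series on modest hardware.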



Written by Ira Cohen

Ira Cohen is a co-founder of Anodot and its chief data scientist, and developed the company's patented real-time multivariate anomaly detection algorithms, which oversee millions of time series signals. He holds a PhD in machine learning from the University of Illinois at Urbana-Champaign and has more than 12 years of industry experience.
