The 2016 U.S. presidential election took many observers by surprise. From the primaries to the general election, upsets overturned previously held notions – from scandals that once would have been crushing having little impact, to a candidate winning the popular vote by a wide margin yet losing the election, to polls proving unreliable as predictors. Another surprise was the propagation of fake news.

Indeed, misinformation and hoaxes spread more widely during this election than in past ones. Fanning the flames, these deceitful articles went viral, reaching broad audiences across social media. The issue has gained national attention, as some suggest fake news influenced the election, and others are now calling on social media companies to curb the problem.

Companies like Google and Facebook are taking steps to limit the spread of misinformation. Let’s take a look at what they’ve done.

The fake news problem

In a new trend, people are creating websites with the sole purpose of publishing highly shareable articles regardless of their accuracy. This practice has led to stories going viral, like Pope Francis endorsing Donald Trump or PepsiCo’s CEO telling Trump fans to “take their business elsewhere.” For the record, neither the pope nor the CEO made those statements. After Facebook put an algorithm in charge of its “Trending” topics section, the site itself even shared a fake, sensationalist story about Fox News anchor Megyn Kelly. And some US intelligence officials now believe Russia is behind some misleading stories.

Oops… These are just a few examples of the problems fake news causes.

Beyond being a nuisance, this trend is troubling. For instance, a BuzzFeed analysis found that in the presidential campaign’s final months, the top-performing fake election news stories generated more engagement than the top stories from 19 major news outlets, including the New York Times, the Washington Post, the Huffington Post, and NBC News. Facebook is receiving a lot of heat for contributing to this problem. And the criticism may be fair: an analysis of more than 20 known fake news sites revealed that Facebook referrals accounted for half of their traffic, resulting in millions of likes. The articles have even led to violence – a man fired an assault rifle inside a pizzeria while allegedly “self-investigating” a conspiracy theory.

Social platforms curbing misinformation

Under this scrutiny, the companies are looking for ways to curb the problem. It’s a delicate balancing act, as they don’t want to be perceived as censors. But here’s what they’ve done so far.

Facebook

Since it’s getting the most heat, Facebook has announced several ways it is trying to reduce the problem. It has updated its advertising policies to include fake news in its ban on deceptive and misleading content, and it is empowering users to report false stories. NPR also reported, “Facebook says it’s working with fact-checking groups to identify bogus stories — and to warn users if a story they’re trying to share has been reported as fake.”

But some argue that Facebook could be doing more. Additional suggestions include:

  • Crowdsourcing to evaluate news sources and articles
  • Ensuring a variety of friends’ posts appear in a news feed – limiting the problem of hearing only from likeminded people
  • Rating a reporter’s credibility to gauge article trustworthiness


Some of my Facebook friends suggest highlighting trusted sources the way Twitter indicates verified accounts – putting a check mark or another symbol next to a shared link from a reliable source.

However, while these measures may help Facebook save face, they likely won’t have much impact and may just alienate some users. Well intentioned as they are, they could make Facebook appear biased, rendering the efforts fruitless.

Google

Striking at the lifeblood of fake news, Google announced in November 2016 that it will stop these sites from using its ad software. The intent is to cut popular clickbait websites off from the ads that make them lucrative. In a statement, a Google spokesperson told Reuters:

“Moving forward, we will restrict ad serving on pages that misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property.”

This could blunt the incentive to publish misleading articles, as Google AdSense is some publishers’ primary source of revenue.

Twitter

Over the past several months, Twitter has come under fire for enabling hate speech and trolling. There is an overlap here: so-called alt-right accounts spread abusive messages as well as false information. The crux for the company has been removing these problem accounts and making it easier for users to report abusive language.

As a result, Twitter has suspended several prominent accounts linked to the alt-right movement.

While Twitter has focused on abusive accounts, fake news has not been a major priority. Twitter’s former head of news, Vivian Schiller, says it would be a mistake for social media companies to independently define what is fake news and “shut it off.” It would be a “very, very slippery slope,” she argues.

While some major companies work to curb hoaxes and propaganda, others claim not to experience the problem. For instance, BuzzFeed reported that neither Snapchat nor Apple News is fertile ground for fake news. On Snapchat, user-generated content disappears after a short while; posts also appear chronologically and are not shareable.

As for Apple News, it maintains tighter control over content. The service reviews publishers, lets users flag fake news or hate speech, and curates any content that gets featured. All this helps shield its 70 million active users from false information.

Having trouble assessing whether an article is fake? Read NPR’s guide on spotting misleading news.