Facebook believes that describing concrete instances of false news could improve news literacy, or at minimum demonstrate that it is taking serious action against the problem of misinformation. The company has launched "The Hunt for False News," in which it examines viral hoaxes, relays the verdicts of its third-party fact-checkers, and explains how each story was tracked down. The first installment lays bare stories where fabricated captions were attached to old videos and images, real facts were wildly exaggerated, or innocent people were wrongly identified as offenders.
The blog launched after three recent studies found that the volume of misinformation on Facebook has fallen by roughly half since the 2016 election, while Twitter's volume hasn't dropped as sharply. Regrettably, the remaining half still poses a threat to elections, political unity around the globe, public safety, and civil discourse.
One of The Hunt's first examples debunks a story claiming that a man who posed for a photo with one of Brazil's politicians was the one who stabbed the presidential candidate. Facebook explains how its machine-learning models flagged the photo, after which the Brazilian fact-checker Aos Fatos labeled the story false. Facebook now automatically detects and demotes re-uploads of the image.
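Facebook's actual detection pipeline isn't public, but near-duplicate image matching is commonly done with perceptual hashing. The dependency-free sketch below uses an average hash over a hypothetical 8×8 grayscale grid as a simplified stand-in for whatever models Facebook really uses; the grid size and distance threshold are illustrative assumptions, not Facebook's parameters.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    Each bit is 1 if the pixel is brighter than the grid's mean, so
    small edits (recompression, brightness shifts) barely change it.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def is_reupload(hash_a, hash_b, threshold=5):
    """Treat two images as near-duplicates if few hash bits differ."""
    return hamming(hash_a, hash_b) <= threshold

# Demo: a flagged image vs. a slightly brightened re-upload of it.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
brighter = [[min(255, p + 10) for p in row] for row in original]
print(is_reupload(average_hash(original), average_hash(brighter)))  # True
```

Because the hash thresholds on the mean rather than absolute brightness, a uniformly brightened copy still matches, which is exactly the property needed to catch simple re-uploads.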
Although the "Hunt" series is instructive, it cherry-picks random fake stories and videos from a wide time period. It would be more useful if Facebook applied this approach to misinformation currently circulating around major news stories.
Recently, New York Times business and technology columnist Kevin Roose began using Facebook's CrowdTangle tool to highlight the top ten performing stories on topics like the Brett Kavanaugh hearings.
If Facebook wanted to be more transparent about its failures and successes around fake news, it would publish monthly lists of the hoaxes with the highest readership and then apply the Hunt's format to explain how each one was exposed. That could help dispel myths spread by fake headlines, which reach users even when they don't click through to read the story.
Fortunately, Facebook's efforts to demote suspicious content, including changes to the News Feed algorithm, a doubling of its security staff, and third-party fact checks, appear to be paying off.
Here are a few encouraging examples:
- A study by NYU and Stanford found that Facebook likes, comments, and shares of links to 570 fake news sites dropped by more than half after the 2016 election, while circulation of those sites on Twitter continued to rise.
- A University of Michigan study devised a metric called the "Iffy Quotient" to measure the volume of content from fake news sites distributed on Twitter and Facebook.
- The French newspaper Le Monde examined 630 French websites across Reddit, Facebook, Pinterest, and Twitter, and found that Facebook engagement with sites labeled "dubious or unreliable" has fallen by half since 2015.
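A metric like the Iffy Quotient boils down to asking what share of a platform's top engagement goes to questionable sites. The sketch below computes a simplified stand-in for that idea: the fraction of total engagement earned by domains on an "iffy" list. The domain list and engagement numbers are invented for illustration and are not from the Michigan study.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of dubious domains (illustrative only).
IFFY_DOMAINS = {"totally-real-news.example", "shocking-truths.example"}

def iffy_share(posts):
    """Return the fraction of total engagement earned by iffy domains.

    `posts` is a list of (url, engagement_count) pairs, a simplified
    stand-in for a platform's top-performing links.
    """
    total = sum(count for _, count in posts)
    if total == 0:
        return 0.0
    iffy = sum(
        count
        for url, count in posts
        if urlparse(url).hostname in IFFY_DOMAINS
    )
    return iffy / total

posts = [
    ("https://totally-real-news.example/aliens", 300),
    ("https://reuters.com/markets", 500),
    ("https://shocking-truths.example/cure", 200),
]
print(iffy_share(posts))  # 0.5
```

Tracking this share over time, as the Michigan researchers do with a curated site list, shows whether a platform's ranking changes are actually starving dubious sources of reach.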
Admittedly, Twitter's apparent failure to tackle trolling and misinformation is not a high bar for Facebook to measure itself against. Still, it is helpful that Facebook is finding ways to spot and curb fake news; the public must embrace these efforts for society to make progress, and accepting them can be difficult when they clash with some politicians' stubbornly held beliefs.