The monoliths of social media gave the narrowest of definitions when challenged in Washington last week, writes Damien McElroy
Fake news? A better term would be a multi-headed hydra of deceit
When Qatar secured a meeting with the Louvre museum management in Paris on Monday, its officials clearly had an agenda.
Qatari media soon afterwards reported that the French museum had apologised over the removal of a map from the Children’s Museum at Louvre Abu Dhabi that had mistakenly omitted the Qatari peninsula.
No such apology had been offered by the French. A few years ago the matter might have gone unnoticed by the outside world. But fake news spreads far and wide in the era of social media. To paraphrase Winston Churchill, a lie had gone around the world many times before the truth had got its boots on.
In this case, the Qatari official Ali bin Dhameeh, head of its National Human Rights Committee, used Twitter to spread the false claim. On Wednesday, in a highly unusual intervention, the Louvre in Paris knocked Doha’s story down, dismissing outright the Qatari version of the meeting.
Leave aside the issue that the National Human Rights Committee has past form in fabrication. Just a few weeks ago, it leaked internal summaries from a UN team that visited Qatar last year. It presented these as formal findings, but that was a lie: the documents were merely a record of what the team had been told by their hosts.
Putting the spotlight on social media companies highlights their role in distributing those reports far and wide.
Can these platforms take responsibility for ensuring such material is denied an audience?
In a highly revealing set of exchanges, 14 senior social media company executives and other experts were grilled by British MPs visiting Washington last week. The exchanges revealed a terrifying gap between the scale of the fake news problem and the providers' responses.
The monoliths of social media gave the narrowest of definitions of what is a multi-headed hydra of deceit.
Contrary to the perception that the companies are under pressure over fake news, they appeared to hide behind activities that address just a handful of well-trailed issues.
Harvard’s Claire Wardle, one of the experts, has argued the term fake news should be scrapped as inadequate. A far more developed understanding of the threats is needed.
“Much of the content used as examples in debates on this topic is not fake, it is genuine but used out of context or manipulated,” she told one newspaper. “Similarly, to understand the entire ecosystem of polluted information, we need to consider far more than content that mimics ‘news’.”
The Qatari example above could serve as a textbook example of one kind of mendacious news.
Do social media companies have this sort of problem in their sights? The answer is no.
Instead there is an over-reliance on creating an army of moderators. Juniper Downs, a YouTube executive, told the hearing these were “mission-critical” for the firm.
Yet Damian Collins, the tenacious chairman of the British delegation, pointed out YouTube spent just 0.1 per cent of its annual revenues on removing suspect content. “It’s a sticking plaster over a wound,” he declared.
The confrontation was telling because it cut through a numbers debate that obscures more than it illuminates.
The companies are very adept at producing headline figures that point to effective remedial action. Google said it has 10,000 raters – specialist employees – on its books to thwart the misuse of its search engine. Facebook claimed it had more than 14,000 involved in fighting fake content.
Other highlights were that thousands of fake accounts were shut down by Facebook ahead of last year’s French and German elections. Twitter meanwhile weighed in with an estimate that 942 messages were generated by Russian trolls ahead of the UK vote on Brexit. In total, Russian attempts to meddle in the vote amounted to 0.005 per cent of the entire number of accounts tweeting about the referendum.
Some argue that until the providers tackle the problem of fake accounts, the problem will not be cracked. An internet citizenship token is one means of tying accounts to individuals.
Just as fundamental is the nature of the content. While there is no guarantee that veracity will ever be 100 per cent, systems must be found to improve transparency.
Would it be possible to devise a verification system that allowed news organisations to establish their credentials as reliable sources of information? Would social media and web platforms accept outside involvement in how content is promoted or filtered? Would this be done in a framework that rebalanced the diet of news on accounts or in feeds?
Awareness is only one facet of addressing the problem. The next stage of this battle is clear, or should be. It is to ensure the platforms are not abused, whether randomly or systematically. That will take far more effort than is currently on display.