Undercover documentary reveals failures in weeding out harmful content
Facebook failed to act on anti-Muslim abuse
Facebook failed to remove an anti-Muslim video from the network after a company moderator said that immigrants were “less protected” than other people, secretly recorded footage revealed on Tuesday.
An undercover reporter who worked for six weeks at Cpl Resources – Facebook’s largest centre for moderating UK content – also found that extremists with large followings were given the same special consideration as governments and news organisations.
The Facebook page of British far-right leader Tommy Robinson was given such protection, which meant that frontline moderators could not directly remove material and had to pass it to a more senior team.
Mr Robinson was jailed for 13 months in May for contempt of court after broadcasting an hour-long video over Facebook that risked causing a criminal trial to collapse.
In one incident, an undercover reporter filmed a moderator for the network reviewing a video that said, in abusive terms, that “Muslim immigrants” should return to their own countries.
The moderator said the video should be ignored because "they're still Muslims but they're immigrants, so that makes them less protected".
A moderator filmed in the programme said: "If you start censoring too much then people stop using the platform. It's all about money at the end of the day."
Facebook, the world's biggest social network with more than two billion users, called the practices "mistakes" that do not "reflect Facebook's policies or values".
The documentary, made for Channel 4’s Dispatches, was due to air on Tuesday in the UK and comes as the social network faces intense political pressure across Europe over its handling of personal data and its efforts to stop the spread of harmful content.
Chief operating officer Sheryl Sandberg said in January that Facebook had to do better to stem the spread of hate speech and attempts to manipulate voters via the social network.
The company said it was investing in artificial intelligence and hiring up to 20,000 people by the end of 2018 to identify and remove harmful content.