Facebook is fighting back against hate speech, says senior official
The global platform is turning to artificial intelligence to combat fake news and terrorist propaganda
Facebook is investing billions of dollars to fight the scourge of fake news and hate speech on the social media platform, a senior official said.
Jesper Doub, Facebook's director of news partnerships for Europe, the Middle East and Africa, insisted the global organisation had a 'duty' to make sure misinformation was rooted out.
He said Facebook was using artificial intelligence to combat the threat of terrorist organisations hijacking the site to disseminate propaganda.
“It’s not just in this region, if you look across the globe you will see all kinds of challenges arising from misinformation,” said Mr Doub.
“In the last year Facebook spent more on implementing systems and measures against misinformation than the entire company was worth when it first went public.
“We are talking about the investment of billions of dollars. The reason we are doing this is because we have a duty to make sure we get it right.”
His comments came after Facebook, along with Twitter and Google, faced criticism from the EU over the detection of “co-ordinated, inauthentic behavior” on their platforms ahead of the European Parliament elections in May.
The EU said it had found evidence of “continued and sustained disinformation activity” from Russian sources.
There were more than 600 groups and Facebook pages in countries including Spain, Germany, Poland, the UK and France that were said to be spreading disinformation and hate speech leading up to the elections.
These pages generated a combined 763 million views. Facebook also removed 2.2 billion fake accounts in the first quarter of 2019, a figure almost double that of the previous three months.
Mr Doub was in Dubai to talk about his company’s journalism project, which aimed to promote better-sourced and more accurate content on Facebook’s platforms.
“We are training artificial intelligence right now to discover patterns, and one thing we especially do not want on the platforms is terrorist content,” he said.
“Our machines are currently in a position to remove 99 per cent of Al Qaeda and Isis terrorist content before it hits the platform.
“You can never get to 100 per cent but we are in a very good place to control these things.”
The next targets in the Facebook team's sights are hate speech and fake news spread by white supremacists.
“We are going to be looking at that as well as instances of self harm,” he said.
“It is important to us that we are vigilant and make sure this type of content does not appear.”
Of all the content identified as hate speech in the company’s most recent report, covering July to September 2018, Facebook’s own team proactively flagged 52 per cent internally before anyone reported it.
That figure, said Mr Doub, was only 24 per cent in the same period during the previous year. In the third quarter of 2018, the company also removed 754 million fake accounts.
Another bone of contention for many was Facebook’s policy that numerous instances of fake news were not removed from the site, merely voted down.
Mr Doub defended his company’s stance.
“If something is not going against our community standards but was still misinformation we would allow it to stay on,” he said.
“It gets voted down towards the bottom of the newsfeed which we find reduces the number of people looking at it by 80 per cent.”
He gave the example of someone posting that the world was flat, which, while clearly untrue, was not in breach of the site’s rules on discrimination by gender, race or religion.
Updated: June 20, 2019 05:43 PM