Facebook plans to ramp up efforts to fight misinformation ahead of the European Parliament election in May and will partner with German news agency DPA to boost its fact-checking, a senior executive said on Monday.
Facebook has been under pressure around the world since the US election in 2016 to stop the use of fake accounts and other types of deception to sway public opinion.
The European Union last month accused Alphabet's Google, Facebook and Twitter of falling short of their pledges to combat fake news ahead of the European election after they signed a voluntary code of conduct to stave off regulation.
On Monday, Facebook said it was setting up an operations centre that would be staffed 24 hours a day with engineers, data scientists, researchers and policy experts, and coordinate with external organisations.
"They will be proactively trying to identify emerging threats so that they can take action on them as quickly as possible," Tessa Lyons, head of news feed integrity at Facebook, told journalists in Berlin.
Facebook also announced it was teaming up with Germany's biggest news agency, DPA, to help it check the accuracy of posts, in addition to Correctiv, a non-profit collective of investigative journalists that has been flagging fake news to the company since January 2017.
It will also train more than 100,000 students in Germany in media literacy and seek to stop paid advertising from being misused for political ends.
Germany has been particularly proactive in trying to clamp down on online hate speech, implementing a law last year that forces companies to delete offensive posts or face fines of up to EUR 50 million ($56.71 million or roughly Rs. 391 crores).
The issue of misinformation and elections became prominent after US intelligence agencies concluded that Russia tried to influence the outcome of the 2016 US presidential election in Donald Trump's favour, partly by using social media. Moscow denied any meddling.
Lyons said Facebook had made progress in limiting fake news in the last two years, adding that it would increase the number of people working on the issue globally to 30,000 by the end of the year from 20,000 currently.
In addition to human intervention, she said, Facebook is constantly refining its machine learning tools to identify untrustworthy content and limit its distribution.
"This is a very adversarial space, and whether the bad actors are financially or ideologically motivated, they will try to get around and adapt to the work that we are doing," she said.
© Thomson Reuters 2019