Facebook Says Its AI Systems Spot More Offensive Photos Than Humans

In a bid to stem hate speech, Facebook has revealed that its Artificial Intelligence (AI) systems are spotting more offensive photos than humans on its platform.

According to a TechCrunch report, nearly 25 percent of Facebook's engineers now regularly use the company's internal AI platform to build features and run the business, and one of its most effective applications is detecting offensive photos.

"One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human," Joaquin Candela, Facebook's director of engineering for applied machine learning was quoted as saying.

"This AI system helps rank News Feed stories, read aloud the content of photos to the vision impaired and automatically write closed captions for video ads that increase view time by 12 percent," he informed.

AI can finally help Facebook tackle hate speech.

Facebook, along with Twitter, YouTube and Microsoft, has also agreed to a new European hate speech code that requires the companies to review "the majority of" hateful online content within 24 hours of being notified, and to remove it if necessary.

The new rules, announced by the European Commission, also oblige the tech companies to identify and promote "independent counter-narratives" to hate speech and propaganda published online.

According to The Verge, hate speech and propaganda have become a major concern for European governments following terrorist attacks in Brussels and Paris and amid the ongoing refugee crisis.

"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech," Vera Jourova, the EU commissioner for justice, consumers, and gender equality, said in a statement.

"Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and to spread violence and hatred," she added.

Digital rights groups, however, have criticised the agreement in a statement. "In short, the 'code of conduct' downgrades the law to a second-class status, behind the 'leading role' of private companies that are being asked to arbitrarily implement their terms of service," the statement read.

"This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism," it added.
