Researchers Slam Artificial Intelligence Software That Predicts Emotions

Highlights
  • The group called for a ban on automated analysis of facial expressions
  • Analysis should not be used in hiring and other major decisions: group
  • Action against software-driven "affect recognition" priority: AI Now
A prominent group of researchers alarmed by the harmful social effects of artificial intelligence called Thursday for a ban on automated analysis of facial expressions in hiring and other major decisions. The AI Now Institute at New York University said action against such software-driven "affect recognition" was its top priority because science doesn't justify the technology's use and there is still time to stop widespread adoption.

The group of professors and other researchers cited as a problematic example the company HireVue, which sells systems for remote video interviews for employers such as Hilton and Unilever. It offers AI to analyse facial movements, tone of voice and speech patterns, and doesn't disclose scores to the job candidates.

The nonprofit Electronic Privacy Information Center has filed a complaint about HireVue to the US Federal Trade Commission, and AI Now has criticised the company before.

HireVue said it had not seen the AI Now report and did not answer questions on the criticism or the complaint.

"Many job candidates have benefited from HireVue's technology to help remove the very significant human bias in the existing hiring process," said spokeswoman Kim Paone.

AI Now, in its fourth annual report on the effects of artificial intelligence tools, said job screening is one of many ways in which such software is used without accountability, and typically favours privileged groups.

The report cited a recent academic review of studies on how people interpret moods from facial expressions. That review found that prior scholarship shows such perceptions to be unreliable for multiple reasons.

"How people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation," wrote a team at Northeastern University and Massachusetts General Hospital.

Companies including Microsoft are marketing their ability to classify emotions using software, the study said. Microsoft did not respond to a request for comment Wednesday evening.

AI Now also criticised Amazon.com, which offers analysis on expressions of emotion through its Rekognition software. Amazon told Reuters that its technology only makes a determination on the physical appearance of someone's face and does not claim to show what a person is actually feeling.

In a conference call ahead of the report's release, AI Now founders Kate Crawford and Meredith Whittaker said that damaging uses of AI are multiplying despite broad consensus on ethical principles because there are no consequences for violating them.

© Thomson Reuters 2019
