Fake-Porn Opponents Are Fighting Back With AI Itself


The best hope for fighting computer-generated fake-porn videos might come from a surprising source: the artificial intelligence software itself.

Technical experts and online trackers say they are developing tools that could automatically spot these "deepfakes" by using the software's skills against it, deploying image-recognition algorithms that could help detect the ways their imagery bends belief.

The Defense Advanced Research Projects Agency, the Pentagon's high-tech research arm known as DARPA, is funding researchers with hopes of designing an automated system that could identify the kinds of fakes that could be used in propaganda campaigns or political blackmail. Military officials have advertised the contracts - code-named "MediFor," for "media forensics" - by saying they want "to level the digital imagery playing field, which currently favours the manipulator."

The photo-verification start-up Truepic checks for manipulations in videos and saves the originals into a digital vault so other viewers - insurance agencies, online shoppers, anti-fraud investigators - can confirm for themselves. The company wants to embed its software across a range of sensors and social-media platforms so as to validate footage against what it calls a "definitive point of truth."


The company's chief executive, Jeffrey McGregor, said its engineers are working to refine detection techniques by looking for the revealing giveaways of fakes: the soft fluttering of hair, the motion of ears, the reflection of light in a subject's eyes. One Truepic computer-vision engineer designed a test to look for the pulse of blood in a person's forehead, he said.
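Giveaways like these are typically measured with simple facial-landmark geometry rather than anything exotic. As an illustrative sketch (this is the widely used eye-aspect-ratio heuristic from deepfake-detection research, not necessarily Truepic's own pipeline), a detector can flag footage in which the eyes never close by tracking how the vertical spread of six eye landmarks collapses during a blink:

```python
import math

def eye_aspect_ratio(eye):
    # `eye` is six (x, y) landmarks around one eye, ordered as in the
    # common 68-point face-landmark convention: corners at indices 0 and 3,
    # upper lid at 1 and 2, lower lid at 4 and 5.
    # EAR = (|p1-p5| + |p2-p4|) / (2 * |p0-p3|)
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Hypothetical landmark coordinates for demonstration only.
open_eye   = [(0, 0), (1, -1.0), (2, -1.0), (3, 0), (2, 1.0), (1, 1.0)]
closed_eye = [(0, 0), (1, -0.1), (2, -0.1), (3, 0), (2, 0.1), (1, 0.1)]

print(eye_aspect_ratio(open_eye))    # high ratio: eye is open
print(eye_aspect_ratio(closed_eye))  # ratio near zero: a blink
```

In practice the ratio is computed per frame from a landmark detector's output; a face that never dips below a blink threshold across thousands of frames is a statistical red flag, which is exactly the tell early deepfakes exhibited.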

However, the rise of fake-spotting has spurred a technical blitz of detection, pursuit and escape, in which digital con artists work to craft ever more deceptive fakes. In some recent pornographic deepfakes, the altered faces appear to blink naturally - a sign that creators have already conquered one of the telltale indicators of early fakes, in which the actors never closed their eyes.

Hany Farid, a Dartmouth College computer-science professor and Truepic adviser, said he receives a new plea every day from someone asking him to investigate what they suspect could be a deepfake. But the group of forensic specialists working to build these systems is "still totally outgunned," he said.

The underlying technology also continues to evolve: In September, researchers at DeepMind, the trailblazing AI firm owned by Google's parent company Alphabet, said they had trained the programs behind deepfakes, known as generative adversarial networks, or GANs, "at the largest scale yet attempted," allowing them to create high-quality fake images that looked more realistic than ever.
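The cat-and-mouse dynamic is baked into the GAN framework itself: a generator network is trained against a discriminator network whose only job is to tell real images from fakes. A classical result from the original GAN paper (Goodfellow et al., 2014) shows why a sufficiently good generator defeats detection outright - for a fixed generator, the best possible discriminator assigns D*(x) = p_data(x) / (p_data(x) + p_g(x)), which collapses to a coin flip wherever fake samples match the real data distribution. The sketch below illustrates this on two toy one-dimensional Gaussians (the distributions are hypothetical, chosen only for the demonstration):

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Density of a normal distribution N(mu, sigma^2) at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_discriminator(x, p_data, p_gen):
    # For a fixed generator, the discriminator that best separates real
    # from fake outputs D*(x) = p_data(x) / (p_data(x) + p_g(x)).
    return p_data(x) / (p_data(x) + p_gen(x))

# Toy setup: "real" data centred at 0, "generated" data centred at 2.
p_real = lambda x: gaussian_pdf(x, 0.0, 1.0)
p_fake = lambda x: gaussian_pdf(x, 2.0, 1.0)

# At x = 1.0 the two densities are equal by symmetry, so the ideal
# detector can do no better than chance there.
print(round(optimal_discriminator(1.0, p_real, p_fake), 3))  # 0.5
```

Training pushes the generator toward exactly that overlap, which is why each generation of detectors tends to work only until the next generation of fakes catches up.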

"The counterattacks have just gotten worse over time, and deepfakes are the accumulation of that," McGregor said. "It will probably forever be a cat-and-mouse game."

© The Washington Post 2018




