AI May Soon Monitor Your Live Videos on Twitter, Facebook

"She got caught up in the likes" was how a prosecutor described the 18-year-old Ohio woman accused of live-streaming the alleged rape of her 17-year-old friend. There is no question that live-stream apps, such as Periscope and Facebook Live Video, carry the risk of exposing audiences to heinous images. In rare but harrowing incidents, users have watched as suicide, rape or domestic violence unfold in real time before their eyes.

The job of watching and removing violent or pornographic content from these live apps, as well as video sites like YouTube, has primarily been a human undertaking. Workers tasked with content moderation review hours of video flagged as inappropriate by users, taking down anything that violates guidelines. It's a grueling and at times horrific job, and the sheer amount of content makes it a challenge for manpower alone. Now, artificial intelligence is poised to help with this task.

Several companies, including Twitter and Facebook, are developing software that can intelligently watch video, with the two social networks building it for use on their live-stream services, Periscope and Facebook Live Video.

Companies such as Clarifai and Dextro have made huge gains in developing this kind of sophisticated software as well. Dextro, a New York-based start-up, uses video recognition AI to easily search through content on live-stream apps. It doesn't monitor for inappropriate content right now, instead scanning for video that might be interesting and relevant to a company's brand. But the technology or similar software could easily be used to weed out porn or violence.

Co-founder David Luan said the challenge lies in creating software that can interpret not just still images but moving images, audio and other "signifiers" that demonstrate what is happening in the video. "It's like trying to re-create a human's experience of watching these videos," Luan said.

Companies traditionally relied on tags to indicate the nature of a video's content to a computer system, but Luan said reducing the meaning of a video to a couple of keywords does not accurately capture its full scope.

"What is challenging about these videos is that they are much more complex than just a single picture. Even though it is a series of images all one after the other, there's the motion element, the audio, so much of that gets thrown away if you just analyze image after image," he said. "So you really need to treat it like a whole piece and analyze that," which relying on tags cannot achieve.

Dextro's software, on the other hand, can recognize objects and signifiers in a frame, such as a gun in a potentially violent video, without human intervention. And the speed of AI recognition gives it a huge leg up on human monitors: Luan's software can analyze a video within 300 milliseconds of posting.
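As an illustration only, and not a description of Dextro's actual pipeline, a bare-bones version of this kind of frame-level flagging might look like the Python sketch below. The classify_frame function stands in for a trained image model, and the label names, sampling rate and threshold are invented for the example; the motion and audio analysis Luan emphasizes is deliberately left out.

    # Illustrative sketch only: frame-level flagging of a live video source.
    # classify_frame is a placeholder for a trained image model; the labels,
    # sampling rate and threshold below are invented for this example.
    import cv2  # OpenCV is used here only to pull frames from a video source

    FLAGGED_LABELS = {"weapon", "nudity", "graphic_violence"}  # hypothetical labels

    def classify_frame(frame):
        """Stand-in for a real model that would return label -> confidence scores."""
        raise NotImplementedError("plug in an image classifier here")

    def moderate_stream(source, sample_every=30, threshold=0.8):
        """Sample one frame in every `sample_every`, classify it, and yield
        (frame_index, label, score) for anything that crosses the threshold."""
        capture = cv2.VideoCapture(source)
        index = 0
        while True:
            ok, frame = capture.read()
            if not ok:  # end of stream or read error
                break
            if index % sample_every == 0:
                scores = classify_frame(frame)
                for label, score in scores.items():
                    if label in FLAGGED_LABELS and score >= threshold:
                        yield index, label, score
            index += 1
        capture.release()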

Cortex, Twitter's division focused on Periscope-monitoring AI, has been working on software that can watch and recommend live video since its launch in July 2015. "Periscope has been working with Twitter's Cortex team to experiment with ways to categorize and identify content in live broadcasts," a Twitter spokesman said in a statement. "The team is focused on pairing that advanced technology with an editorial approach to provide a seamless discovery experience on Periscope." Cortex could not confirm when it would roll out the product.

Facebook confirmed that it does not currently use AI to filter out pornographic or violent videos and declined to comment on whether such software is in development.

Human content moderation has traditionally been outsourced to countries such as the Philippines. Even in a future where AI does most of the work, Luan sees a role for human intervention. "Humans help to retrain the algorithm and help it get better over time," he said of his company's video-watching AI.
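Luan does not spell out how that feedback loop works, but in broad strokes it could resemble the sketch below: human moderators confirm or overturn the model's flags, and their verdicts become labeled training data for the next version of the model. All of the names here (Flag, Review, TrainingSet, add_reviews) are invented for illustration.

    # Illustrative sketch of a human-in-the-loop feedback cycle; every name
    # here is invented for the example, not taken from any real system.
    from dataclasses import dataclass, field

    @dataclass
    class Flag:
        clip_id: str
        predicted_label: str
        confidence: float

    @dataclass
    class Review:
        clip_id: str
        correct_label: str  # the moderator's verdict, which may overturn the prediction

    @dataclass
    class TrainingSet:
        examples: list = field(default_factory=list)

        def add_reviews(self, flags, reviews):
            """Pair each moderator verdict with the model's original flag so a
            retraining job sees both the prediction and the human ground truth."""
            verdicts = {r.clip_id: r.correct_label for r in reviews}
            for flag in flags:
                if flag.clip_id in verdicts:
                    self.examples.append(
                        {"clip_id": flag.clip_id,
                         "model_said": flag.predicted_label,
                         "human_said": verdicts[flag.clip_id]}
                    )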

But in the aftermath of widely viewed videos and live-streams of the deaths of Philando Castile and Alton Sterling at the hands of the police, the question of when censorship is ethical or appropriate poses a challenge for tech developers in the role of content moderator. How would a machine handle those videos if it eventually took over as the primary moderator?

"Nations vary greatly in their degrees of restriction on press and freedom of speech to begin with, and even within our own country there are certain lifestyles whose practitioners gather in community sites that would be considered indecent by one or more religious groups," said Malcolm Harkins, an information privacy expert. "So the definition of indecent content would of necessity be done by humans, and is likely to be the most complex and challenging part of building the application."

© 2016 The Washington Post
