Researchers develop new tool to improve robots' visual comprehension

A statistical tool can improve 'vision' in robots by helping them better understand the objects in the world around them.

Object recognition is one of the most widely studied problems in computer vision, researchers said.

To improve robots' ability to gauge object orientation, Jared Glover, a graduate student in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), is exploiting a statistical construct called the Bingham distribution.
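
The Bingham distribution itself is a standard statistical object: an antipodally symmetric probability distribution over directions on a unit hypersphere. For 3-D rotations it is usually defined over unit quaternions, with density

p(x; M, Z) = \frac{1}{F(Z)} \exp\!\left(x^{\top} M Z M^{\top} x\right), \qquad x \in S^{3},

where M is an orthogonal matrix of principal directions, Z is a diagonal matrix of concentration parameters and F(Z) is a normalising constant. Because p(x) = p(-x), the distribution automatically respects the fact that the quaternions q and -q describe the same rotation. (This is the textbook definition of the distribution, not a formula taken from Glover and Popovic's paper.)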

In a paper to be presented at the International Conference on Intelligent Robots and Systems, Glover and MIT alumna Sanja Popovic, who is now at Google, describe a new robot-vision algorithm, based on the Bingham distribution, that is 15 percent better than its best competitor at identifying familiar objects in cluttered scenes.

That algorithm, however, is for analysing high-quality visual data in familiar settings.

Because the Bingham distribution is a tool for reasoning probabilistically, it promises even greater advantages in contexts where information is patchy or unreliable.

In cases where visual information is particularly poor, the algorithm offers an improvement of more than 50 percent over the best alternatives.

"Alignment is key to many problems in robotics, from object-detection and tracking to mapping," Glover said.

"And ambiguity is really the central challenge to getting good alignments in highly cluttered scenes, like inside a refrigerator or in a drawer. That's why the Bingham distribution seems to be a useful tool, because it allows the algorithm to get more information out of each ambiguous, local feature," Glover said.

One reason the Bingham distribution is so useful for robot vision is that it provides a way to combine information from different sources, researchers said.
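
One way to see why such fusion is straightforward is that multiplying two Bingham densities simply adds their exponent matrices, so independent orientation cues combine in closed form. The sketch below is a minimal illustration of that property, using a simple isotropic parameterisation and illustrative function names; it is not code from Glover's system.

```python
import numpy as np

def bingham_param(mode_q, kappa):
    # Parameter matrix C for a simple isotropic Bingham-style density
    # p(q) proportional to exp(q^T C q), concentrated near +/- mode_q
    # (the antipodal symmetry matches q and -q encoding the same rotation).
    m = np.asarray(mode_q, dtype=float)
    m /= np.linalg.norm(m)
    return kappa * np.outer(m, m)

def fuse(C1, C2):
    # Multiplying two Bingham densities adds their parameter matrices:
    # exp(q^T C1 q) * exp(q^T C2 q) = exp(q^T (C1 + C2) q)
    return C1 + C2

def mode(C):
    # The most probable orientation is the eigenvector of C with the
    # largest eigenvalue (defined only up to sign, i.e. up to q ~ -q).
    w, V = np.linalg.eigh(C)
    return V[:, np.argmax(w)]

# Two noisy orientation cues, e.g. from two different local features
C_a = bingham_param([1.0, 0.0, 0.0, 0.0], kappa=20.0)   # near the identity
C_b = bingham_param([0.98, 0.2, 0.0, 0.0], kappa=10.0)  # slightly rotated
print(mode(fuse(C_a, C_b)))  # fused orientation estimate (unit quaternion)
```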

Determining an object's orientation entails trying to superimpose a geometric model of the object over visual data captured by a camera, which in Glover's work is a Microsoft Kinect camera that captures a 2-D colour image together with information about the distance of the colour patches.
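
As a rough sketch of what superimposing a model over the data means computationally, a candidate pose can be scored by transforming the model's points and measuring how far they land from the observed depth points. The function below is purely illustrative (brute-force nearest neighbours, hypothetical names); the actual algorithm reasons probabilistically over many such hypotheses rather than computing a single error.

```python
import numpy as np

def alignment_error(model_pts, scene_pts, R, t):
    # Transform the geometric model by a candidate pose (R, t) and measure
    # how closely it overlays the observed depth points: mean distance from
    # each transformed model point to its nearest scene point.
    # (A real system would use a k-d tree instead of brute force.)
    transformed = model_pts @ R.T + t
    d2 = ((transformed[:, None, :] - scene_pts[None, :, :]) ** 2).sum(axis=-1)
    return np.sqrt(d2.min(axis=1)).mean()

# Toy example: score the identity pose against a slightly shifted copy
model = np.random.rand(50, 3)
scene = model + 0.01
print(alignment_error(model, scene, np.eye(3), np.zeros(3)))
```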

In experiments involving visual data from particularly cluttered scenes, depicting the kinds of environments in which a household robot would operate, Glover's algorithm had about the same false-positive rate as the best existing algorithm: about 84 percent of its object identifications were correct, versus 83 percent for the competition.

But it was able to identify a significantly higher percentage of the objects in the scenes: 73 percent versus 64 percent.
