Google may soon no longer need geo-tagging information to know where a photo was taken. The Mountain View-based company has devised a new deep-learning system called PlaNet that can tell the location just by analysing the pixels of the photo.
A new effort at Google, led by Tobias Weyand, a computer vision scientist at the company, has produced a neural network fed with over 91 million geo-tagged images from across the planet, making it capable of spotting patterns and telling where an image was taken. It can also recognise different landscapes, locally typical objects, and even plants and animals in photos.
PlaNet analyses the pixels of a photo and then cross-references them with the millions of images in its database to look for similarities. While that might sound like a tedious job to most of us, for a neural network that consumes only about 377MB of space, it is not really an issue.
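Under the hood, the PlaNet paper frames this as a classification problem: the globe is carved into cells (Google's S2 geometry in the actual work), and the network outputs a probability for each cell, with the guess being the centre of the most likely one. Here is a minimal sketch of that idea; the toy grid, variable names, and random "model output" below are illustrative stand-ins, not Google's actual code:

```python
import numpy as np

# Toy grid: 10-degree latitude/longitude cells (a stand-in for the
# S2 cells the PlaNet paper uses to partition the Earth).
LAT_CELLS, LON_CELLS = 18, 36  # 18 * 36 = 648 cells covering the globe

def cell_centre(cell_id):
    """Return the (lat, lon) centre of a toy grid cell."""
    row, col = divmod(cell_id, LON_CELLS)
    lat = -90 + (row + 0.5) * 10
    lon = -180 + (col + 0.5) * 10
    return lat, lon

def predict_location(cell_probs):
    """Pick the most probable cell and return its centre as the guess."""
    best = int(np.argmax(cell_probs))
    return cell_centre(best)

# Stand-in for the network's softmax output over all cells.
rng = np.random.default_rng(0)
probs = rng.random(LAT_CELLS * LON_CELLS)
probs /= probs.sum()
print(predict_location(probs))  # e.g. (-34.5, 23.5)
```

The appeal of the classification framing is that a full probability map comes for free, so ambiguous photos (a generic beach, say) can spread their probability mass across several plausible cells instead of committing to one point.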
In a trial run with 2.3 million images, PlaNet was able to tell the country of origin with 28.4 percent accuracy and, more interestingly, the continent of origin with 48 percent accuracy. It could also pin down about 3.6 percent of the images with street-level accuracy and 10.1 percent with city-level accuracy. Sure, it isn't 100 percent correct yet, but neither are you. Besides, PlaNet is getting better.
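What "street-level" or "city-level" accuracy means in practice is a distance threshold between the guess and the photo's true location; the PlaNet paper uses roughly 1km for street, 25km for city, 750km for country, and 2,500km for continent. A minimal sketch of that scoring, with hypothetical helper names:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Distance thresholds per granularity level, as reported in the paper.
THRESHOLDS_KM = {"street": 1, "city": 25, "country": 750, "continent": 2500}

def accuracy_levels(pred, truth):
    """Return which granularity levels a single prediction satisfies."""
    d = haversine_km(*pred, *truth)
    return {level: d <= t for level, t in THRESHOLDS_KM.items()}

# Example: a guess ~15km from the true spot counts as city-level,
# but not street-level.
print(accuracy_levels((48.86, 2.35), (48.99, 2.40)))
```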
In a number of tests, PlaNet has also beaten the best of us, humans. The reason, Weyand explains to MIT Tech Review, is that PlaNet has seen more places than any of us have, and has "learned subtle cues of different scenes that are even hard for a well-travelled human to distinguish."