At first, there was a text box. You typed in your query, and in return, you’d get millions of search results, ordered by their relevance and popularity, among several other factors. When the mobile revolution came, thanks in no small way to Android democratising the principles behind the iPhone, the way people interacted with technology changed. But the way information was understood, processed, and sorted on the Internet remained largely text-bound.
“All of Google was built because we started understanding text and webpages,” CEO Sundar Pichai said during his keynote address at Google I/O 2017, the company’s developer-focused conference. “The fact that computers can understand images and video has profound implications for our core mission.”
Pichai was elaborating on the company’s latest innovation, Google Lens. Just as Google Search knows what you’re looking for, and Google Home can pick up on your voice, Google Lens can understand what you’re looking at. Point it at a flower, and it’ll tell you its name and genus. Point it at a restaurant sign, and it can pull up reviews from Google Maps; point it at a billboard advertising a concert, and it can add the event to your Google Calendar.
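What does it take for a computer to “understand what you’re looking at”? At its simplest, it means running a photo through a neural network trained on labelled images. Here’s a minimal sketch of that idea; Google hasn’t disclosed the models behind Lens, so the pretrained MobileNetV2 network and the file name below are purely illustrative stand-ins.

```python
# A rough sketch of Lens-style recognition: classifying a photo with a
# pretrained convolutional network. MobileNetV2 and "flower.jpg" are
# illustrative stand-ins; Google hasn't disclosed Lens' actual models.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("flower.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])

preds = model.predict(x)
# decode_predictions maps the 1,000 ImageNet class scores to readable labels.
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2%}")
```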
Google Lens is coming to the Assistant and Photos first, sometime later this year. All of it has been made possible by machine learning, and Pichai reminded everyone at I/O that Google is now an AI-first company. With that, it’s rethinking all its products from the ground up, just as the introduction of mobile forced it to reimagine interaction.
Deep Learning: Teaching Computers to See Like People
This isn't exactly new. Google has been using machine learning for a long time, and talking about it openly for a while now. At I/O last year, Google unveiled the Assistant, its first attempt at making its work in AI more visible. And it's not alone either. Just about every company is investing in machine learning today, including giants like Microsoft, Intel, and IBM. AI and machine learning are being used to recommend fashion purchases and to improve our phone keyboards. Pichai's words, however, set the course for developers and device makers on the world's largest mobile platform, which makes them significant.
Machine learning already powers several of Google’s latest features as well. It can be seen in Photos’ search functionality, which is learning to recognise people, places, and objects, even as it deals with some hiccups. It helps craft quick email replies, a feature now rolling out to all Gmail users. Machine learning also helps provide personalised search results, suggest videos you’ll enjoy on YouTube, and identify the multiple users talking to a Google Home.
On the Google Pixel, it has helped Google deliver one of the best smartphone cameras out there, thanks to its understanding of an image’s makeup. At his I/O keynote, Pichai claimed that Google’s computer vision systems have surpassed humans at the task of image recognition. Soon, they’ll be able to erase an obstruction (say, a fence) from a picture, as if it never existed.
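Google hasn’t said how that obstruction removal will work under the hood, but a classical cousin of the technique is image inpainting, which fills masked-out pixels from their surroundings. A minimal sketch using OpenCV’s cv2.inpaint, with hypothetical file names:

```python
# Image inpainting as a stand-in for Photos' promised fence removal.
# Both file names are hypothetical; the mask marks the pixels to erase
# in white on a black background.
import cv2

img = cv2.imread("fenced_photo.jpg")
mask = cv2.imread("fence_mask.png", cv2.IMREAD_GRAYSCALE)

# Telea's fast-marching method propagates surrounding texture and colour
# into the masked region.
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("defenced_photo.jpg", result)
```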
Amazon, Facebook, and Microsoft are all investing heavily in the field too, and are making strides with their own approaches. Amazon has the most widely recognised and best-supported digital assistant of all in Alexa, Facebook has been deepening its commitment to AI, and Microsoft seeks to embed its AI assistant Cortana in everything from Office to Xbox.
Amazon and Microsoft are also the one-two leaders in the corporate cloud computing market, which Pichai believes is crucial as a future revenue stream. “If you are running cloud at scale, that means you are powering the data platforms that will transform many industries,” he told Wired in an interview. “Will there be an economic opportunity there? I absolutely think it will be big.”
That’s why Pichai spent considerable time talking about what Google has to offer in that regard, in what is ultimately an annual pitch to keep developers on its turf, and excited about the prospects. The biggest announcement among those was the ‘Cloud TPU’, built on Tensor Processing Units: custom chips designed for machine learning that are much faster and more power-efficient at it than general-purpose CPUs and GPUs.
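From a developer’s seat, using one of these chips looks much like regular TensorFlow. Here’s a sketch using today’s tf.distribute API, which postdates this announcement; the TPU address is a placeholder that, in practice, comes from your Google Cloud setup.

```python
# Sketch: pointing TensorFlow at a Cloud TPU. The grpc address below is a
# hypothetical placeholder; Google Cloud supplies the real one.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="grpc://10.0.0.2:8470")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Models built under the strategy's scope are compiled for, and replicated
# across, the TPU's cores; the training code stays otherwise unchanged.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```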
With TPUs, Google could deliver better models for machine learning, giving it a fighting chance against its biggest competitors in the cloud. And by making them available to the wider world via Google Cloud, it intends to lay claim to a larger share of that market. That, in effect, feeds a self-reinforcing cycle: the more data passes through its servers, the more Google can afford to invest in its AI-first mission, and the better the products it can deliver to its users, which, in turn, brings in more developers.
Pichai opened I/O 2017 by talking about the sheer scale of Google’s products, with each of its seven core services having a billion users of its own. In addition, every day, people watch a billion hours of video on YouTube, upload over a billion photos to Google Photos, and create a billion objects in Google Drive. And the biggest of them all, Android, now runs on 2 billion active devices.
Although its machine learning efforts have been immensely helped by this scale, Google wants to design better models going forward, since building the existing ones is not only time-consuming, Pichai noted, but also largely the preserve of the few engineers and scientists who hold a Ph.D. in the field. “What better way to get neural nets to design better neural nets?” he said.
To do precisely that, Google came up with AutoML, designed by the company’s artificial intelligence research group, Google Brain. Pichai said it reminds him of Inception, and that he tells his team: “We must go deeper.” Joke aside, he’s excited about automating one of the trickiest parts of deep learning, and allowing the algorithms to learn without human intervention. For now, it’s too expensive to be widely implemented.
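Google’s own approach is far more sophisticated (its research describes a controller network that learns to propose architectures), but the core loop of neural nets designing neural nets can be conveyed with a toy random search: sample an architecture, train it briefly, keep the best. Every number below, from the candidate count to the layer widths, is an illustrative assumption.

```python
# Toy architecture search on MNIST: random sampling stands in for AutoML's
# learned search. Real systems evaluate thousands of candidates, not five.
import random
import tensorflow as tf

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

def build(depth, width):
    layers = [tf.keras.layers.Flatten(input_shape=(28, 28))]
    layers += [tf.keras.layers.Dense(width, activation="relu") for _ in range(depth)]
    layers.append(tf.keras.layers.Dense(10, activation="softmax"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

best_acc, best_spec = 0.0, None
for _ in range(5):  # try 5 candidate architectures
    spec = (random.choice([1, 2, 3]), random.choice([32, 64, 128]))
    candidate = build(*spec)
    candidate.fit(x_train[:10000], y_train[:10000], epochs=1, verbose=0)
    _, acc = candidate.evaluate(x_val, y_val, verbose=0)
    if acc > best_acc:
        best_acc, best_spec = acc, spec

print(f"best (depth, width): {best_spec}, validation accuracy {best_acc:.3f}")
```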
But over the long run, it would help Google establish itself as the best place to work with machine learning, a trump card of sorts at a time when it faces stiff competition from Microsoft and Amazon in cloud computing. In turn, Pichai said, the results would be used to improve all its services, with a focus on its most important products: Search and Assistant. That will pave the way for a shift in how people interact with technology, along with a corresponding shift in how it interacts with us.
We discussed everything that was announced at Google I/O 2017 on Orbital, our weekly technology podcast. You can subscribe via Apple Podcasts or RSS, or just hit the play button below to listen to this episode.