Google kicked off its annual I/O developer conference on Tuesday with a typically astonishing keynote, and after the first day it was clear that every new feature would be boosted by artificial intelligence (AI). Google announced a "continued conversation" update that will make talking to Google Assistant feel more natural. With the update, you won't have to say "OK Google" or "Hey Google" before every request to summon the assistant. Furthermore, you will be able to ask multiple questions in a single request, and, guess what, John Legend will lend his voice to Google Assistant. The update will roll out later this year.
The most jaw-dropping moment of the entire conference came when Sundar Pichai, the CEO of Google, played back a recording of Google Assistant calling a hair salon and booking an appointment. Google published a blog post explaining the new feature, called Google Duplex, complete with soundbites of Duplex in action.

Beyond Google Assistant and Google Duplex, Google Maps will be upgraded with AI and augmented reality (AR). Just by pointing the camera at a lamppost or street, the AI will be able to help you navigate: it pairs what the camera sees with Street View data to provide an interactive experience. In addition, if you are on foot, Google Maps can provide an AR-powered turn-by-turn experience, with little arrows pointing the way.

Will the news feed be powered by AI too, you ask? Well, yes. Google announced that the revamped Google News will use AI to analyze the thousands of stories published online and organize articles, news, and video into storylines. Pichai said the feature will provide a variety of perspectives to deliver information and pull people out of their "filter bubble." It will also include a For You section, which surfaces the top five stories based on the topics a user follows most. As Pichai explained in his keynote speech, Google is "using AI to bring forward the best of journalism," aiming to give users quality sources they trust and deeper insight into the topics they are interested in.
Google also launched a new feature, Smart Compose, which can draft emails (almost) by itself. Smart Compose uses AI to help write an entire email from scratch: while a user is typing, it suggests ways to complete the sentence. Alongside Smart Compose, the company introduced a "too-awesome-to-be-true" feature, Google Lens. Just point the phone's camera at some text and Google Lens will recognize it. For instance, if you have a password written on a notepad, you can point the camera at it, grab the text, and paste it into the required field. Google Lens is just that simple! The feature can also identify a building or recognize a dog's breed from nothing more than a snapped picture.
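To get an intuition for what predictive completion looks like, here is a toy sketch in Python. To be clear, this is not Google's implementation (Smart Compose is driven by neural language models trained on vast email corpora); it is a minimal, made-up illustration that simply matches the user's partial sentence against a short list of stock email phrases.

```python
# Toy sketch of Smart Compose-style predictive completion.
# NOT Google's method: real Smart Compose uses neural language models.
# The phrase list below is invented purely for illustration.

COMMON_PHRASES = [
    "Thanks for your help!",
    "Thanks for the update.",
    "Let me know if you have any questions.",
    "Looking forward to hearing from you.",
]

def suggest_completion(typed: str):
    """Return the remainder of the first stock phrase that starts
    with the text typed so far, or None if nothing matches."""
    prefix = typed.lower()
    for phrase in COMMON_PHRASES:
        if phrase.lower().startswith(prefix) and len(phrase) > len(typed):
            return phrase[len(typed):]
    return None

print(suggest_completion("Thanks for your"))  # suggests " help!"
print(suggest_completion("Looking forward"))  # suggests " to hearing from you."
```

The real system, of course, ranks suggestions by probability given the whole email so far rather than doing literal prefix matching, but the interaction pattern, completing the sentence as you type, is the same.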
Every single feature announced at the conference uses AI to improve how the digital world interacts with the real one. But what is the cost of all this convenience? Most likely, an enormous amount of user data. It is unnerving even to think about the quantity of data Google already holds. To use all these features, users must tell Google what they are doing, where they want to go, and whom they talk to. By feeding Google more data, people are giving it more power and control over their daily choices. It is no surprise, then, that many users find these features unsettling, even scary, and are hoping they won't someday have to ask, "Hey Google, what is confidentiality?"