Google I/O conference: looking at the latest AR and AI advances

The conference featured some bold leaps forward in augmented reality and artificial intelligence

Sundar Pichai, chief executive of Google, speaks during the Google I/O Developers Conference in Mountain View, California, on May 8, 2018. David Paul Morris / Bloomberg

The annual Google I/O conference is a rather technical, geeky affair that’s mainly concerned with the chips and code powering our computing devices. Over the past decade, however, the event has seen some of Google’s most notable, and most notorious, technologies unveiled to the public.

Android, its mobile operating system, was showcased in 2008 and is now used by more than 2 billion people; Google Glass, the controversial augmented reality spectacles, appeared in 2012, while 2014 brought us the cheap and cheerful virtual reality viewer, Google Cardboard.

The 2018 conference, which ended last Thursday, featured some bold leaps forward in augmented reality (AR) and artificial intelligence (AI), painting a vivid picture of how technology will soon become entwined with our lives.

Maps becoming tools of exploration

It’s not always easy to relate the lines of a map to the world around us. In bygone days, it might have involved rotating a cumbersome piece of paper while trying to associate landmarks with printed symbols.

Smartphone technology has made it easier for us to work out our location and in which direction we’re facing, but reaching our destination still relies on a certain amount of map reading. That’s set to change with the incorporation of camera tools within Google Maps: set your destination, then use your phone’s camera to guide you around, with arrows on the screen pointing down streets and around corners. If you prefer, you can even summon a cute augmented-reality fox to appear on the screen and trot off in the right direction. Simply follow the fox.

Google Maps doesn’t just aim to guide us, however; it also wants to tell us where we might want to go. In response to user demand, Maps will soon be able to intelligently suggest locations we might be interested in visiting: restaurants, cafés and local attractions that are, quite literally, up our street.

Cameras bringing us information

Last year, Google launched Lens, a visual analysis app that tried, with a certain degree of success, to make sense of the world around it. One of its most startling tricks was the ability to scan the label on the back of a Wi-Fi router and connect the phone to that network automatically, with no need to laboriously type in the code. Since then it has become even smarter: it can now recognise, say, breeds of dog or well-known landmarks and offer links to more information, and at I/O it demonstrated a new-found ability to display that information in real time as the camera is moved around. It has also got better at recognising text (such as restaurant menus, signs and books) and styles of objects (such as shoes, furniture and ornaments). It sees things, it recognises them and it tells you stuff about them. No Googling necessary.

Emails writing themselves

We’ve long been familiar with the way autocorrect can figure out the words we’re trying to type on a phone, although many humorous books have been compiled of the ways in which that feature can go wrong. As the technology improves, however, the errors will become fewer and AI’s ability to second-guess us will become ever more uncanny.

For some time, Google’s Gmail service has offered short, three- or four-word suggested replies to emails for those of us who are too busy to respond in full, but Smart Compose – which is due to be incorporated into Gmail over the next few weeks – makes intelligent guesses at whole phrases and sentences.

Knowing, say, that an email about dinner plans will probably end with a suggestion of a time and place, Smart Compose can swing into action and save us the bother of hammering out that suggestion letter by letter.

Computers making voice calls on our behalf

Speech synthesis has come a long way since Stephen Hawking began speaking using the robotic tones of his CallText 5010. Audio technology is now sufficiently advanced that convincing, human-like voices can be made to say anything; this, coupled with advanced AI, is guiding us into a future where digital assistants don’t just do our bidding, they can contact and speak to other people on our behalf. This burgeoning technology provided the most eyebrow-raising moment of the I/O conference, when a phone call between an automated Google Assistant and a hair salon was replayed to the audience.

The person taking the call didn’t realise that the appointment was being booked by a computer, partly because Google Assistant added casual and convincing verbal tics (“mm-hmm”). This branch of AI has been named Duplex by Google; it’s not currently ready for rollout, but widespread testing is due to begin this summer.

Technology to help us stop using technology

All these announcements involve humans handing over responsibility to computers (and, more specifically, to Google itself). Both Lens and Maps require the screen of an internet-connected phone to become the window on our world, while Smart Compose and Duplex become extensions of ourselves; they handle the drudgery while we get on with something more meaningful. This is all down to the rapidly evolving power of AI; indeed, Google has renamed its research division "Google AI" in recognition of the critical role the technology plays in the company’s business.

By way of contrast, Google has announced a "Digital Wellbeing" project, which aims to promote "healthy habits" around technology and help us to disconnect from our devices. The tools on offer include reminders to take breaks, statistics showing how much time we spend in various apps, and a "Wind Down" feature for Android that turns the screen grey and switches off notifications.

During his keynote speech, Google CEO Sundar Pichai acknowledged the responsibility his company currently faces, given the "very real and important questions being raised about the impact of these [technological] advances". But while, on the one hand, Google is reminding us to switch off and experience the "real world", on the other it is weaving its technology inextricably into that world. Over the next few years, Google and the users of its services will actively wrestle with this irresolvable conundrum.