Google I/O 2016 Wrap Up

Google’s 9th I/O developer conference has officially wrapped up, and we here at Vokal have compiled our impressions of the various announcements and sessions. Be sure to check back with Vokal Labs for our Apple WWDC coverage.

New Products

CEO Sundar Pichai kicked off this year’s keynote by announcing Google Assistant – your own personal Google that can be invoked across devices and contexts. Steady improvements to its Knowledge Graph (which now boasts over a billion entities) and an emphasis on contextual awareness have made Google’s search product increasingly assistive ever since Google Now rolled out with Android 4.1 Jelly Bean. With Assistant, Google aims to bolster the service with its recent leaps in machine learning, voice recognition, and natural language processing to augment its predictive capabilities and offer a more conversational human-computer interaction.

VP of Product Mario Queiroz, of Chromecast fame, took the stage to announce Google Home – a compact speaker with always-listening, far-field microphones that responds to verbal commands and queries. Mr. Queiroz acknowledged Google Home’s debt to Amazon’s Echo, but was quick to point out the key differentiators – multi-room support and the aforementioned Google Assistant. The device, much like the OnHub router, sports an LED light display to indicate status and features interchangeable shells to match your home decor. Google Home will work with Google Cast compatible devices like Chromecast and Android TV, so you can route content to multiple speakers and screens in your home. As one might expect, Google Home will also be able to adjust connected home devices such as Google’s own Nest thermostat, though details were scant as to which products will work out of the box when it launches ‘later this year’. No API is available yet, and Google will have to play catch-up to achieve parity with the Echo’s integrations.

In addition to the new hardware, Google announced three new software offerings – Allo, Duo, and Spaces. Allo allows for “expressive” messaging with custom stickers, a photo markup feature called Ink, and WhisperShout – a long press on the send key to enlarge or shrink a message’s size. More usefully, Allo will suggest automated replies called suggestion chips that supposedly improve over time based on your typical responses. Automated responses are even generated for photos thanks to DeepMind’s computer vision technology. Allo also employs Google Assistant in the form of a chatbot: you can ask questions or perform tasks with an @google mention directly within a chat. An Incognito mode adds end-to-end encryption and expiring chats.

Duo is a dead-simple interface for cross-platform (mobile-only), one-to-one video calling. Duo is lean on functionality, relying on WebRTC, an open framework for real-time communication, to ensure exceptional connectivity. The app dynamically adjusts video and audio quality to whatever connection you’re on and handles switching from Wi-Fi to cellular without dropping the call. “Knock knock” allows a call recipient to preview the caller’s video before accepting.

Allo’s feature set looks genuinely useful, and Duo strikes the right balance of laser-focused functionality and clever engineering, but given the crowded messaging landscape it’s a hard sell to add two more apps to the fray. There was no mention of Hangouts in the keynote, though Google has confirmed it will continue to support the product.

Spaces is a cross-platform group sharing app, similar to Google+ “Communities”, that lets invited users create topics to store and comment on various links and media. Spaces was made available prior to I/O, and a Space was created for each session to provide associated resources and facilitate follow-up discussion. Google’s unsung Physical Web was employed at the conference to invite session attendees to participate. We here at Vokal are always excited to see compelling uses of otherwise under-utilized beacon technology.

The State of Android

This year’s OS announcement broke with tradition a bit, as Developer Preview 1 of Android N was made available as early as March. The long-awaited split-screen multitasking feature is a boon for Android tablet users, and additional improvements to multitasking were announced, including a clear-all button and a quick double tap on the Recents button to switch to the last-used app. Emoji have been overhauled to include more human-looking figures and skin tones, finally achieving parity with iOS. Under-the-hood changes bring battery optimizations and seamless updates (no more ‘Android is Updating’ dialog, y’all!). For game makers, the Vulkan API gives developers low-level access to a phone’s GPU for performant 3D graphics. Google opted to crowdsource the Android N name this year (just vote for NERDS!), so we’ll have to wait for the proper release, slated for “later this summer.”
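For developers, multi-window support is mostly a matter of reacting gracefully when it kicks in. Here is a minimal sketch, in Java, of an activity responding to Android N’s split-screen mode; the class and layout-switching helper are hypothetical examples, but isInMultiWindowMode() and onMultiWindowModeChanged() are the Activity calls added in the N preview.

```java
import android.app.Activity;

// Hypothetical activity reacting to Android N's split-screen mode.
public class MultiWindowAwareActivity extends Activity {

    @Override
    protected void onResume() {
        super.onResume();
        // isInMultiWindowMode() was added to Activity in the N preview (API 24).
        applyLayout(isInMultiWindowMode());
    }

    @Override
    public void onMultiWindowModeChanged(boolean isInMultiWindowMode) {
        super.onMultiWindowModeChanged(isInMultiWindowMode);
        // Called whenever the user drags the activity in or out of split screen.
        applyLayout(isInMultiWindowMode);
    }

    private void applyLayout(boolean compact) {
        // Hypothetical helper: swap to a denser layout when sharing the screen.
    }
}
```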

Arguably the biggest announcement to accompany Android N is Daydream – Google’s mobile virtual reality platform. Daydream is marketed as a mid-tier VR experience, the closest equivalent being Samsung’s Gear VR. Reference designs were teased for a headset and a controller with a clickable trackpad and dedicated buttons. Daydream-ready phones will be available this fall from HTC, ZTE, Huawei, Asus, Xiaomi, Alcatel, LG, and Samsung. VR apps were teased from YouTube, Street View, the Play Store, Play Movies, Google Photos, the New York Times, HBO, Netflix, Ubisoft, and EA. Google is taking a hands-on approach to curating the VR experience in its infancy to guard against some of the pitfalls of nausea and discomfort inherent to the medium.

Nearly two years since its announcement, Android Wear will receive a 2.0 update focused on “Information, People and Health”. The release will allow data displays, or “complications”, on any watch face, as the Apple Watch does today. Messaging tools like automated replies, a handwriting recognition mode, and a tiny software keyboard are baked into 2.0. The platform will also allow for standalone apps that are less reliant on a paired smartphone and its connection. Designers, take note – new guidelines have been provided specifically for Android Wear experiences. Automatic exercise recognition should greatly improve Google Fit’s ability to capture fitness data without the friction of manual activity logging and bring it to parity with most modern fitness wearables. Preview images are available to download for the Huawei Watch and the 2nd-gen LG Watch Urbane.
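For the curious, here is a rough sketch of what a complication data provider looks like against the Wear 2.0 preview SDK. The service name and step-count value are hypothetical examples, and the preview API may still change before release.

```java
import android.support.wearable.complications.ComplicationData;
import android.support.wearable.complications.ComplicationManager;
import android.support.wearable.complications.ComplicationProviderService;
import android.support.wearable.complications.ComplicationText;

// Hypothetical provider pushing a short text value to any watch face
// that hosts it as a complication (Wear 2.0 preview API).
public class StepCountProviderService extends ComplicationProviderService {

    @Override
    public void onComplicationUpdate(int complicationId, int dataType,
                                     ComplicationManager manager) {
        if (dataType == ComplicationData.TYPE_SHORT_TEXT) {
            ComplicationData data =
                    new ComplicationData.Builder(ComplicationData.TYPE_SHORT_TEXT)
                            .setShortText(ComplicationText.plainText("5,280")) // placeholder value
                            .build();
            manager.updateComplicationData(complicationId, data);
        } else {
            // Nothing to show for types this provider doesn't support.
            manager.noUpdateRequired(complicationId);
        }
    }
}
```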

Android Auto is shrewdly offering an alternative to today’s integrated dashboards. Whereas the infotainment platform previously required a compatible vehicle and a wired USB connection, a standalone app will now be available, with a vehicle-optimized interface operated primarily through voice commands. For compatible cars, Google has ambitions to bring deeper integration with vehicle-specific functionality – service alerts, vehicle reports, roadside assistance, and valet alerts – through manufacturing partners like Hyundai.

More Android TV hardware is on the way from Sony, Sharp, and RCA, as well as a 4K ‘Mi Box’ from Xiaomi. New apps and channels are coming to the Play Store from CNN, Comedy Central, MTV, Freeform, Nickelodeon, STARZ, ABC, and Disney, and ESPN has teased live sports streams through Android TV. Android TV will also add picture-in-picture and screen recording APIs for live TV, coming “later this year”.
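As a sketch of how the new picture-in-picture support is meant to be used on Android TV, a playback activity (the name below is hypothetical, and the manifest must also opt in with android:supportsPictureInPicture="true") can hand its video off to a small overlay window instead of stopping it:

```java
import android.app.Activity;

// Hypothetical playback activity for a live TV app on Android TV.
public class LivePlaybackActivity extends Activity {

    @Override
    protected void onUserLeaveHint() {
        super.onUserLeaveHint();
        // Instead of stopping the stream when the user heads Home,
        // continue it in a small overlay window (Android N, API 24).
        enterPictureInPictureMode();
    }
}
```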

Play Store on Chrome OS & Instant Apps

Following years of speculation about the convergence of Chrome OS and Android (and the efforts of third parties like Remix OS), Google announced on Day 2 that the Play Store is coming to Chromebooks. The rollout will begin in mid-June on the Acer Chromebook R11, the Asus Chromebook Flip, and the Google Chromebook Pixel (2015), with more to follow. Chromebooks have had something of a landmark year, outpacing Mac shipments in the U.S. for the first time in Q1 – chiefly due to their dominance in the classroom. The addition of the massive Android app ecosystem will no doubt make the Chromebook an increasingly attractive option and could be a game changer in the PC market.

One of the most interesting developments of the conference was the left-field announcement of Instant Apps. With what Google claims is “less than a day of work”, developers can modularize their apps so that pieces can be downloaded on demand over the open web to accomplish tasks better suited to a native app (such as an ecommerce checkout with Android Pay) without requiring a full installation. Amazingly, Instant Apps will work on Android devices running 4.1 Jelly Bean and up.

Developer Tools

Firebase – Google’s backend service for managing app data – received its most significant update since Google acquired it in October 2014. The announcement was timely, given Facebook’s recent decision to sunset its own popular Parse platform, which offered similar services (analytics, deep linking, file storage, and remote app config). The console and support documentation have been overhauled, and new tools, such as a device test lab to accelerate test development and a crash reporting service, have been added. Services such as AdWords and AdMob are integrated into the new console, and many previously paid features, such as the cloud messaging service, have been made free.
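To give a flavor of the unified SDK, here is a minimal sketch of writing to the Realtime Database and logging an Analytics event from Android; the database path and event name are made-up examples.

```java
import android.content.Context;
import android.os.Bundle;

import com.google.firebase.analytics.FirebaseAnalytics;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

// Minimal sketch of the unified Firebase SDK announced at I/O 2016.
public class FirebaseSketch {

    public static void recordHighScore(Context context, long score) {
        // Realtime Database: write a value under a (made-up) path; any
        // attached listeners see the change pushed to them in real time.
        DatabaseReference ref = FirebaseDatabase.getInstance()
                .getReference("scores/player_1");
        ref.setValue(score);

        // Firebase Analytics: log a custom event alongside the write.
        Bundle params = new Bundle();
        params.putLong("score", score);
        FirebaseAnalytics.getInstance(context).logEvent("high_score", params);
    }
}
```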

Android Studio 2.2 picks up some performance improvements, claiming 10x faster build times with Instant Run. Jason Titus, who leads Google’s developer product group, boldly claimed that the new emulator is “actually faster than the physical device in your pocket right now.” AS 2.2 includes a new layout editor purpose-built for ConstraintLayout, which intelligently infers constraints to define a view’s position, and lets developers preview their UI across different screen sizes and orientations. The new version also includes improved testing tools to accelerate test development.
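The constraints the new editor infers visually can also be applied from code. Here is a rough sketch using ConstraintSet; the layout and button are assumed to already exist in the activity’s view hierarchy.

```java
import android.support.constraint.ConstraintLayout;
import android.support.constraint.ConstraintSet;
import android.view.View;

// Rough sketch: pin a view to its parent's top-start corner in code,
// mirroring what the new layout editor would infer visually.
public class ConstraintSketch {

    public static void pinToTopStart(ConstraintLayout layout, View button) {
        ConstraintSet set = new ConstraintSet();
        set.clone(layout); // start from the constraints already defined in XML

        // Connect the button's top/start edges to the parent, with margins in pixels.
        set.connect(button.getId(), ConstraintSet.TOP,
                ConstraintSet.PARENT_ID, ConstraintSet.TOP, 32);
        set.connect(button.getId(), ConstraintSet.START,
                ConstraintSet.PARENT_ID, ConstraintSet.START, 32);

        set.applyTo(layout);
    }
}
```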

In early May, Google open-sourced SyntaxNet, its natural language parsing framework, along with a pre-trained English model colloquially known as Parsey McParseface. Continuing the momentum, the team introduced the Awareness API, which exposes seven different types of context – time, location, place, activity, beacons, headphones, & weather – to developers.
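Here is a rough sketch of reading one of those signals – the user’s current activity – with the Snapshot API, based on the GoogleApiClient-style surface in the 2016 Play services release; a production app would wait for the client connection callback before querying.

```java
import android.content.Context;
import android.util.Log;

import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.snapshot.DetectedActivityResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

// Rough sketch of the Awareness Snapshot API: "what is the user doing right now?"
public class AwarenessSketch {

    public static void logCurrentActivity(Context context) {
        GoogleApiClient client = new GoogleApiClient.Builder(context)
                .addApi(Awareness.API)
                .build();
        client.connect(); // in a real app, wait for the connection callback

        Awareness.SnapshotApi.getDetectedActivity(client)
                .setResultCallback(new ResultCallback<DetectedActivityResult>() {
                    @Override
                    public void onResult(DetectedActivityResult result) {
                        ActivityRecognitionResult ar = result.getActivityRecognitionResult();
                        DetectedActivity probable = ar.getMostProbableActivity();
                        Log.d("Awareness", "Most probable activity: " + probable);
                    }
                });
    }
}
```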

Advanced Technology and Projects

The ATAP team shared their progress on some of Google’s wildest experiments. With former ATAP head Regina Dugan transitioning to Facebook, the group has moved under Google’s new hardware division, headed by Rick Osterloh, who formerly led Motorola. This year saw the maturation of Soli – a radar capable of gesture recognition sensitive enough to interpret fine hand movements. The Soli team demoed a modified LG Watch Urbane with an integrated Soli radar. Natural gestures have huge potential to augment the mobile experience, providing previously unavailable shortcuts for common tasks and allowing interaction with on-screen elements without having to obscure them with your fingers.

Project Jacquard, which weaves “conductive yarn” into textiles to add touch and gesture interactivity, has partnered with Levi’s to produce a Jacquard jacket coming this fall. Levi’s touts that the jacket is designed specifically for urban bike commuters, enabling simple interactions that would otherwise be cumbersome with a smartphone while biking. The concept video teases use cases such as dismissing phone calls and controlling audio playback (all of which require some form of headset as well).

Project Ara continues to court developers to make hardware for its modular smartphone. Somewhat disappointingly, the concept has moved away from its “infinitely upgradable” origins by adopting an integrated processor frame. This moves it away from the ethically conscious Fairphone and closer to the “Friends” modules of the LG G5. Some concepts, like an e-ink display and hot-swappable batteries, remain compelling, but not necessarily worth the extra bulk inherent to the form factor. Developer editions ship in fall 2016, with a consumer release targeted for 2017.

Given Google’s VR ambitions, we here at Vokal were excited to hear more about Project Tango, Google’s spatially aware smartphone sensor platform. Computer vision and augmented reality are a natural fit, and we wouldn’t be surprised if the projects converge in really compelling ways should Daydream gain traction. Though things were quiet on the Tango front this year, Lenovo is notably shipping a Tango-enabled device for consumer release this summer.

Wrap Up

As one would expect from the industry leader, Google is taking big swings in the hotly contested new arenas of VR and AI assistants. Though “later this fall” was the resounding theme for new product announcements, Google has put its stakes in the ground – we won’t have to wait long to see how competitors respond. Until then, stay tuned to Vokal Labs!