Google I/O 2017 is Google’s annual developer festival, which took place this May. This year was packed with exciting developments, so we’ve pulled together the announcements you need to know about.
Google releases the Android O beta
There’s a new and improved operating system on the block. Google has released the second developer preview of Android O, now officially in beta, and it boasts a more fluid phone experience.
Android O offers some great new features to make the mobile experience smoother and more intuitive. Firstly, it supports a useful picture-in-picture mode, which lets users multi-task by keeping one app visible while interacting with another. For example, a playing video can be shrunk into a small window that stays on screen while the user carries out other tasks, such as performing a search or jotting down notes. This means users can move fluidly between apps.
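For developers, opting a video screen into this mode is a small amount of code. A minimal sketch, assuming a hypothetical `VideoActivity` in an app targeting Android O (API 26), might look like this:

```kotlin
import android.app.Activity
import android.app.PictureInPictureParams
import android.os.Build
import android.util.Rational

// Hypothetical video playback activity that shrinks into a
// picture-in-picture window when the user navigates away.
class VideoActivity : Activity() {

    override fun onUserLeaveHint() {
        super.onUserLeaveHint()
        // Picture-in-picture params are only available on Android O and up.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            val params = PictureInPictureParams.Builder()
                .setAspectRatio(Rational(16, 9)) // keep the video's shape
                .build()
            enterPictureInPictureMode(params)
        }
    }
}
```

The activity would also need `android:supportsPictureInPicture="true"` declared in the manifest for the mode to be available.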
To add to this, the copy and paste function is now ‘context aware’: the phone recognises what kind of information you are selecting. For example, selecting a number or word within an address will intuitively select the whole address. On top of this, Android O will surface the apps appropriate to your selection, such as a suggested Google Maps view for a selected address.
Android O also introduces notification badges, called Notification Dots. With a long press on an app icon, notifications can be previewed without opening the app. Users will also be able to ‘snooze’ notifications from selected apps for a chosen duration. Normally around 40% of Android users opt out of push notifications, so this added control could make users more likely to stay opted in, since notifications arrive on their own terms.
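On the developer side, this per-app control is built on Android O’s notification channels: dots are shown per channel, and users can silence or adjust each channel in settings. A minimal sketch, with an illustrative channel id and name, could look like this:

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

// Sketch: registering a notification channel on Android O (API 26+).
// The "updates" id and "App updates" name are hypothetical examples.
fun createUpdatesChannel(context: Context) {
    val channel = NotificationChannel(
        "updates",                         // channel id (illustrative)
        "App updates",                     // user-visible channel name
        NotificationManager.IMPORTANCE_DEFAULT
    ).apply {
        setShowBadge(true)                 // opt this channel in to Notification Dots
    }
    val manager = context.getSystemService(NotificationManager::class.java)
    manager.createNotificationChannel(channel)
}
```

Snoozing itself is a user action in the notification shade rather than an API call, but grouping notifications into well-named channels is what gives users that granular control.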
Whilst Android O is still in beta, we are wondering what Google will name the new OS when it officially launches… Oreo? Oatmeal?
Android Instant Apps for everyone!
The Android Instant Apps SDK is now available to all developers. Android Instant Apps are native Android apps that appear in Google search results; they can be launched straight from the results page, running a view of the app without installation. With the launch of Android Studio 3.0, Google now also has the tooling in place to help developers modularise their applications. Google state that development for Android Instant Apps takes approximately four to five weeks.
Android Instant Apps have been around for a while, but only for selected launch partners. Those partners, including the New York Times and Vimeo, have together released 50 instant apps, and they report increases in purchases, lead generation and videos watched.
So how can Android Instant Apps help with discoverability and retention for app owners? Google state that Android Instant Apps offer an optimised user experience, one that continually drives users back into the native app and increases conversion rates. They also change the game for user referrals: if a user shares an Android Instant App link with a friend, that friend is no longer pushed to the Play Store to download the app. Instead, they get the very same app experience and functionality directly. This first-hand look, along with a prompt to install the app, could increase the likelihood of conversion.
However, developers need to ensure they differentiate between installed-app and instant-app data. You can monitor events such as visits, physical purchases, time spent and more in instant apps, but reconciling these metrics with your installed-app analytics will be a whole new challenge for app owners.
Google Assistant app for iOS rivals Siri
Google introduced the new Google Assistant app, an intelligent personal assistant now available on iOS in the US. That’s right: you can finally put Google Assistant head-to-head with Siri on one device.
Google Assistant can be easily accessed by adding a widget to your iOS home screen, though the integration is less graceful than on an Android phone. Google Assistant is also unable to pull information from other iOS apps when queried.
However, Google Assistant still offers great functionality. If asked, it will remember any piece of information you wish to refer to later. For example, if you need to remember your parking spot, Google Assistant will retain it until asked to ‘forget.’ Siri, on the other hand, will simply suggest the Reminders app. Google Assistant also pulls information from Google apps into search results with ease; this works well for translations from Google Translate or location and opening-time details from Google Maps. Siri tends to suggest Wikipedia results, whilst Google Assistant offers arguably more concise answers in a conversational manner, and it remembers and refers back to previous queries more fluidly than Siri.
Using the Actions SDK, developers can engage users wherever the Google Assistant is available, whether on Android, iPhone or the Google Home device.
Google Home updates
Home devices are currently the talk of the town, and Google have honed in on the capabilities of Google Home with a whole new set of features to stay ahead of the game.
An interesting update is the ability to get visual responses through Chromecast and Android TV. Ask Google Home to show your calendar, and it will recognise the speaker’s voice and display that person’s calendar on the TV screen. Users will also be able to make ‘hands-free’ calls and play music from iOS and Android devices.
With Google Home a direct competitor to Amazon’s offering, and with rumours that Apple is working on a Siri-powered home device too, 2017 may be the year the battle of the home assistants comes to a head!
Google Lens
Potentially the most exciting feature that Google announced at Google I/O is the Google Lens technology.
Using your camera, Google apply their Knowledge Graph to identify what is in your viewfinder and let you take action on it. For example, you can scan a street of restaurants with your phone, and Google Lens will surface Google Reviews, opening times and other business information for each restaurant building. The aim is to provide correct information in a meaningful way, such as recognising a painting or building you have photographed. If you want to keep the information from an advert or flyer, Google Lens will capture it and give you the option to save a stated address, call the number shown on the image, or even buy concert tickets where applicable.
Google have also tackled one of the most aggravating tasks: connecting to WiFi in an unfamiliar place. Take a picture of the network name and password sticker on a router, and Google Lens will save you the hassle by connecting you automatically.
All these use cases show how Google can practically apply their capabilities to apps that will become essential, utilitarian tools for mobile users. It is also a step towards Google’s AI-first ambitions, incorporating augmented reality to bring Google’s AI into the real world. We’re excited to see where this will go.
Want to find out more about optimising your app and keeping up with the latest OS capabilities? Make sure to subscribe to our Mastering Mobile Marketing video series. You can also get in touch by visiting the Contact Us page. Follow us on LinkedIn, chat with us on Twitter @yodelmobile, and join our #mobilemarketingUK LinkedIn group.