I don’t leave home without my AirPods. I don’t take calls without them, and I keep them in even when I’m not on calls. It’s mostly habit, but it feels so natural that I don’t feel the need to take them out. AirPods are one of those devices that, if I lost them, I would replace without hesitation.
So, when I see headlines like these, I’m not surprised.
(Headline screenshots via Cult of Mac, Above Avalon, OneZero, and The Motley Fool.)
Now, let’s be clear: this is not a hardware play by Apple. Sure, hardware sales make Apple an even more powerful force and add revenue to their bottom line.
The real takeaway from all of this is that you now have a device that sits in your customers’ ears for a good portion of the day. This gives brands the opportunity to re-engage customers in a different way, whether through ads, podcasts, or other audio content.
Here’s what Marc Andreessen has to say about AirPods and voice:
“The really big one right now is audio. Audio is on the rise just generally and particularly with Apple and the AirPods, which has been an absolute home run [for Apple]. It’s one of the most deceptive things because it’s just like this little product, and how important could it be? And I think it’s tremendously important, because it’s basically a voice in your ear any time you want.”
Not only is it a voice in your ear anytime you want, but it’s a device that is ready 24/7 for any command you have for it. This opens the door for many opportunities.
Natural Language from Software is Improving
If you didn’t know, Siri isn’t completely computer generated. In fact, voice actors record the audio clips that are stitched together into the words you hear from Siri.
However, with Apple’s latest iOS 13 release, Siri will be 100% computer generated via Apple’s Neural Text to Speech technology. The result is a more natural-sounding voice, which is kind of ironic.
What this means for consumers and businesses is that it will become much harder to know whether you’re talking to a robot or a real person on the phone. The gap between humans and voice bots like Siri, Alexa, and Google Assistant is shrinking. The faster humans get answers, the more we will use them. It’s a natural progression.
So, for businesses that run call centers, this could mean a big shift in how they interact with customers. I just don’t see this as a fad or a trend. Natural-sounding bots that can answer almost any question are the future.
Millions of Smart Speakers In Homes
Consumers are literally putting always-on, internet-connected listening devices in their homes. Amazon alone said it sold tens of millions of Echo devices in 2018. Canalys forecasts the worldwide smart speaker installed base will grow 82.4% from 114.0 million units in 2018 to 207.9 million in 2019.
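As a quick sanity check on the Canalys forecast (the unit figures come straight from the report cited above; the calculation is just the implied year-over-year growth):

```python
# Canalys smart speaker installed-base forecast, in millions of units
installed_2018 = 114.0
installed_2019 = 207.9

# Implied year-over-year growth rate
growth_pct = (installed_2019 - installed_2018) / installed_2018 * 100
print(f"{growth_pct:.1f}%")  # ~82.4%, matching the figure Canalys cites
```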
208 million devices inside homes. That’s incredible.
I have a Google Home, and I don’t think it’s perfect by any means, but my three-year-old definitely uses it all the time to say things like:
Hey Google, play Ryan’s toy review on YouTube.
OK Google, is it going to rain today?
Simplicity is the real innovation here. If my three-year-old can talk to Google, actively hold conversations with it, and expect the right answer back, that’s a big sign the tech will be an integral part of our lives going forward. More consumers will use it for eCommerce, smart-home activities, and the ability to get almost any question answered immediately.
We must experiment with new use cases for these devices. If a three-year-old can use one and never has to pick up a remote again, maybe it’s something worth exploring further.
The Smart Speakers Are Getting Much Smarter, Too
Let’s be real. Siri doesn’t answer many questions right. However, new data shows Siri is getting smarter.
Munster and Thompson found that Siri correctly answered 74.6 percent of the 800 questions asked during their tests. That was better than Amazon Alexa (72.5 percent) and Microsoft Cortana (63.4 percent), but still behind Google Assistant (87.9 percent). [src]
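To put those percentages in concrete terms, here is a rough back-of-the-envelope conversion of each assistant’s accuracy into the approximate number of the 800 test questions it answered correctly (the percentages are from the Munster and Thompson test above; the counts are simply derived from them):

```python
# Accuracy figures from the Munster/Thompson 800-question test
accuracy = {
    "Google Assistant": 87.9,
    "Siri": 74.6,
    "Alexa": 72.5,
    "Cortana": 63.4,
}

# Approximate number of questions answered correctly out of 800
for assistant, pct in accuracy.items():
    print(f"{assistant}: ~{round(800 * pct / 100)} of 800")
```

Even the last-place assistant gets roughly 500 of 800 questions right, which is why the gap to human-level conversation feels smaller every year.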
Not looking to be outpaced, Amazon just announced a new feature that will improve adoption and usage: the ability to handle multiple requests in one conversation. In a live demo, an Amazon rep showed how Alexa can book movie tickets, make a dinner reservation, and order you an Uber, all in one conversation. The best part? You won’t have to say Alexa’s name more than once during the conversation.
Amazon is also pushing to make your average conversation with Alexa longer, a major move for conversational AI. Through several natural-language conversation competitions, winning teams have trained Alexa to hold an average conversation of 10+ minutes with a human. These advances will slowly but surely make their way into updates on consumer devices when ready.
And when it comes to Google, you can always count on them to impress even with the smallest of feature upgrades. At Google I/O 2019, they announced that you can turn off your alarm by just saying “stop” instead of “OK Google, stop.” It might seem like a small change, but it’s an indication that these companies are pushing for as human-like a conversation as possible. Are you talking to a human or a bot? Pretty soon, you won’t know the difference, and honestly, you won’t care.
The brains behind these devices, millions of computers in the cloud, are getting smarter. All of the big tech companies continue to make large investments in machine learning, which means the share of correct answers is only going to go up.
The Numbers Speak Volumes
The data confirms that voice search is growing and will be integrated into our daily routines. In 2016, Google said that 20% of searches done in its app were voice searches. Andrew Ng predicted that 50% of all searches would be voice by 2020. In the 2019 Smart Speaker Consumer Adoption Report, the #1 use case for smart speakers was asking questions, followed by listening to music and checking the weather.
40% of adults now use voice search once per day, according to research by Location World, and 72% of people who own a voice-activated speaker say their devices are often used as part of their daily routine, per Think with Google.
The combination of AirPods in your ear at all times, smart speakers listening at all times, machine learning and AI improving every day, and a voice that responds sounding as natural as your next-door neighbor all points to a bright future for voice.
Voice is not a trend or a fad. It is the future, and we must treat it as such.