The Next UX Wave: Experiential Search, Conversational UI & Augmented Reality
What are the next big trends in UX? At our recent Expero Summit, a panel of our experts discussed advances that promise to transform how users interact with technology. As augmented reality and other technologies take substantive form, the question becomes more and more about what the user needs from these amazing technologies and less about how cool the technology actually is. It’s a given that the technology is only going to get cooler. What’s not as obvious is whether the user is ready for it.
Think Google Glass. That was (is) a really cool technology. Innovative, cutting-edge, even inspiring. But the public wasn’t ready for it, so it failed—at least for the time being.
So what are the emerging user experiences that our users are actually ready for, even excited about? Well, it has to do with a lot of the technologies we’ve been hearing about the past few years, only now we’re starting to see them actually take shape and be used in real, practical ways in our lives. And as the technologies become normalized, so too do the experiences that emerge from them. The Internet of Things, for example, is so ubiquitous that we don’t even think about it anymore. It’s an expectation that our devices will be connected and smart, not a perk or a cool new feature.
Expero specializes in creating custom software for complex domains that often harness huge data sets. So naturally we want to know how our projects can go the distance with these emerging experiences.
Here are a few experiences on the horizon, the technologies that have brought them front and center, and how Expero sees the future of custom software in complex domains.
Search
Search is no longer a distinct interaction with a search-engine interface. Think of it now as a search experience, not a search engine.
Users continue to expect more from search: more relevance, more personalization, more context. (Note: They also expect more security, which is another issue entirely.)
“The future of search is to try to build the ultimate personal assistant.”
—Behshad Behzadi, Director of Search Innovation @ Google
Here are four interactions that are defining the search experience of the future:
- Voice Search: Voice search, particularly on mobile devices and throughout the Internet of Things, is quickly becoming the norm for how users expect to communicate with search engines. In our fast-paced, always-connected world, typing can be cumbersome or simply impractical. And as the search engine learns more about the user specifically and about human behavior generally, it can understand the intent behind the search, offering more contextual, relevant and personalized results.
- Image Search: Image search enables a user to take a picture and find a matching image. Facial-recognition analysis is useful to the CIA, but the lay consumer will soon be able to take a picture of a pair of shoes or a cool watch, find that product on the Internet, and then buy it!
- Anticipatory Search: With continued advances in machine learning, predictive analytics and other next-generation technologies, search engines will begin to anticipate what users will search for and offer “push” searches. Drawing on a user’s social media activity, calendar, search history and other signals, the machine learns to guess what a user might want before the user even knows it. It’s a sort of advanced recommendation engine.
- Experiential Search: Traditional searches have been informational (to know something), aspirational (to understand something) or navigational (to find something). Soon there will be experiential searches: searches to feel something. As VR/AR technology advances and headsets become as common as smartphones, users will begin to search for experiences, like trekking Machu Picchu or riding a bike.
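The anticipatory search idea above can be sketched in a few lines. This is a toy illustration, not a production approach: it scores candidate “push” queries from two invented user signals (search-history frequency and upcoming calendar events), where a real system would weight many more signals with a learned model. All names and sample data here are hypothetical.

```python
from collections import Counter

def anticipate_queries(search_history, calendar_events, top_n=3):
    """Score candidate 'push' queries from two hypothetical signals:
    past searches (habit) and upcoming calendar events (plans)."""
    scores = Counter()
    for query in search_history:
        scores[query] += 1                  # habitual interest
    for event in calendar_events:
        for word in event.lower().split():
            scores[f"{word} near me"] += 2  # upcoming plans outweigh habit
    return [query for query, _ in scores.most_common(top_n)]

# Invented sample data for a single user
history = ["weather austin", "weather austin", "bike trails"]
events = ["Dentist appointment", "Flight to Denver"]
print(anticipate_queries(history, events))
```

The point of the sketch is the shape of the problem: the engine ranks what it might show you before you ask, rather than waiting for a query.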
Conversational UI
Natural language processing is advancing to the point that conversing with technology is attainable. Machine learning advancements mean that systems can react and personalize their interactions with users. Employing these two technologies in tandem means users can expect an increasingly natural, conversation-like experience when interacting with machines, using full sentences themselves and expecting full sentences from their conversant computer.
As our systems learn more about us, we will begin to interact with these systems differently, though exactly how remains to be seen. As for now, the conversational UI is quickly becoming its own form factor, similar to when touch screens emerged.
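At its simplest, the loop behind a conversational UI is: map an utterance to an intent, then respond in a full sentence. Real systems use trained NLP models for the mapping; the keyword matcher below is a deliberately minimal sketch, and the intents and phrasings are invented for illustration.

```python
# Hypothetical intents for a retail chatbot; a real system would use a
# trained classifier rather than keyword matching.
INTENTS = {
    "track_order": {"track", "where is my order", "shipping status"},
    "store_hours": {"hours", "open", "close"},
}

def match_intent(utterance):
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "fallback"

def respond(utterance):
    """Answer in a full sentence, as users increasingly expect."""
    replies = {
        "track_order": "Sure, let me look up your shipping status.",
        "store_hours": "We're open from 9 a.m. to 6 p.m., Monday through Saturday.",
        "fallback": "Sorry, I didn't catch that. Could you rephrase?",
    }
    return replies[match_intent(utterance)]

print(respond("When do you open?"))
print(respond("Can you track my package?"))
```

Even this toy version shows why conversational UI is its own form factor: the “interface” is a dialogue policy, not a screen layout.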
For more on conversational UI, see Steve Purves and Chris LaCava’s 2016 post Amazon Just Made AI Easier.
Virtual Reality/Augmented Reality
VR/AR has been making a lot of noise over the past couple of years. We’re in that nascent stage where developers are still throwing around a lot of ideas to see what resonates with users. From VR-assisted roller coasters, amusing time wasters and brokerage software to pain management and dementia therapy, the ability to place the user into another environment is incredibly powerful and full of opportunities for developers with the bravery to challenge what is possible.
So which amazing emerging technologies are making all these experiences possible? Machine learning, natural language processing (NLP), harnessing Big Data—all of these.
Machine learning is already being used for dynamic personalization on websites all around us. Think Netflix recommending movies based on your watch history and queue, or Amazon suggesting related products under “customers who bought this item also bought.”
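The “customers who bought this also bought” pattern can be sketched with simple co-occurrence counting: tally how often pairs of products appear in the same order, then recommend the most frequent co-purchases. The orders below are invented sample data, and production recommenders use far more sophisticated models, but the core idea is the same.

```python
from collections import Counter
from itertools import combinations

# Invented sample orders; each set is one customer's basket.
orders = [
    {"tent", "sleeping bag", "headlamp"},
    {"tent", "sleeping bag"},
    {"tent", "camp stove"},
    {"headlamp", "batteries"},
]

# Count how often each pair of products is bought together.
co_counts = {}
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def also_bought(product, top_n=2):
    """Return the products most often purchased alongside `product`."""
    return [item for item, _ in co_counts.get(product, Counter()).most_common(top_n)]

print(also_bought("tent"))
```

Netflix-style personalization layers user history and learned similarity on top of this, but co-occurrence is the intuition behind “people who bought X also bought Y.”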
Real-time natural language processing in the form of voice control and chatbots is just beginning to come into widespread use, and it still has plenty of rough edges. While there are great examples of this technology driving a new paradigm (e.g., Alexa, Google Home, Cortana, Siri), there are also plenty of reminders that it still needs improvement (e.g., Microsoft’s Tay chatbot, the Alexa dollhouse-ordering incident).
Behind all of this technology are tons of data being collected from tons of sources, but processing tends to be done in a batch mode after the fact rather than in real time.
It’s clear that advancements in NLP and artificial intelligence will drive the need for more conversational interfaces. With the rise of real-time analytics and the ability to harness Big Data, the more interactive, personalized, contextual experiences above will be commonplace.
What does this mean for complex domains?
Since we at Expero specialize in hard problems in complex domains, the future looks exciting. Here are just a few ways the Next UX Wave might help out these nuanced complexities:
Healthcare:
Consider a hectic emergency room, with docs, nurses, techs, admins, patients, families and even pharmaceutical reps running from one task to the next, often interrupted mid-flow. What could a conversational UI offer them?
Or what about a patient with dementia in an assisted living facility? She gets agitated every evening, but if she puts on her headgear and goes virtually to Myrtle Beach, she calms down immediately.
Enterprise:
We’re already seeing enterprises leverage conversational UI in the form of chatbots that can answer relatively simple questions or help consumers complete relatively simple tasks. The chatbots are only going to get smarter and the tasks more complex.
Education & Training:
Consider the benefits of allowing military or law enforcement trainees to “experience” real, life-threatening scenarios through virtual reality. Or teaching surgical students how to operate without a human guinea pig. Or even just taking elementary school students in California on a virtual trip to the Bronx Zoo in New York.
The future is exciting, and we are ready to tackle the coming innovations and challenges head-on.