Talking conversational UI design and the challenges it brings to UX teams with Aviva’s Senior Experience Designer Stratis Valachis


Interdisciplinary UX designer, advocate of Lean methodologies and Omnichannel design thinking – everything you could possibly want in an Experience Designer. Stratis Valachis is leading the way for conversational user experience at British multinational insurance company, Aviva.

Justinmind got in touch to hear how he’s innovating in order to make the insurance and finance industry more customer-friendly.

Hi Stratis! Could you briefly explain your experience and expertise for our readers?

I started as a visual and motion designer but quickly moved into UX as I was always fascinated by identifying people’s needs and solving problems for them. I’ve worked at many startups in the US and in Europe and I’m currently at Aviva’s Digital Garage in Shoreditch, leading the design of some of the company’s innovation and conversational UX projects. Our goal is to change the insurance and long-term finance industry through technology and customer-centred design.

What’s an average day like for you as Experience Designer over at Aviva?

I usually start by attending various standups, but the rest of my day varies. Some days I’ll be facilitating team workshops, others observing user interviews and usability testing, others sketching, wireframing and, since I started working on Alexa, voice prototyping! Every day is a different and fun challenge.

Your current projects are focused on CUI – could you explain what this is?

CUI stands for conversational user interface. It’s a term that describes user interfaces that emulate human to human conversations. Products using CUIs, like Amazon Echo’s Alexa, ask the user various questions in succession to learn about their needs and provide them with relevant information. Instead of navigating the interface you actually tell the machine what you need it to do for you.

Here are the three types of conversational user interface examples:

  • Voice (e.g. Alexa on Amazon Echo, Google Assistant on Google Home)
  • Visual (e.g. Facebook messenger, Slack)
  • Visual-voice hybrids (e.g. Siri, Google Assistant on Android).

When does it make sense to use a conversational UI and why?

It’s important to remember that you shouldn’t use a conversational interface just because it’s a cool trend. Both voice and visual CUIs have many usability constraints which in many cases make them less usable than traditional graphical user interfaces (GUIs). Those constraints include poor user control and error prevention, reliance on recall, and linearity.

However, those usability constraints do not mean that conversational apps cannot add great value to your products and provide a great experience. There are many use cases for which they are a superior option over other interfaces both in terms of enhancing the user experience and in terms of cutting organisational costs. Below you’ll find some occasions where CUIs really shine. I’ve grouped them based on voice and visual:

Voice

Short simple commands:

Since we speak faster than we type, voice interfaces are great for short utilitarian tasks such as searching, calling a cab, setting an alarm or turning off the lights.

Hands-free tasks:

The usability of a product depends on its context, and in some contexts voice interfaces allow users to perform tasks that they wouldn’t be able to perform with non-voice interfaces. Some examples are listening to a recipe while cooking, making a call while driving, or hearing the headlines while you’re dressing for work.

Visual

Customer Support:

Support bots with CUIs can be a great way to answer simple questions quickly. They serve as a form of advanced searching, as they capture the user’s intent and then if needed they follow up with a number of questions to better understand the user’s preferences and provide them with more accurate information.

This not only provides users with a better experience compared to waiting on the phone, waiting for a live chat agent or scanning the page for relevant FAQs; it also reduces operating costs. It’s important to keep in mind that, since most bots at the moment are based on decision trees, they can only help with simple tasks. That’s why you should always have the option to hand off to a human if the user has a complex request.
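To make the decision-tree idea concrete, here is a minimal sketch of a support bot that walks a tree of canned prompts and hands off to a human when it can’t classify the user’s reply. The node names, wording and topics are invented for illustration, not taken from Aviva’s actual bot.

```python
# A decision-tree support bot, sketched as a dictionary of nodes.
# Each node has a prompt and the branches a recognised reply can take.
DECISION_TREE = {
    "start": {
        "prompt": "Hi! Are you asking about a 'claim' or a 'policy'?",
        "branches": {"claim": "claim_type", "policy": "policy_info"},
    },
    "claim_type": {
        "prompt": "Is this a 'new' claim or an 'existing' one?",
        "branches": {"new": "new_claim", "existing": "existing_claim"},
    },
    "new_claim": {"prompt": "You can start a new claim online.", "branches": {}},
    "existing_claim": {"prompt": "Please have your claim number ready.", "branches": {}},
    "policy_info": {"prompt": "Your policy documents are in your account area.", "branches": {}},
}

def respond(node_id, user_input):
    """Return (next_node_id, reply). Unrecognised input triggers a hand-off."""
    node = DECISION_TREE[node_id]
    key = user_input.strip().lower()
    if key in node["branches"]:
        next_id = node["branches"][key]
        return next_id, DECISION_TREE[next_id]["prompt"]
    # The bot can't classify the request: hand off to a human agent.
    return "handoff", "Let me connect you to one of our agents."
```

The hand-off branch is the important design choice here: every unrecognised input falls through to a human rather than looping the user through the tree again.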

Repetitive concierge tasks:

Products with conversational user interfaces that have natural language understanding can be very good for repetitive tasks like booking a meeting or setting a reminder, as the user can quickly type in their preferences instead of navigating the interface. Also, smarter bots with CUIs can learn more about the user’s preferences and be proactive, suggesting common actions or even acting as an intermediary between humans for tasks like scheduling meetings.
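As a hypothetical illustration of “type in your preferences instead of navigating the interface”, a reminder bot could pull the task and time out of a single free-text message. The pattern and field names below are invented for the sketch:

```python
import re

# Extract the task and time from one typed request, e.g.
# "Remind me to pay the gas bill at 9am", so the user never fills in a form.
PATTERN = re.compile(r"remind me to (?P<task>.+?) at (?P<time>\S+)", re.IGNORECASE)

def parse_reminder(text):
    match = PATTERN.search(text)
    if match is None:
        return None  # fall back to asking follow-up questions
    return {"task": match.group("task"), "time": match.group("time")}
```

A real product would use proper natural language understanding rather than one regular expression, but the `None` branch shows the same principle as above: when the bot can’t parse the input, it falls back to asking follow-up questions.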

Personal coaching:

A popular use case of bots with conversational interfaces is coaching. Bots send their users tips to help them with various needs such as finances, training, weight loss and others. The conversational interface makes the experience feel more human, and thus more engaging and personal.


How are you tackling this project at Aviva with regards to voice and visual?

At the moment, at Aviva we have a customer support chatbot that answers common questions from our users; the Aviva skill for Alexa, which helps users get quick answers about insurance terms; and a Facebook Messenger bot for one of our brands – QuoteMeHappy.com – which helps users get a quick home insurance quote.

This is still a new field for us, so we’re experimenting with different concepts in order to learn and improve the ways we utilise these interfaces to meet the needs of our customers. Our vision is to shape a consistent experience across channels, allowing customers to talk to us on the device of their choice.




What goes into creating a CUI?

The thinking remains the same as with any other design project. You still need to start by understanding your users and your business, identify where the problem lies, ideate solutions through divergent thinking, converge, develop, and continuously iterate based on user feedback. Only some of the tools and methods you use will change.

After you have identified a problem and decided that a CUI would provide the best experience for your users, I would recommend the following process:

  • Map out the omnichannel user journey

Setting the right expectations is key to crafting a great user experience. Before you begin designing, make sure you understand how your users are going to find out about your product, in what circumstances they are going to use it, and how you can meet their needs across all the different touchpoints.

  • Define the personality

Humans are empathetic and will always personify your conversational UI. That’s why it’s important to make sure this personality is defined through design. Think of how you would like users to describe the CUI assistant after they use it, and of ways you can achieve this through the conversation.

  • Write down the core parts of the experience as scripts

Some designers jump directly into crafting detailed user flows, but I believe a more efficient way to start is to write down the core parts of the experience like a screenplay. This will make it easier to decide the direction you’d like to follow, and will also help you initiate conversations with stakeholders.

  • Design more granular functionality with user flows

At this stage, you will be ready to start designing your user flows to define the functionality at a granular level. Again, understanding context is crucial. Make sure you think of the different scenarios in which users will interact with your product and the ways they’re likely to phrase their input.

  • Prototype and test

Finally, you need to prototype and test your experience to get feedback from users, and iterate so that you can meet their needs better. The way to do this varies between visual and voice conversational interfaces. For Alexa I test the conversations on Amazon’s voice simulator to hear how they would sound on the actual channel.

When I need to test interactions I work with a developer to create voice prototypes. For custom visual CUIs I mock up the screens and create simple screen-flows. When designing for Facebook Messenger I use Botsociety to get a more accurate idea of what the experience will feel like on the platform.
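The “granular user flows” step above relies on anticipating the different ways users are likely to phrase the same request. As a hypothetical sketch (the intent names and patterns are invented, not from Aviva’s projects), several phrasings can be mapped onto one intent so the flow branches on intent rather than exact wording:

```python
import re

# Map likely user phrasings onto intents. Each intent lists a few
# keyword/regex patterns that signal it; a real system would use an
# NLU service instead of hand-written patterns.
INTENT_PATTERNS = {
    "get_quote": [r"\bquote\b", r"how much .*cost", r"\bprice\b"],
    "make_claim": [r"\bclaim\b", r"\baccident\b", r"\bdamage\b"],
}

def classify(utterance):
    """Return the first intent whose patterns match, or 'unknown'."""
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(pattern, text) for pattern in patterns):
            return intent
    return "unknown"
```

The `"unknown"` result is where the flow design matters most: that is the branch where the bot asks a clarifying question or hands off to a human.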

What are some of the challenges with creating a CUI at Aviva and how are you and your team overcoming them?

The biggest challenge we’re facing with most of our CUI projects at Aviva is setting the right expectations for users. There’s a gap between what users think products using CUIs are capable of, and what those products can offer at the moment. That’s something we’ve observed in both visual and voice interfaces, even with users who had previous experience of the platforms we were testing.

Also, the fact that Aviva has a diverse range of products makes this issue even more challenging, as there is a wide variety of needs that our users will expect our conversational bots to be able to help them with. The key to overcoming this problem is thinking about the omnichannel journey and making sure we set the right expectations at all touchpoints.

How do you see CUIs developing in the coming years?

I believe we will start seeing a lot more hybrid interfaces which retain the best parts of both CUIs and GUIs (graphical user interfaces). We’ve already started seeing examples of this on Facebook Messenger, which has given designers the ability to display webviews with GUI elements within the conversational environment of Messenger bots. These webviews facilitate rich browsing experiences where they’re needed to improve the usability of the product.

A good example is the Swelly bot, which allows users to browse and set their notification preferences through GUI toggles, a task that would be significantly more time-consuming through conversation alone.

Steven is the web editor at Justinmind
