About Ahmed Bouzid

Dr. Ahmed Bouzid is the Sr. Director of Product and Strategy at Angel. Ahmed has over 15 years of experience in the fields of Speech Automation and Natural Language Processing and has written extensively on Voice Automation and Voice User Interface design. He holds a Bachelor's degree in Computer Science from The George Washington University, a Master's degree in Computer Science from Virginia Tech, and a PhD in the Philosophy of Science, also from Virginia Tech. Ahmed is a co-inventor on several patents, awarded and pending, relating to Natural Language Processing, Speech Automation, and Mobility. He is also the co-author of "The Elements of VUI Style," available on Amazon.com at: http://www.amazon.com/The-Elements-VUI-Style-Practical/dp/1461188172. Ahmed can be followed on Twitter @Didou.

Tips for Effective IVR Menu Design Part 2

Lost in an IVR?

Well-designed IVR menus are essential to customer service. In my last post, I covered a few tips for IVR menu design. Here are a few more that can help your IVR system get customers what they need without frustration.

 

1. Never allow holes in your Dual Tone Multi-Frequency (DTMF or touch-tone) choices.

We say that a menu has a hole if the options presented are not sequential. A menu that offers the user the option to press 1, 2 or 4 has a hole. A menu that offers the options 1, 2 and 3 does not. Avoiding holes helps IVR systems avoid confusing users.
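To make the check mechanical, here is a minimal sketch in Python (the function name and key lists are mine, purely for illustration) that flags a DTMF menu with holes:

```python
def has_holes(dtmf_keys):
    """Return True if the DTMF choices are not a contiguous run starting at 1."""
    keys = sorted(dtmf_keys)
    return keys != list(range(1, len(keys) + 1))

# has_holes([1, 2, 4]) -> True   (hole: nothing is assigned to 3)
# has_holes([1, 2, 3]) -> False  (no hole)
```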

 

2. Mark the caller's current position in the menu tree.

A simple “Main menu,” played prior to listing the menu items, will reduce user confusion as to “where” they are in the dialog. Marking the menu position becomes even more important as the user is led deeper into the menu tree. When you lead a user down a menu path, announce a menu header each time you traverse a level and then list the sub-menu options. On a no-input or a no-match, list the full path before replaying the menu prompt.

Example:

System: Main menu: you can say, “Check balance,” “Withdraw funds” or “Transfer funds.”

User: Transfer funds.

System: Transferring funds. Which account do you want to transfer funds from? You can say, “Checking,” “Savings” or “Money market.”

User: Savings.

System: Transferring funds from savings.
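As a rough sketch of the rule rather than production code, the following Python snippet announces only the current header on a normal visit and replays the full path after a no-input or no-match; the function and parameter names are hypothetical, not tied to any particular IVR platform:

```python
def prompt_for(menu_path, options, error=False):
    """Build a menu prompt that marks the caller's position in the tree.

    menu_path: headers traversed so far, e.g. ["Main menu", "Transfer funds"].
    options:   the choices offered at the current level (at least two).
    error:     True after a no-input or no-match, so the full path is replayed.
    """
    header = ": ".join(menu_path) if error else menu_path[-1]
    quoted = [f'"{opt}"' for opt in options]
    return f'{header}: you can say {", ".join(quoted[:-1])} or {quoted[-1]}.'

# First visit:      prompt_for(["Main menu"],
#                              ["Check balance", "Withdraw funds", "Transfer funds"])
# After a no-match: prompt_for(["Main menu", "Transfer funds"],
#                              ["Checking", "Savings", "Money market"], error=True)
```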



Tips for Effective IVR Menu Design Part 1

IVR Menu Design

As primitive a mechanism as they may seem to be, menus remain the most effective way to elicit information from users. The system offers a list of options, users pick what they want and the system moves on to the next step. Nothing could be more straightforward—yet, if certain basic principles are not observed, menus can easily become very difficult to use.

This is the first in a two-part series in which I’ll cover 12 guidelines to help you design usable menus. For today, here are the first six.

1. Present the most requested items first.

Not all menu items are created equal. If you know which items are requested most frequently, place those items at the head of the menu list.
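If your call logs give you request counts, the ordering step is trivial; here is a small Python sketch with made-up numbers:

```python
# Hypothetical request counts pulled from call logs.
request_counts = {"Check balance": 5200, "Transfer funds": 1800,
                  "Open account": 400, "Branch hours": 150}

# Present the most requested items first.
ordered_menu = sorted(request_counts, key=request_counts.get, reverse=True)
# -> ["Check balance", "Transfer funds", "Open account", "Branch hours"]
```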

2. Keep the menu list to four items or fewer.

Because users have to try to remember all the options presented, try to keep your menus to four items or fewer. If you need to present the user with more than four items, split the list into two: the first list should present the user with the items they are most likely to request, with the last option granting access to the second list.
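One way to implement the split, sketched here in Python with a hypothetical “More options” escape item, is:

```python
def split_menu(items, max_items=4):
    """Split a long menu in two: the likeliest items first, with the last
    slot reserved for access to the remaining options."""
    if len(items) <= max_items:
        return [items]
    first = items[:max_items - 1] + ["More options"]
    rest = items[max_items - 1:]
    return [first, rest]

# split_menu(["Check balance", "Transfer funds", "Withdraw funds",
#             "Open account", "Branch hours"])
# -> [["Check balance", "Transfer funds", "Withdraw funds", "More options"],
#     ["Open account", "Branch hours"]]
```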

3. Keep the menu depth to three levels or fewer.

People hate deep menus. They are exasperated by them. And the deeper the menu, the stronger their feeling that they are being led into a blind alley with little hope of getting where they want to go. If your menu depth is more than three, go back to the drawing board and see if you can’t consolidate some of those tree branches.
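A quick way to audit depth during design reviews is to represent the menu tree as nested dictionaries and measure it; a minimal sketch (with an invented menu) follows:

```python
def menu_depth(menu):
    """Depth of a menu tree expressed as nested dicts; leaves are None."""
    if not isinstance(menu, dict):
        return 0
    return 1 + max(menu_depth(child) for child in menu.values())

menu = {"Accounts": {"Checking": None, "Savings": None},
        "Loans": {"Auto": {"New": None, "Refinance": None}}}

assert menu_depth(menu) == 3   # anything deeper -> consolidate branches
```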

4. Use the construct “You can say….”

If your application is speech enabled, use the construct, “You can say….” to list the menu options.

Example:

System: You can say, “Books,” “Magazines” or “Newspapers.”
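Generating the construct programmatically keeps its wording consistent across the application; here is a tiny Python sketch (the function name is mine):

```python
def you_can_say(options):
    """Render the "You can say..." construct for a speech-enabled menu."""
    quoted = [f'"{opt}"' for opt in options]
    return f'You can say, {", ".join(quoted[:-1])} or {quoted[-1]}.'

# you_can_say(["Books", "Magazines", "Newspapers"])
# -> 'You can say, "Books", "Magazines" or "Newspapers".'
```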

5. Avoid the construct “For X, say X, for Y, say Y, for Z, say Z.”

Simply rewrite the menu prompt as, “You can say, ‘X,’ ‘Y’ or ‘Z.’” In cases where you can’t find X, Y or Z wordings that accurately convey the meaning of the options, use the construct “To A, say ‘X’; to B, say ‘Y’; to C, say ‘Z,’” where “To A” briefly explains what the option means.

Example:

System: To get your current balance, say, “Check balance;” to open a new account, say, “Open account;” to transfer funds from one account to another, say, “Transfer funds.”
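For the fallback construct, a companion sketch (again, illustrative names only) takes (description, phrase) pairs and produces the “To A, say ‘X’” form:

```python
def descriptive_menu(pairs):
    """Build a "To A, say 'X'" prompt from (description, phrase) pairs."""
    parts = [f'to {desc}, say, "{phrase}"' for desc, phrase in pairs]
    prompt = "; ".join(parts) + "."
    return prompt[0].upper() + prompt[1:]

# descriptive_menu([("get your current balance", "Check balance"),
#                   ("open a new account", "Open account"),
#                   ("transfer funds from one account to another", "Transfer funds")])
```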

6. Don’t use “Please select from the following options.”

This is a jaded phrase that needs to be retired. Just get to the point!

These are a good start—but stay tuned for next time when we’ll be coming back with even more best practices.

 

Want more content like this? Check out our quarterly email with IVR best practices.


The Elements of Tuning

No matter how carefully you crafted your VUI design, or how diligently the design was implemented, or how thoroughly the implementation was tested, your application will need regular and careful tuning once deployed if your aim is to maintain a world-class, highly usable voice solution.

Tune up

To effectively tune your application, you should have at your disposal three sources of information: (1) call logs, which will enable you to identify patterns across calls (e.g., where are people hanging up?); (2) call recordings, which will enable you to understand the nature of a problem (why are people hanging up?); and (3) your callers themselves, usually consulted by assessing their level of satisfaction with the solution.


Here are the basic questions that need to be asked in order to begin tuning a voice application:


Where are people hanging up? A hang-up prior to completion of a task is usually a sign of frustration. If the goal of your application is automation, your first tuning task is to identify such hang-up spots in your application and understand why people are hanging up.
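How you pull this from your platform’s logs will vary; as a sketch of the idea, assume each call record carries a “completed” flag and the “last_state” the caller was in when the call ended (both field names invented here):

```python
from collections import Counter

def hangup_hotspots(call_logs):
    """Rank dialog states by how many callers hung up there before finishing."""
    spots = Counter(log["last_state"] for log in call_logs if not log["completed"])
    return spots.most_common()   # worst offenders first

# hangup_hotspots(logs) -> [("transfer_funds_confirm", 212), ("main_menu", 87), ...]
```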


Where are people asking to be routed to an agent? If you have designed your application with the goal of empowering the caller, you must have provided the caller with the option to route to an agent. A caller actively asking to speak to an agent is a caller who has decided that the application is not successfully enabling them to serve themselves. This is especially true of callers who have engaged the application over several minutes of interaction and then decided to bail out.


Where are people saying the wrong thing? The aim here is to identify those spots in your application where no-match failures are significantly higher than the average or the expected rate. The remedy is to listen to the prompt the caller hears and then to what people are saying in response to that prompt. Then adjust your application either by rewriting the prompt or by expanding the language the system is listening for so that it covers what callers are actually saying.
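In the same spirit, here is a small sketch for spotting those no-match hotspots, assuming each recognition event is logged with a “prompt” name and a “result” of either “match” or “nomatch” (hypothetical field names):

```python
def nomatch_hotspots(events, factor=1.5):
    """Flag prompts whose no-match rate is well above the application average."""
    totals, nomatches = {}, {}
    for e in events:
        totals[e["prompt"]] = totals.get(e["prompt"], 0) + 1
        if e["result"] == "nomatch":
            nomatches[e["prompt"]] = nomatches.get(e["prompt"], 0) + 1
    if not totals:
        return {}
    overall = sum(nomatches.values()) / sum(totals.values())
    return {p: nomatches.get(p, 0) / totals[p]
            for p in totals
            if nomatches.get(p, 0) / totals[p] > factor * overall}
```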


Where are people not saying anything? These are the spots in your application where the caller goes quiet on you. This usually occurs because the prompt is confusing or because the caller was asked for some information that they don’t have (or don’t have ready access to, such as a subscription ID or an account number). If the issue is a lack of clarity or ambiguity, then re-craft your prompt (see Chapter 3). If the issue is a lack of readiness, then give the caller the time they need to retrieve the information, or suggest that they call back when they have it handy. Another strategy is to inform the caller at the very outset of the interaction that the subscription ID or the account number will be needed.


Where are people speaking too soon? At times, callers are impatient and speak sooner than they should, often missing crucial information or instructions. To remedy, either turn the barge-in setting off, or re-craft the wording of the prompt the caller is interrupting.


What kind of noise levels are your callers calling from? When you listen to your recordings, pay attention to the background noise and how it is affecting your no-match error rates.


What options are people asking for? If you discover that 80% of your callers are checking their savings balance, then ask 100% of your callers if they are calling to check their savings balance. By definition, 80% of the time you will be right.


How are people feeling about the application? You can probably get a good sense of how people feel about the application by just listening to the tone of their voice in your call recordings.


Follow-up study to be presented at SpeechTEK

Susan Hura, one of the head organizers of SpeechTEK this year, just posted the following on the VUIDS Yahoo Groups list:

For those of you coming to SpeechTEK next month, Tim Pearce from Dimension Data and Mike Bergelson from Cisco are going to present year 2 data from the Alignment Index at the conference. We’re kicking off the Business Goals track with this session, Monday, August 18, 10:15-11 AM. We’ll also be hearing about a similar study conducted in the EU by VoiceObjects.

Here is a link to the session.


Vendors vs. Users: Interesting Alignment Study

Just came across a fascinating study by Dimension Data (in collaboration with Cisco) on the perception gap between “vendors” and “consumers” of speech-enabled self service solutions. By “vendors” the study refers to platform developers, system integrators, voice application developers, and speech technology vendors. 128 such vendors were surveyed for the study. By “consumers” they refer to callers who have interacted with speech-enabled self-service applications. They surveyed 1,203 such consumers.

Misalignment

The key findings revolve around 6 questions:

(1) How often would you prefer to use a speech recognition system rather than a touch-tone system? 9% of vendors answered “As little as possible,” while 45% of users gave that answer. A huge disconnect. On the flip side, 47% of users gave a qualified “Yes” — that is, they would prefer speech under some circumstances (depending on time of day, where the caller is, etc.), which tells us that users are not necessarily reflexively rejecting speech-enabled automation under all circumstances.

(2) What do you think is the main reason organizations provide automated services in their call centers? 69% of vendors said “to save money” compared to 54% of users. In other words, callers are no dupes: they fully understand what motivates the deployment of these solutions.

(3) What do you think is the most important benefit of using an automated system when you phone a call center? 51% of vendors mentioned “to avoid wait time,” while 49% of users mentioned “24 x 7 service” against only 18% who mentioned “Avoid wait time”! A remarkable misalignment and a clear opportunity for marketers and designers to exploit to increase adoption.

(4) In general, when you’ve used a speech recognition system, which of the following best describes how well it helped you deal with your query? 77% of vendors said that it “Partially addressed the reason I called” while only 43% of users did. Another large gap. 2% of vendors responded with “Did nothing I needed,” while 13% of users gave that response. Again, another noticeable gap that points to excessive optimism from vendors. On the other hand, only 8% of vendors responded with “Fully addressed the reason I called,” while 18% of users gave that answer. In other words, it seems that vendor answers cluster in a mushy, conservative middle rather than reflecting insight into actual user reception.

(5) Having used a speech recognition automated system, would you now…? 44% of vendors responded with “Be neutral to use one again” vs. only 28% of users giving that answer. What is noteworthy is that a greater proportion of users (36%) responded with “Be happy to use one again” vs. 32% of vendors, and a greater proportion of users (also 36%) responded with “Be reluctant to use one again” vs. 24% of vendors. In other words, just as with question 4, users are more opinionated and less neutral in their disposition than vendors.

(6) The thing that annoys or irritates me most about using an automated speech application is when…. Vendors’ number one answer, at 41%, was “System didn’t understand me,” while users’ number one answer was “Transfer to agent with no context.” This is a fascinating disconnect. Only 17% of users responded with “System didn’t understand me,” which simply means that it’s not speech recognition that users find annoying or irritating, but the experience with the application: an additional 16% of users said “Can’t skip ahead” and 14% said “No alternatives.” In other words, 67% of dissatisfaction revolves around the experience with the application. Vendors, by contrast, focused on technology, in this case ASR and CTI (“Transfer to agent with no context” receiving 38%); “Can’t skip” received 4% and “No alternative” a mere 1%.

The report gives a couple of general recommendations such as establishing “cross-functional engagement within organizations” and ensuring “contributions from non-technology stakeholders, e.g., marketing, customer services, and usability experts.” But that is no revelation to anyone who seriously engages in voice user interface design.

What would have made the study complete is the inclusion of a third category of stakeholders: the companies that deploy these applications, i.e., the actual customers of the vendors. I suspect that, since many of these customers are sold on the value of self-service applications by the very vendors surveyed in the study, a parallel misalignment between customer expectations and those of the ultimate users also holds.

The authors promise to run the survey year over year. Let’s keep our eyes open. Hopefully, vendors and customers will read the report and begin to actually align their goals and values with those of end users.
