
January 5th, 2021

Getting Back on the Happy Path

Karen White

How to guide users toward successful interactions.

In software development, the happy path refers to situations where the user is behaving as expected, or doing what they're "supposed" to do. Of course, the existence of happy paths suggests that there must also be the opposite: unhappy paths. But despite the bias inherent in the name, unhappy paths aren't necessarily bad. Just because a user isn't doing what was expected doesn't mean their usage isn't valid.

When designing and building AI assistants, happy and unhappy paths reflect the success of the conversation. Happy paths propel the user forward toward their goal: getting an answer, resolving a problem, or completing a transaction. Unhappy paths hit dead ends: the user goes off on a tangent, gets misunderstood, or ends up frustrated. Unhappy paths are unavoidable, but they can become happy paths when the assistant's design accounts for them.

In this blog post, we'll talk about 5 strategies for turning unhappy paths into happy ones. First, we'll discuss how to figure out which unhappy paths you should focus on. Then, we'll address a few ways you can use conversation design to get the user back on track.

1. Build for the right unhappy paths.

Let's start with the bad news: there are a lot of potential unhappy paths a user could take. Chatbots and voice assistants are open-ended, and that means there are few restrictions on the type of input users can provide. When you start to analyze all of the possible ways conversations can go off track, the possibilities start to seem...infinite.

But here's the good news: just because an infinite number of unhappy paths might exist, that doesn't mean they'll actually happen. When you have an assistant running in production, the things users say can (and will) surprise you. But if you take the time to read through conversations between users and your assistant, you'll notice that patterns start to take shape.

For example, you might notice a significant number of conversations where the user initiates a conversation just to test whether they're talking to a human or a bot. Or, more seriously, let's say your assistant has a flow where the customer is sent a link to a secure page where they can pay their bill. Instead of following the link, a percentage of customers type their credit card number directly into the chat, causing incomplete payments and problems with improper storage of credit card data.

Once you've identified that these types of situations are occurring, you can address them with changes to the assistant's design. You might build out a dialogue to handle users who are just curious. Or in the case of the mis-entered credit card payment, you might adjust the assistant's response text to remind users not to enter their credit card information directly into the chat (some input validation would help here as well).
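If you're building with Rasa, for example, that reminder can live directly in the response that delivers the payment link. The sketch below is illustrative only; the response name utter_send_payment_link and the payment_url slot are hypothetical stand-ins for whatever your flow actually uses:

    # domain.yml (Rasa 2.x); response name and wording are placeholders,
    # and {payment_url} assumes a slot of that name is set earlier in the flow
    responses:
      utter_send_payment_link:
        - text: "Here's your secure payment link: {payment_url}. For your security, please don't type your card number into the chat; it can only be processed on the payment page."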

Handling an infinite number of unhappy paths can feel like trying to boil the ocean, until you realize that the number of unhappy paths users actually follow is much more manageable. We recommend only building for the unhappy paths you observe users taking. Otherwise, you risk spending cycles writing intents and dialogues that may never be used. Rasa X allows you to review conversations between testers (important early in the development/prototyping phase) and real users (essential for fine-tuning and improving the assistant after launch).

2. Let the user know their options.

When talking to a virtual assistant, there are few questions more disorienting than "How can I help you?"

Too often, this opener launches a frustrating guessing game, with the user ticking off requests and the chatbot demurring that it hasn't learned how to handle that skill. Sending the user into the void of possibilities is like opening a restaurant without printing menus: it puts the burden on the user to choose a path without knowing their options.

Conversational interfaces are effective alternatives to crowded navigation menus and toolbars in many applications. With their single entry point and ability to understand the language of the user, conversational interfaces can greatly simplify the user experience. But as Bruce Tognazzini states in his Principles of Interaction Design: "Any attempt to hide complexity will serve to increase it."

The variety of tasks an assistant can help with is a type of complexity, and it does the user a disservice to hide that complexity completely. Instead, the assistant should take into account which stage of the interaction the user is in and present an appropriate set of options. At the beginning of the conversation, it's important for the assistant to define the scope of its domain. Most users know that virtual assistants don't have unlimited knowledge. By setting some boundaries on the interaction, you help the user orient themselves and choose the most direct route to solving their problem.

Cueing users to possible next steps isn't only for the beginning of the conversation. After a user completes a task or transaction, give them concrete next steps along with the option to end the interaction. Instead of asking, "Is there anything else I can help you with?" consider suggesting two or three tasks related to what the user just finished. Sometimes the next steps follow naturally from the business use case: for example, a customer who's just finished booking a flight is likely to need a car rental or hotel next. Other times, the suggestion serves as a return to the "main menu," where the user can revisit the available options and follow a new path, or exit the conversation.
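In a Rasa assistant, one lightweight way to do this is to attach buttons to the welcome response in the domain file, so the opening message doubles as a short menu. The sketch below assumes Rasa 2.x; the intent names and wording are placeholders for your assistant's actual skills:

    # domain.yml (Rasa 2.x); intent names and copy are placeholders
    responses:
      utter_greet:
        - text: "Hi! I can help you check an order, reset your password, or book a service appointment. What would you like to do?"
          buttons:
            - title: "Check an order"
              payload: "/check_order_status"
            - title: "Reset my password"
              payload: "/reset_password"
            - title: "Book an appointment"
              payload: "/book_appointment"

The same pattern works at the end of a flow: a follow-up response with two or three buttons for related tasks, plus one to end the conversation.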

3. Give users a way to correct errors.

No matter how well you've designed your assistant, errors and misunderstandings are bound to occur. But a misunderstanding doesn't need to be a disaster if the assistant can make a correction and keep going.

When a user sends a message, the assistant attempts to classify it against the list of intents, or message topics, it knows how to recognize. Along with the intent it chooses, the model also produces a confidence score representing how certain it is that the choice is correct. When that confidence falls below a certain threshold (say, 40%), the assistant can treat the message as a likely misunderstanding and attempt to correct it.
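In Rasa Open Source 2.x, for instance, this threshold is set on the FallbackClassifier component in the NLU pipeline; the minimal sketch below mirrors the 40% figure used above:

    # config.yml (Rasa Open Source 2.x)
    pipeline:
      # ...featurizers and an intent classifier such as DIETClassifier go here...
      - name: FallbackClassifier
        # If no intent is predicted with confidence >= 0.4, the message is
        # reclassified as the special nlu_fallback intent.
        threshold: 0.4
        ambiguity_threshold: 0.1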

When this happens, we call it a fallback. A fallback is an exchange that tries to get the conversation back on track by asking the user to rephrase their message. If the assistant still can't match the user's input to something it knows how to handle, it offers an alternative resource.

Thoughtfully designed fallback flows can go a long way toward easing frustration. There are few things a chatbot user wants to hear less than "Sorry, I didn't get that." One tactic used by the Two-stage Fallback in Rasa assistants is letting the user know which intent the bot predicted with low confidence and asking the user to confirm it. This gives the user some visibility into what the assistant is doing, even if the predicted intent wasn't right. If the user indicates the prediction was wrong, the assistant asks them to phrase their message another way. If the assistant still can't match the message to an intent with high enough confidence, it escalates to a final fallback action. This final fallback can be anything you choose as a bot designer. Effective options include transferring the conversation to a human agent or linking to a self-serve resource like a customer forum.
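Roughly, this is how the pieces fit together in a Rasa 2.x assistant (assuming the FallbackClassifier setup sketched earlier): a rule maps the nlu_fallback intent to the built-in action_two_stage_fallback, and the rephrase prompt and final fallback message are ordinary responses you define in the domain. Treat the wording as placeholder text:

    # rules.yml
    rules:
      - rule: Run the Two-Stage Fallback on low NLU confidence
        steps:
          - intent: nlu_fallback                 # set by the FallbackClassifier
          - action: action_two_stage_fallback
          - active_loop: action_two_stage_fallback

    # domain.yml
    responses:
      utter_ask_rephrase:
        - text: "I'm sorry, I didn't quite get that. Could you say it another way?"
      utter_default:
        - text: "I still couldn't understand. Would you like to talk to a human agent?"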

Importantly, the Two-stage Fallback flow gives your assistant several chances to recover the conversation, something a statement like "Sorry, I didn't get that" doesn't do.

4. Account for off-topic requests.

When you first begin designing your assistant, you're probably focused on building out the conversation flows that support your use case: helping customers reset their passwords, look up an order status, or book a service appointment. But it's important not to overlook the silly, random, friendly (and not-so-friendly) things users will say to your bot as well: a category of dialogue known as chitchat.

Whether your assistant is gated behind a login or available to the public, be prepared for users to go off-topic. People are curious about virtual assistants. They want to test the limits and see if they can elicit a funny response.

First and foremost, it's important to know what type of chitchat your users engage in. As we discussed earlier, you can use Rasa X to view the messages and conversations your assistant is receiving. Reviewing messages is a great way to see what users are actually saying. Are they challenging the bot or sending insults? Maybe they're asking the bot personal questions like its age or birthday (yes, people do this!). Once you have this data, you can decide which messages are common enough to warrant addressing.

One common strategy in Rasa assistants is to group chitchat into sub-intents, like chitchat/ask_age or chitchat/bot_insult. Each sub-intent is matched to a corresponding bot response, where you can define the text the bot should send back to the user. Chitchat responses can be a fun place to express your brand and your bot's personality, but it's also a good idea to nudge the user back onto the happy path at the same time. For example, if the user asks the assistant's age, it might reply: "Age is just a number, but I'm always looking forward to what's next. You can ask me to look up nearby restaurants or book a reservation."
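As a rough sketch of that setup in Rasa 2.x (it assumes a ResponseSelector component in the NLU pipeline; the example phrasings and response texts are placeholders):

    # nlu.yml; example phrasings are illustrative
    nlu:
      - intent: chitchat/ask_age
        examples: |
          - how old are you?
          - what's your age?
      - intent: chitchat/bot_insult
        examples: |
          - you're useless
          - this bot is terrible

    # rules.yml; a single rule covers every chitchat sub-intent
    rules:
      - rule: Respond to chitchat
        steps:
          - intent: chitchat
          - action: utter_chitchat

    # domain.yml; each sub-intent gets its own response
    responses:
      utter_chitchat/ask_age:
        - text: "Age is just a number, but I'm always looking forward to what's next. You can ask me to look up nearby restaurants or book a reservation."
      utter_chitchat/bot_insult:
        - text: "I'm still learning. If I'm not being helpful, I can connect you with a human agent."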

5. Offer an escape hatch.

We've seen how the Two-stage Fallback can give users a chance to correct misunderstandings and stay on the happy path, but sometimes the best option is to give the user a way to exit the interaction. Few experiences are more frustrating for a user than feeling trapped in a loop, with no other option, when they aren't getting what they need.

One popular strategy is human handoff: transferring the user to a human agent. From a conversation design standpoint, a few considerations help make this transfer a smooth one.

First, make sure that a fallback isn't the only path to getting a human on the line. Any time a user expresses that they want to speak with a human, that request should be followed by the transfer action. Similarly, if a user expresses that the conversation is unhelpful, a transfer can be offered as a possible next step. You can learn more about the technical side of implementing human handoff on our blog.
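In a Rasa assistant, one way to guarantee that is a rule that triggers the handoff whenever the user asks for a person, independent of any fallback. In the sketch below, both the request_human intent and the action_handoff_to_human custom action are hypothetical names; the transfer logic itself depends on your live-chat provider:

    # rules.yml; request_human and action_handoff_to_human are hypothetical
    # names, and both would also need to be listed in the domain
    rules:
      - rule: Hand off whenever the user asks for a person
        steps:
          - intent: request_human
          - action: action_handoff_to_human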

A live agent transfer isn't the only way to offer external support. Other strategies include allowing the user to leave a message for the support team or requesting a callback during regular business hours.

Conclusion

When your assistant is still early in production, you might have just a few happy paths and many unhappy ones. Over time, you can turn the unhappy paths into successful interactions, but only if you have the data to see which roadblocks users are running into.

This fits into a larger approach to building better assistants called conversation-driven development (CDD). In conversation-driven development, you use insights from past user interactions to iteratively improve your assistant. Just as you do in Agile development, conversation-driven development relies on getting early prototypes into the hands of users, learning from those first interactions, and making small, frequent improvements. CDD also includes considerations unique to machine learning software, like turning past conversations and messages into new training data for your machine learning model.

Conversation-driven development is a strategy that encompasses every role on a conversational AI team: developers, data scientists, product owners, and conversation designers. If you'd like to learn more about how CDD and designing user-oriented assistants can help you get your users back on the happy path, check out a few of these resources: