When designing stories, there are two groups of conversational interactions that need to be accounted for: happy and unhappy paths. Happy paths describe when the user is following the conversation flow as you'd expect, always providing the necessary information when prompted. However, users will often deviate from happy paths with questions, chit chat, or other asks. We call these unhappy paths.
It's important for your bot to handle unhappy paths gracefully, but it's also impossible to predict what path a given user might take. Often, developers will try to account for every possible diverging path when designing unhappy paths. Planning for every possible state in a state machine (many of which will never be reached) requires a lot of extra work and increases training time significantly.
Instead, we recommend taking a conversation-driven development approach when designing unhappy paths. Conversation-Driven Development promotes sharing your bot as early as possible with test users and collecting real conversation data that tells you exactly how users diverge from the happy paths. From this data, you can create stories to accomplish what the user is requesting and start to think about ways to guide them back into a happy path.
When to Write Stories vs. Rules
Rules are a type of training data used by the dialogue manager for handling pieces of conversations that should always follow the same path.
Rules can be useful when implementing:
One-turn interactions: Some messages do not require any context to answer them. Rules are an easy way to map intents to responses, specifying fixed answers to these messages.
Because rules do not generalize to unseen conversations, you should reserve them for single-turn conversation snippets, and use stories to train on multi-turn conversations.
An example of a rule where the bot returns a fixed response "utter_greet" to a user message with intent "greet" would be:
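In Rasa's YAML training data format, such a rule could be written as follows (a minimal sketch):

```yaml
rules:
- rule: Respond to a greeting
  steps:
  - intent: greet
  - action: utter_greet
```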
For multiple-turn interactions, you should define a story, for example:
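A multi-turn story might look like this sketch (the intent and response names beyond `greet`/`utter_greet` are hypothetical):

```yaml
stories:
- story: greet and offer help
  steps:
  - intent: greet
  - action: utter_greet
  - intent: ask_help          # hypothetical intent
  - action: utter_offer_help  # hypothetical response
```

Unlike the rule above, this snippet is training data for the machine-learning policies, so it can generalize to similar but unseen conversations.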
Managing the Conversation Flow
Here are some tips for managing the conversation flow in your stories:
When to Use Slots to Influence Conversations
Slots act as your bot's memory. When you define a slot, you can choose whether or not it should influence the conversation. Slots with the property `influence_conversation` set to `false` can only store information. Slots with `influence_conversation` set to `true` can affect the dialogue flow based on the information they store.
Slots which influence the conversation need to be added to your stories or rules. This also applies if the slot is set by a custom action. For example, you can use a boolean slot set by a custom action to control the dialogue flow based on its value using the following stories:
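A sketch of such stories, assuming a hypothetical custom action `action_check_order_status` that sets a boolean slot `order_pending`:

```yaml
stories:
- story: order is still pending
  steps:
  - intent: check_order_status
  - action: action_check_order_status   # custom action that sets the slot
  - slot_was_set:
    - order_pending: true
  - action: utter_order_pending

- story: order was delivered
  steps:
  - intent: check_order_status
  - action: action_check_order_status
  - slot_was_set:
    - order_pending: false
  - action: utter_order_delivered
```

The `slot_was_set` step is what lets the dialogue policies branch on the slot value.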
In cases where you don't want a slot to affect the conversation flow, you should set the slot's `influence_conversation` property to `false`. You do not need to include `slot_was_set` events in your stories for slots which do not influence the conversation.
Implementing Branching Logic
When writing stories, sometimes the next action will depend on a value returned in one of your custom actions. In these cases, it's important to find the right balance between returning slots and using custom action code directly to affect what your bot does next.
In cases where a value is used only to determine the bot's response, consider embedding the decision logic inside a custom action as opposed to using a featurized slot in your stories. This can help reduce overall complexity and make your stories easier to manage.
For example, you can convert these stories:
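As a sketch (all names here are hypothetical), two stories that branch on a slot set by a custom action might look like:

```yaml
stories:
- story: it is raining now
  steps:
  - intent: check_weather
  - action: action_check_weather       # sets the "raining" slot
  - slot_was_set:
    - raining: true
  - action: utter_bring_umbrella

- story: it is not raining now
  steps:
  - intent: check_weather
  - action: action_check_weather
  - slot_was_set:
    - raining: false
  - action: utter_no_umbrella_needed
```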
into a single story:
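With the decision moved into the custom action, a single story suffices (hypothetical names):

```yaml
stories:
- story: check the weather
  steps:
  - intent: check_weather
  - action: action_check_weather_and_respond   # decides the response itself
```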
with the custom action code:
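A hedged sketch of such a custom action (the action, intent, and response names are hypothetical, and the weather lookup is stubbed out; the decision logic is a plain function so the branching is easy to see):

```python
def check_if_raining() -> bool:
    """Placeholder for a real weather-service lookup."""
    return False

def choose_weather_response(is_raining: bool) -> str:
    """The decision logic: pick a response here instead of branching in a story."""
    return "utter_bring_umbrella" if is_raining else "utter_no_umbrella_needed"

try:
    from rasa_sdk import Action, Tracker
    from rasa_sdk.executor import CollectingDispatcher

    class ActionCheckWeatherAndRespond(Action):
        """Custom action that looks up a value and responds directly."""

        def name(self) -> str:
            return "action_check_weather_and_respond"

        def run(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain: dict):
            # Send the chosen response without setting a featurized slot
            dispatcher.utter_message(response=choose_weather_response(check_if_raining()))
            return []
except ImportError:
    # rasa_sdk not installed; the decision helpers above still illustrate the idea
    pass
```

Because the action returns no `SlotSet` events, nothing about this decision needs to appear in your stories.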
In cases where the value is used to influence the conversation flow going forward, return a featurized slot and reference it in your stories. For example, if you want to collect information about new users, but not returning ones, your stories might look like this:
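A sketch of such stories, assuming a hypothetical custom action `action_check_user_status` that sets a boolean slot `new_user`:

```yaml
stories:
- story: greet a new user
  steps:
  - intent: greet
  - action: action_check_user_status    # sets the "new_user" slot
  - slot_was_set:
    - new_user: true
  - action: utter_ask_onboarding_questions

- story: greet a returning user
  steps:
  - intent: greet
  - action: action_check_user_status
  - slot_was_set:
    - new_user: false
  - action: utter_welcome_back
```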
Using OR statements and Checkpoints
OR statements and checkpoints can be useful for reducing the number of stories you have to write. However, they should be used with caution. Overusing OR statements or checkpoints will slow down training, and creating too many checkpoints can make your stories hard to understand.
In stories where different intents are handled by your bot in the same way, you can use OR statements as an alternative to creating a new story.
For example, you can merge these two stories:
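Two such stories, identical apart from the triggering intent, might look like this sketch (hypothetical names):

```yaml
stories:
- story: say goodbye after thanks
  steps:
  - intent: thank
  - action: utter_noworries
  - action: utter_goodbye

- story: say goodbye after bye
  steps:
  - intent: bye
  - action: utter_noworries
  - action: utter_goodbye
```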
into a single story with an OR statement:
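With an `or` step, the same behavior can be expressed once (hypothetical names):

```yaml
stories:
- story: say goodbye
  steps:
  - or:
    - intent: thank
    - intent: bye
  - action: utter_noworries
  - action: utter_goodbye
```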
At training time, this story will be split into the two original stories.
Consider restructuring your data
If you notice that you are using OR statements frequently in your stories, consider restructuring your intents to reduce their granularity and more broadly capture user messages.
Checkpoints are useful for modularizing your stories into separate blocks that are repeated often. For example, if you want your bot to ask for user feedback at the end of each conversation flow, you can use a checkpoint to avoid having to include the feedback interaction at the end of each story:
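A sketch of this pattern (hypothetical names): each conversation flow ends at the same named checkpoint, and a single story starting from that checkpoint handles the feedback interaction.

```yaml
stories:
- story: report lost card, then ask for feedback
  steps:
  - intent: report_lost_card
  - action: action_handle_lost_card
  - checkpoint: ask_feedback

- story: order a replacement, then ask for feedback
  steps:
  - intent: order_replacement
  - action: action_order_replacement
  - checkpoint: ask_feedback

- story: ask for feedback
  steps:
  - checkpoint: ask_feedback
  - action: utter_ask_feedback
  - intent: inform
  - action: utter_thanks_for_feedback
```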
Do not overuse checkpoints
Checkpoints are meant to make it easier to re-use certain sections of conversation in lots of different stories. We highly discourage using checkpoints inside existing checkpoints, as this increases training time significantly and makes your stories difficult to understand.
Creating Logical Breaks in Stories
When designing conversation flows, it is often tempting to create long story examples that capture a complete conversational interaction from start to finish. In many cases, this will increase the number of training stories required to account for branching paths. Instead, consider separating your longer stories into smaller conversational blocks that handle sub-tasks.
A happy path story for handling a lost credit card might look like:
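A sketch of such a story (the intent and action names are hypothetical, apart from `utter_ask_fraudulent_transactions`):

```yaml
stories:
- story: lost credit card (happy path)
  steps:
  - intent: report_lost_card
  - action: utter_card_locked
  - action: utter_ask_fraudulent_transactions
  - intent: affirm
  - action: action_update_transactions
  - action: utter_ask_mailing_address
  - intent: inform
  - action: action_send_replacement_card
  - action: utter_ask_anything_else
  - intent: deny
  - action: utter_goodbye
```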
Handling a lost credit card involves a series of sub-tasks, namely checking spending history for fraudulent transactions, confirming a mailing address for a replacement card, and then following up with the user with any additional requests. In this conversation arc, there are several places where the bot prompts for user input, creating branching paths that need to be accounted for.
For example, when prompted with "utter_ask_fraudulent_transactions", the user might respond with a "deny" intent if none are applicable. The user might also choose to respond with a "deny" intent when asked if there's anything else the bot can help them with.
We can separate out this long story into several smaller stories as:
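One way to sketch this split (hypothetical names, matching the happy-path example above): each sub-task gets its own short story, and the branching prompts are covered without enumerating every end-to-end combination.

```yaml
stories:
- story: report lost card
  steps:
  - intent: report_lost_card
  - action: utter_card_locked
  - action: utter_ask_fraudulent_transactions

- story: fraudulent transactions found
  steps:
  - action: utter_ask_fraudulent_transactions
  - intent: affirm
  - action: action_update_transactions
  - action: utter_ask_mailing_address

- story: no fraudulent transactions
  steps:
  - action: utter_ask_fraudulent_transactions
  - intent: deny
  - action: utter_ask_mailing_address

- story: nothing else to help with
  steps:
  - action: utter_ask_anything_else
  - intent: deny
  - action: utter_goodbye
```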
Handling Context Switching
Often, users will not respond with the information you ask of them and instead deviate from the happy path with unrelated questions. Using CDD to understand what unhappy paths your users are taking, you can create stories for handling context switching.
Using Rules for Context Switching
Consider this conversation scenario:
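An illustrative transcript (the wording and dollar amount are invented for this example):

```
User: I need to pay my credit card bill.
Bot:  Sure. What amount would you like to pay?
User: What's my account balance, by the way?
Bot:  Your account balance is $1,021.
Bot:  What amount would you like to pay?
```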
In this example, the user is in the middle of paying their credit card bill, asks for their account balance, and is then guided back into the credit card payment form. Because asking for the account balance should always get the same response regardless of context, you can create a rule that will automatically be triggered inside of an existing flow:
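Such a rule might look like this sketch (hypothetical intent and action names):

```yaml
rules:
- rule: Check account balance
  steps:
  - intent: check_balance
  - action: action_check_balance
```

Because this is a rule rather than a story, it fires regardless of the surrounding conversation context.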
By default, the form will continue to stay active and re-prompt for the necessary information, without having to create an additional training story.
Using Stories for Context Switching
You'll need to write additional stories for handling context switching when the user's interjection requires multiple conversation turns. If you have two distinct conversational flows and want the user to be able to switch between the flows, you will need to create stories that specify how the switching will occur and how the context is maintained.
For example, if you want to switch context upon a user ask and then return to the original flow after that ask is complete:
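An illustrative transcript of such a multi-turn interjection (wording and fee invented for this example):

```
User: I'd like to pay my credit card bill.
Bot:  What amount would you like to pay?
User: Wait, how much does an international transfer cost?
Bot:  Do you mean a transfer within Europe or outside it?
User: Outside Europe.
Bot:  Transfers outside Europe cost $15.
Bot:  Back to your bill: what amount would you like to pay?
```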
You will need to create a story that describes this context-switching interaction:
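A sketch of such a story (hypothetical names): the interjection takes several turns, after which the bot re-prompts for the original flow.

```yaml
stories:
- story: pay bill with transfer-fee interjection
  steps:
  - intent: pay_credit_card_bill
  - action: utter_ask_payment_amount
  - intent: ask_transfer_fee           # user switches context
  - action: utter_ask_transfer_region
  - intent: inform
  - action: utter_transfer_fee
  - action: utter_ask_payment_amount   # guide back to the original flow
```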
Managing Conversation Data Files
You can provide training data to Rasa Open Source as a single file or as a directory containing multiple files. When writing stories and rules, it's usually a good idea to create separate files based on the types of conversations being represented.
For example, you might create a file `chitchat.yml` for handling chitchat and a `faqs.yml` file for FAQs.
Refer to our rasa-demo bot
for examples of story file management in complex assistants.
Using Interactive Learning
Interactive learning makes it easy to write stories by talking to your bot and providing feedback. This is a powerful way to explore what your bot can do, and the easiest way to fix any mistakes it makes. One advantage of machine learning-based dialogue is that when your bot doesn't know how to do something yet, you can just teach it!
In Rasa Open Source, you can run interactive learning in the command line with `rasa interactive`.
Rasa X provides a UI for interactive learning,
and you can use any user conversation as a starting point.
See Talk to Your Bot
in the Rasa X docs.
Command-line Interactive Learning
The CLI command
rasa interactive will start interactive learning on the command line.
If your bot has custom actions, make sure to also
run your action server in a separate terminal window.
In interactive mode, you will be asked to confirm every intent and action prediction before the bot proceeds. Here's an example:
You'll be able to see the conversation history and slot values at each step of the conversation.
If you type y to approve a prediction, the bot will continue. If you type n, you will be given the chance to correct the prediction before continuing:
At any point, you can use Ctrl-C to access the menu, allowing you to create more stories and export the data from the stories you've created so far.