This is documentation for Rasa & Rasa Pro Documentation v2.x, which is no longer actively maintained.
For up-to-date documentation, see the latest version (3.x).
Taking context into account is often key to providing a good user experience. This page is a guide to creating contextual conversation patterns.
In a contextual conversation, something beyond the previous step in the conversation plays a role in what should happen next. For example, if a user asks "How many?", it's not clear from the message alone what the user is asking about. In the context of the assistant saying, "You've got mail!", the response could be "You have five letters in your mailbox". In the context of a conversation about outstanding bills, the response could be, "You have three overdue bills". The assistant needs to know the previous action to choose the next action.
To create a context-aware conversational assistant, you need to define how the conversation history affects the next response.
For example, if a user asks the example concert bot how to get started, the bot responds differently based on whether or not they like music:
A conversation with a user who likes music:
A conversation with a user who doesn't like music:
Step-by-step Guide on Creating Contextual Conversation Patterns
1. Defining Slots
Slots are your assistant's memory. Slots store pieces of information that your
assistant needs to refer to later, and they can direct the flow of the conversation
based on slot_was_set events. There are different types of slots,
and each affects the conversation flow in its own way.
In the concert bot example, the
likes_music slot is a boolean slot. If it is true, the bot sends an intro message. If it is false, the bot sends a different message.
You define a slot and its type in the domain:
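For example, a boolean slot like likes_music can be declared under the slots key of the domain file. The influence_conversation flag is written out here for clarity; this is a minimal sketch, and the exact domain of the concert bot may contain more.

```yaml
slots:
  likes_music:
    type: bool
    # allow the slot value to affect which action is predicted next
    influence_conversation: true
```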
2. Creating Stories
Stories are examples of how conversations should go. In the example above, the concert bot responds differently for users who like music and users who don't because of these two stories:
These stories diverge based on the user's intent (affirm or deny). Based on
the user's intent, a custom action sets a slot that further directs the conversation.
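The two diverging stories could be sketched as follows. The intent, action, and response names here are illustrative placeholders; the concert bot's actual training data may use different names.

```yaml
stories:
- story: user likes music
  steps:
  - intent: how_to_get_started
  - action: utter_get_started
  - intent: affirm                  # user says they like music
  - action: action_set_music_preference
  - slot_was_set:
    - likes_music: true
  - action: utter_awesome

- story: user does not like music
  steps:
  - intent: how_to_get_started
  - action: utter_get_started
  - intent: deny                    # user says they don't like music
  - action: action_set_music_preference
  - slot_was_set:
    - likes_music: false
  - action: utter_goodbye
```

Because the slot_was_set event appears in the stories and the slot is a boolean, the slot's value itself helps the policies pick the next action.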
3. Configuring the TEDPolicy
In addition to adding stories to account for context, machine learning policies can help your model generalize to unseen conversation paths. It is important to understand that using machine-learning policies does not mean letting go of control over your assistant. If a rule-based policy is able to make a prediction, that prediction always has a higher policy priority and determines the next action.
The TEDPolicy is made to handle unexpected user behaviors. For example, in the conversation below (extracted from a conversation on Rasa X):
Here we can see the user has completed a few chitchat tasks first, and then ultimately asks how they can get started with Rasa X. The TEDPolicy correctly predicts that Rasa X should be explained to the user, and then also takes them down the getting started path, without asking all the qualifying questions first.
Since the machine-learning policy has generalized to this situation, you should add this story to your training data to continuously improve your bot and help the model generalize better in future. Rasa X is a tool that can help you improve your bot and make it more contextual.
Usually, only a certain amount of context is relevant to your assistant.
max_history is a hyperparameter for Rasa dialogue management policies
that controls how many steps in a dialogue the model looks at to decide which
action to take next.
In the story below, the user asks for help three times in a row. The first two times, the bot sends the same message, but the third time, it hands them off to a human.
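One way such a story could be written is sketched below; the intent, response, and handoff action names are placeholders, not the exact names from the docs.

```yaml
stories:
- story: escalate after repeated help requests
  steps:
  - intent: help
  - action: utter_help          # first help message
  - intent: help
  - action: utter_help          # same help message again
  - intent: help
  - action: action_human_handoff  # third request triggers handoff
```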
In order for the model to learn this pattern, it needs to know at least the previous
four steps, i.e. a max_history of four. If max_history were three, the model would not have
enough context to see that the user had already sent two help requests, and would never
predict the human handoff action.
You can set the
max_history by passing it to your policy's settings
in your config file, for example:
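For instance, max_history can be set per policy in config.yml. The policy choice and the value of four (matching the handoff story above) are illustrative:

```yaml
policies:
- name: MemoizationPolicy
  max_history: 4
- name: TEDPolicy
  max_history: 4
```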
You want to make sure
max_history is set high enough
to account for the most context your assistant will need to make an accurate
prediction about what to do next.
For more details see the docs on featurizers.
Here's a summary of the concepts you can apply to enable your assistant to have contextual conversations:
- Write stories for contextual conversations
- Use slots to store contextual information for later use
- Set the max_history for your policies appropriately for the amount of context your bot needs
- Use the TEDPolicy for generalization to unseen conversation paths