Stories
Stories are a type of training data used to train your assistant's dialogue management model. Stories can be used to train models that are able to generalize to unseen conversation paths.
Format
A story is a representation of a conversation between a user and an AI assistant, converted into a specific format where user inputs are expressed as intents (and entities when necessary), while the assistant's responses and actions are expressed as action names.
Here's an example of a dialogue in the Rasa story format:
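The intent and response names used below (greet, mood_unhappy, utter_greet, and so on) are illustrative placeholders, not names the documentation prescribes:
```yaml
stories:
- story: greet and cheer up user       # an arbitrary, human-readable story name
  steps:
  - intent: greet                      # user message, represented by its intent
  - action: utter_greet                # bot response from the domain
  - intent: mood_unhappy
  - action: utter_cheer_up
  - action: utter_did_that_help
  - intent: affirm
  - action: utter_happy
```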
User Messages
While writing stories, you do not have to deal with the specific contents of the messages that the users send. Instead, you can take advantage of the output from the NLU pipeline, which lets you use just the combination of an intent and entities to refer to all the possible messages the users can send to mean the same thing.
It is important to include the entities here as well because the policies learn to predict the next action based on a combination of both the intent and entities (you can, however, change this behavior using the use_entities attribute).
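As a sketch, a user step that carries an entity might look like this (the intent, entity, and response names are assumptions):
```yaml
stories:
- story: book in a specific city
  steps:
  - intent: request_booking        # intent predicted by the NLU pipeline
    entities:                      # entities extracted from the same message
    - city: "Berlin"
  - action: utter_confirm_city
```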
Actions
All actions executed by the bot, including responses, are listed in stories under the action key. You can use a response from your domain as an action by listing it as one in a story. Similarly, you can indicate that a story should call a custom action by including the name of the custom action from the actions list in your domain.
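For illustration, assuming utter_noworries is a response defined in the domain and action_check_opening_hours is a custom action listed under actions:
```yaml
stories:
- story: response and custom action
  steps:
  - intent: ask_opening_hours
  - action: action_check_opening_hours   # custom action from the actions list in the domain
  - intent: thankyou
  - action: utter_noworries              # response defined in the domain
```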
Events
During training, Rasa Open Source does not call the action server. This means that your assistant's dialogue management model doesn't know which events a custom action will return.
Because of this, events such as setting a slot or activating/deactivating a form have to be explicitly written out as part of the stories. For more info, see the documentation on Events.
Slot Events
Slot events are written under slot_was_set in a story. If this slot is set inside a custom action, add the slot_was_set event immediately following the custom action call. If your custom action resets a slot value to None, the corresponding event for that would look like this:
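The sketch below uses assumed names (action_fetch_profile, action_clear_profile, and the account_type slot) and shows both a slot being set right after a custom action and the reset-to-None case, which is written as null in YAML:
```yaml
stories:
- story: set and reset a slot from custom actions
  steps:
  - intent: check_account
  - action: action_fetch_profile       # custom action that sets the slot
  - slot_was_set:
    - account_type: premium            # value returned by the custom action
  - action: utter_account_info
  - intent: logout
  - action: action_clear_profile       # custom action that resets the slot
  - slot_was_set:
    - account_type: null               # corresponds to resetting the slot to None
```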
Form Events
There are three kinds of events that need to be kept in mind while dealing with forms in stories.
1. A form action event (e.g. - action: restaurant_form) is used in the beginning when first starting a form, and also while resuming the form action when the form is already active.
2. A form activation event (e.g. - active_loop: restaurant_form) is used right after the first form action event.
3. A form deactivation event (e.g. - active_loop: null) is used to deactivate the form.
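Putting these together, a minimal form story might look like the following sketch (restaurant_form and the surrounding intent and response names are assumptions):
```yaml
stories:
- story: restaurant form happy path
  steps:
  - intent: request_restaurant
  - action: restaurant_form        # form action event: starts the form
  - active_loop: restaurant_form   # form activation event: the form is now active
  - active_loop: null              # form deactivation event: all required slots are filled
  - action: utter_submit
```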
writing form stories
In order to get around the pitfall of forgetting to add events, the recommended way to write these stories is to use interactive learning.
Checkpoints and OR statements
Checkpoints and OR statements should be used with caution, if at all. There is usually a better way to achieve what you want by using Rules or the ResponseSelector.
Checkpoints
You can use checkpoints to modularize and simplify your training data. Checkpoints can be useful, but do not overuse them. Using lots of checkpoints can quickly make your example stories hard to understand, and will slow down training.
Here is an example of stories that contain checkpoints:
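The checkpoint name ask_feedback and the intents and responses below are illustrative; the second and third stories continue from the point where the first one ends:
```yaml
stories:
- story: beginning of a flow
  steps:
  - intent: greet
  - action: utter_greet
  - checkpoint: ask_feedback       # marks where other stories can continue

- story: handle positive feedback
  steps:
  - checkpoint: ask_feedback       # continues from the checkpoint above
  - intent: affirm
  - action: utter_thanks

- story: handle negative feedback
  steps:
  - checkpoint: ask_feedback
  - intent: deny
  - action: utter_ask_why
```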
note
Unlike regular stories, checkpoints are not restricted to starting with user input. As long as the checkpoint is inserted at the right points in the main stories, the first event can be a custom action or a response as well.
Or Statements
Another way to write shorter stories, or to handle multiple intents the same way, is to use an or statement. For example, say you ask the user to confirm something and you want to treat the affirm and thankyou intents in the same way. The story below will be converted into two stories at training time:
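A sketch, with utter_ask_confirm and action_handle_affirmation as assumed names:
```yaml
stories:
- story: story with an or step
  steps:
  - action: utter_ask_confirm
  - or:                              # matches either of the listed intents
    - intent: affirm
    - intent: thankyou
  - action: action_handle_affirmation
```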
or statements can be useful, but if you are using a lot of them, it is probably better to restructure your domain and/or intents. Overusing OR statements will slow down training.
Test Conversation Format
The test conversation format is a format that combines both NLU data and stories into a single file for evaluation. Read more about this format in Testing Your Assistant.
testing only
This format is only used for testing and cannot be used for training.
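A rough sketch of what such a test story can look like, with made-up texts, intents, and actions: each user step contains the full message text together with its intent label.
```yaml
stories:
- story: A happy path test
  steps:
  - user: |
      hello there!
    intent: greet                  # intent label for the user text above
  - action: utter_greet
  - user: |
      show me my account balance
    intent: check_balance
  - action: action_show_balance
```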
End-to-end Training
New in 2.2
End-to-end training is an experimental feature. We introduce experimental features to get feedback from our community, so we encourage you to try it out! However, the functionality might be changed or removed in the future. If you have feedback (positive or negative) please share it with us on the Rasa Forum.
With end-to-end training, you do not have to deal with the specific intents of the messages that are extracted by the NLU pipeline or with separate utter_ responses in the domain file. Instead, you can include the text of the user messages and/or bot responses directly in your stories. See the training data format for a detailed description of how to write end-to-end stories.
You can mix training data in the end-to-end format with labeled training data which has intents and actions specified: stories can have some steps defined by intents/actions and other steps defined directly by user or bot utterances.
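As a sketch with made-up texts and names, a story mixing labeled and end-to-end steps could look like this:
```yaml
stories:
- story: mixed end-to-end and labeled steps
  steps:
  - intent: greet                                 # labeled step: intent from the NLU data
  - action: utter_greet                           # labeled step: response from the domain
  - user: "I'd like to order a pizza"             # end-to-end step: raw user text
  - bot: "Sure, which toppings would you like?"   # end-to-end step: raw bot response text
```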
We call it end-to-end training because policies can consume and predict actual text. For end-to-end user inputs, intents classified by the NLU pipeline and extracted entities are ignored.
Only Rule Policy and TED Policy allow end-to-end training.
- RulePolicy uses simple string matching during prediction. Namely, rules based on user text will only match if the user text strings inside your rules and input during prediction are identical.
- TEDPolicy passes user text through an additional neural network to create hidden representations of the text. In order to obtain robust performance you need to provide enough training stories to capture a variety of user texts for any end-to-end dialogue turn.
Rasa policies are trained for next utterance selection. The only difference to creating an utter_ response is how TEDPolicy featurizes bot utterances. In the case of an utter_ action, TEDPolicy sees only the name of the action, while if you provide the actual utterance using the bot key, TEDPolicy will featurize it as textual input, depending on the NLU configuration. This can help in the case of similar utterances in slightly different situations. However, it can also make things harder to learn, because the fact that different utterances have similar texts makes it easier for TEDPolicy to confuse these utterances.
End-to-end training requires significantly more parameters in TEDPolicy. Therefore, training an end-to-end model might require significant computational resources depending on how many end-to-end turns you have in your stories.