Retrieval actions are designed to make it simpler to work with small talk and simple questions. For example, if your assistant can handle 100 FAQs and 50 different small talk intents, you can use a single retrieval action to cover all of these. From a dialogue perspective, these single-turn exchanges can all be treated equally, so this simplifies your stories.
Instead of having a lot of stories like:
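For instance, a sketch with hypothetical ask_name and ask_weather intents and matching responses, one story per intent:

```yaml
stories:
- story: ask name
  steps:
  - intent: ask_name
  - action: utter_ask_name

- story: ask weather
  steps:
  - intent: ask_weather
  - action: utter_ask_weather
```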
You can cover all of these with a single story where the above intents are grouped
under a common retrieval intent.
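A grouped version might look like this (a sketch, assuming the common retrieval intent is called chitchat):

```yaml
stories:
- story: chitchat
  steps:
  - intent: chitchat
  - action: utter_chitchat
```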
A retrieval action uses the output of a ResponseSelector pipeline component, which learns a retrieval model to predict the correct response from a list of candidate responses given a user message.
There is an in-depth blog post about how to use retrieval actions for handling single-turn interactions.
Configuring Retrieval Actions
Retrieval actions learn to select the correct response from a list of candidates. As with other intent data, you need to include examples of what your users will say in your training data file:
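For example, NLU training data for a chitchat retrieval intent could look like this (a sketch; the example utterances are illustrative):

```yaml
nlu:
- intent: chitchat/ask_name
  examples: |
    - What is your name?
    - May I know your name?
- intent: chitchat/ask_weather
  examples: |
    - What's the weather like today?
    - Is it sunny outside?
```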
First, all of these examples will be combined into a single chitchat
retrieval intent that NLU will predict. All retrieval intents have a suffix
added to them which identifies a particular response key for your assistant. In the
above example, ask_name and ask_weather are response keys. Response keys are separated from
the intent name by a / symbol.
Special meaning of /

As shown in the above examples, the / symbol is reserved as a delimiter to separate
retrieval intents from response text identifiers. Make sure not to use it in the
names of your non-retrieval intents.
Next, include responses for all retrieval intents in a training data file:
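For instance, responses for the chitchat sub-intents above could be defined like this (a sketch; the response texts are illustrative):

```yaml
responses:
  utter_chitchat/ask_name:
  - text: "My name is Sara."
  utter_chitchat/ask_weather:
  - text: "It is always sunny where I live."
```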
All such responses (e.g. utter_chitchat/ask_name) should start with the
utter_ prefix, followed by the retrieval intent name (chitchat)
and the associated response key (ask_name).
The response variations use the same format as the responses in the domain. This means you can also use buttons, images, and any other multimedia elements in your responses, and have multiple response variations for a response.
You need to include the ResponseSelector
component in your configuration. The component needs a tokenizer, a featurizer, and an
intent classifier to operate on the user message before it can predict a response, so the
ResponseSelector should be placed after these components in the
pipeline configuration. For example:
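A minimal pipeline sketch that satisfies this ordering (the component choices are illustrative; any compatible tokenizer, featurizer, and intent classifier would do):

```yaml
pipeline:
- name: WhitespaceTokenizer
- name: CountVectorsFeaturizer
- name: DIETClassifier
  epochs: 100
- name: ResponseSelector
  epochs: 100
```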
The retrieval model is trained separately as part of the NLU training pipeline
to select the correct response. The default configuration uses the user message text as input and the retrieval intent combined with the
response key suffix (e.g.
chitchat/ask_name) as the correct label for that user message. However, the
retrieval model can also be configured to use the text of the response message as the label by setting
use_text_as_label to true in the component's configuration:
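A sketch of that configuration, assuming the use_text_as_label parameter of the ResponseSelector:

```yaml
pipeline:
# ... tokenizer, featurizer, and intent classifier as before ...
- name: ResponseSelector
  use_text_as_label: true
```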
Rasa uses a naming convention to match a retrieval intent name to its corresponding retrieval action.
By this convention, the
utter_chitchat action is configured as a response to the
chitchat retrieval intent, and
utter_faq is a response to
faq. These actions do not need to be
added to the domain file.
The best way to ensure that the retrieval action is predicted after the chitchat intent is to use a rule. A rule will tell your bot to respond appropriately to a retrieval intent at any point in the conversation:
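Such a rule might look like this (a sketch, assuming a chitchat retrieval intent):

```yaml
rules:
- rule: respond to chitchat
  steps:
  - intent: chitchat
  - action: utter_chitchat
```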
However, you can also include this action in your stories. For example, if you want to repeat a question after handling unexpected chitchat:
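A sketch of such a story (signup and utter_ask_email are hypothetical names for the interrupted question):

```yaml
stories:
- story: repeat question after chitchat
  steps:
  - intent: signup             # hypothetical user intent
  - action: utter_ask_email    # bot asks a question
  - intent: chitchat           # user digresses with chitchat
  - action: utter_chitchat     # retrieval action handles the chitchat
  - action: utter_ask_email    # repeat the original question
```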
Multiple Retrieval Actions
If your assistant includes both FAQs and chitchat, it is possible to
separate these into two separate retrieval actions, for example having sub-intents like
chitchat/ask_weather and faq/returns_policy. Rasa supports adding multiple
retrieval actions, such as utter_chitchat and
utter_faq. To train separate retrieval models for each of the retrieval intents,
you need to include a separate
ResponseSelector component in the config for each retrieval intent:
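A sketch with one ResponseSelector per retrieval intent, assuming the retrieval_intent parameter:

```yaml
pipeline:
# ... tokenizer, featurizer, and intent classifier as before ...
- name: ResponseSelector
  retrieval_intent: faq
- name: ResponseSelector
  retrieval_intent: chitchat
```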
Alternatively, if you want the retrieval actions for both intents to share a single retrieval model,
specifying just one ResponseSelector component is enough.