# Natural Language Generation (NLG) Servers
Retraining the bot just to change the text copy can be suboptimal for some workflows. That's why Rasa also allows you to outsource the response generation and separate it from the dialogue learning. The assistant will still learn to predict actions and to react to user input based on past dialogues, but the responses it sends back to the user will be generated outside of Rasa. When the assistant wants to send a message to the user, it will call an external HTTP server that you define.
## Responding to Requests

### Request Format
When your model predicts that your bot should send a response to the user, it will send a request to your server, giving you the information required to select or generate a response.
The body of the POST request sent to your NLG endpoint will be structured like this:
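An illustrative request body is shown below. The keys match the table that follows; the concrete values (response name, variation ID, tracker contents) are placeholders, and the tracker is abbreviated:

```json
{
  "response": "utter_greet",
  "id": "greet_variation_1",
  "arguments": {},
  "tracker": {
    "sender_id": "user_123",
    "slots": {},
    "latest_message": {},
    "events": []
  },
  "channel": {
    "name": "rest"
  }
}
```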
New in 3.6: An `id` field has been added to the request body. It contains the ID of the response variation, which you can use to compose or select the appropriate response variation on your NLG server.
Here is an overview of the high-level keys in the POST request:

| Key | Description |
|---|---|
| `response` | The name of the response predicted by Rasa. |
| `id` | An optional string containing the response variation ID; can be `null`. |
| `arguments` | Optional keyword arguments that can be provided by custom actions. |
| `tracker` | A dictionary containing the entire conversation history. |
| `channel` | The output channel this message will be sent to. |
You can use any or all of this information to decide how to generate your response.
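As a sketch, such a server can be built with nothing but Python's standard library. The response name `utter_greet`, its text, and the `TEMPLATES` store below are placeholder assumptions for illustration, not part of Rasa:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory store of responses, keyed by response name.
TEMPLATES = {
    "utter_greet": {"text": "Hello! How can I help you?"},
}

def generate_response(payload: dict) -> dict:
    """Select a response using the fields Rasa sends in the request body."""
    name = payload.get("response")
    # Fall back to an empty text response if the name is unknown.
    return TEMPLATES.get(name, {"text": ""})

class NlgHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON body Rasa sends with each request.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(generate_response(payload)).encode()
        # Reply with the generated response as JSON.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on the same address configured in endpoints.yml.
    HTTPServer(("localhost", 5055), NlgHandler).serve_forever()
```

A real implementation would typically inspect `tracker` and `channel` as well, for example to vary the response per output channel.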
### Response Format
The endpoint needs to respond with the generated response. Rasa will then send this response back to the user.
Below are the possible keys of a response and their (empty) types:
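As an illustration, an empty response skeleton might look like this (the exact set of rich-response keys is an assumption based on Rasa's response types):

```json
{
  "text": "",
  "buttons": [],
  "image": null,
  "elements": [],
  "attachments": [],
  "custom": {}
}
```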
You can choose to provide just text, or a combination of different types of rich responses.
Just like the responses defined in the domain file, a response needs to contain at least either `text` or `custom` to be valid.
### Calling Responses from Stories

If you use an external NLG service, you don't need to specify the responses under `responses` in the domain. However, you still need to add the response names to the `actions` list of the domain if you want to call them directly from your stories.
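For example, assuming a response named `utter_greet` (a placeholder name), the domain's `actions` list would include it like this:

```yaml
actions:
  - utter_greet
```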
## Configuration

To set up Rasa with your NLG server, the following steps are required:

1. Add the required configuration to your `endpoints.yml`:

```yaml
nlg:
  url: http://localhost:5055/nlg
```

If your NLG server is protected and Rasa will need authentication to access it, you can configure authentication in the endpoints:

```yaml
nlg:
  url: http://localhost:5055/nlg
  # You can also specify additional parameters, if you need them:
  # headers:
  #   my-custom-header: value
  # token: "my_authentication_token"  # will be passed as a GET parameter
  # basic_auth:
  #   username: user
  #   password: pass
```

2. To start the Rasa server using your NLG backend, add the `--endpoints` flag, e.g.:

```shell
rasa run -m models --endpoints endpoints.yml
```