Deploy Your Assistant
As emphasized in the guide to sharing your assistant, it’s important to give your prototype to users to test as early as possible. To do so, you need to deploy your assistant to one or more channels.
You should deploy your assistant to external text or voice channels once you’ve done a first round of testing using Rasa X’s built-in channels. An external channel introduces some additional complexity, which is easier to troubleshoot and test once you have some idea of how your assistant behaves.
The most successful product teams using Rasa apply software engineering best practices to developing their assistants, including:
Versioning training data and action code in Git
Reviewing changes before they go into production
Running automated tests on proposed changes
If you already have a running Rasa Open Source deployment and you just want to connect it to Rasa X, see Connect an Existing Deployment.
Integrated Version Control encourages best practices by integrating with your existing development workflows. It lets you automatically synchronize training data with your Git repository, annotate new data, and push those changes back with Git.
In order to connect Rasa X with your assistant’s Git repository, you will need two things:
A Rasa X instance running in server mode (local mode does not support Integrated Version Control)
A Git repository containing a project in the default Rasa Open Source project layout
When you connect your remote Git repository to Rasa X, it will overwrite the training data currently stored in Rasa X. Please use a fresh Rasa X instance, or export your training data first if you want to keep it.
For Rasa X to correctly visualize and modify your AI assistant’s data, your project needs to follow the default Rasa Open Source project layout created by rasa init:
```
.
├── config.yml
├── ...
├── data
│   ├── nlu.md
│   ├── ...
│   └── stories.md
└── domain.yml
```
You can connect your Git repository via the Rasa X UI.
If you prefer to provide your own SSH keys, please see Integrated Version Control: Connecting a Repository via the API.
To connect your Git repository, click on the branch icon and then click Connect to a repository.
Configure the repository connection:
SSH URL: Rasa X will clone the repository using the given SSH URL. Cloning via HTTP is currently not supported.
Target branch: The target branch is the branch that Rasa X will:
use to show the initial data
branch off from when you make new changes
return to after you discard or push changes
By default, users can choose whether to push their changes directly to the target branch or to a new branch. If you want to disable pushing changes directly to the target branch, select Require users to add changes to a new branch.
Add the provided public SSH key to your Git server. This allows Rasa X to authenticate with the Git server using its private SSH key. Please make sure to only give the key access to one specific repository instead of giving it global access to all of your Git repositories. For instructions specific to your Git platform, see below.
Add the generated public SSH key as a Deploy key to your GitHub repository. See the GitHub docs for more information on how to do so.
Add the generated public SSH key as a Deploy key to your GitLab repository. See the GitLab docs for more information on how to do so.
Add the generated public SSH key as an Access key to your Bitbucket repository. See the Bitbucket docs for more information on how to do so.
Once you’ve added the public SSH key to your Git server, hit the Verify Connection button. Rasa X will now show that it is connected to your repository.
When improving your assistant, you’ll make different kinds of fixes to your bot. To automate the testing and integration of these improvements into your deployed assistant, you should set up a CI/CD (Continuous Integration/Continuous Deployment) pipeline on your connected git repository.
For example, you could add a step in your pipeline that pushes a newly trained model to Rasa X every time a change is merged into your master branch. For more information on setting up a CI/CD pipeline, check out the Rasa Open Source user guide on CI/CD.
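As a rough sketch, a GitHub Actions workflow that trains and uploads a model on every merge to master might look like the following. The workflow filename, the secret names, and the upload endpoint path are assumptions to adapt to your own setup:

```yaml
# .github/workflows/model-ci.yml  (illustrative sketch, not a definitive pipeline)
name: Train and upload model
on:
  push:
    branches: [master]
jobs:
  train-and-upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Rasa
        run: pip install rasa
      - name: Train model
        run: rasa train
      - name: Upload model to Rasa X
        # RASA_X_HOST and RASA_X_API_TOKEN are hypothetical repository secrets;
        # the API path may differ between Rasa X versions.
        run: |
          curl -k -F "model=@models/$(ls models | tail -n 1)" \
            "https://${{ secrets.RASA_X_HOST }}/api/projects/default/models?api_token=${{ secrets.RASA_X_API_TOKEN }}"
```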
Here are a few examples of CI/CD pipelines in GitHub Actions to get you started:
The rasa-demo CI/CD pipeline includes the following steps; some are conditional:
Lints and type-tests the action code
Validates the data
Runs NLU cross-validation
Trains a model
Tests the model on test conversations
Builds and tags an action image
Pushes the action image to a private Google Cloud Container Registry
These two examples include some of the steps above, but with fewer conditions:
To deploy your assistant using Rasa X, you need to complete the following steps:
If you already have a running Rasa Open Source deployment and you just want to connect it to Rasa X, see the guide here.
You can upload a model to Rasa X either using the UI or using the HTTP API.
If you have Integrated Version Control set up, you can also train a model from within Rasa X.
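For example, a model archive can be uploaded with a single HTTP request; the host, API token, and model filename below are placeholders, and the endpoint path may differ between Rasa X versions:

```
curl -k -F "model=@models/my-model.tar.gz" \
  "https://<rasa-x-host>/api/projects/default/models?api_token=<api-token>"
```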
To deploy your trained model, you need to tag it as active (Rasa X) or production (Rasa Enterprise).
Once you have a model available on the Models screen, you can either tag the model using the UI or tag it via the HTTP API.
Note that for both Rasa X and Rasa Enterprise, you need to use the production tag when tagging via the HTTP API.
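A sketch of tagging via the HTTP API follows; the host, token, and model name are placeholders, and you should check your Rasa X API reference for the exact path:

```
curl -k -X PUT \
  "https://<rasa-x-host>/api/projects/default/models/<model-name>/tags/production?api_token=<api-token>"
```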
In the long term, you should consider automating training, uploading, and tagging a model as part of a CI/CD pipeline.
If you have written any custom actions, you need to connect your action server to your Rasa X deployment. Follow the instructions for the installation method you used:
For details on setting up external channels, see the Rasa Open Source docs.
Once set up, connecting an external channel is a matter of adding the credentials in the right place for the installation method you used:
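For example, a Telegram channel is configured by adding an entry like the following to your credentials; the token, bot username, and host are placeholders:

```yaml
telegram:
  access_token: "<your bot's access token>"
  verify: "<your bot's username>"
  # Depending on your deployment, the webhook path may include a /core/ prefix.
  webhook_url: "https://<your-rasa-x-host>/webhooks/telegram/webhook"
```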
If you’re already running a Rasa Open Source deployment, you can connect it to Rasa X to annotate conversations, even if it’s running on a separate system.
To achieve this, you have two options:
Import historical conversations from your Rasa Open Source deployment into Rasa X
Automatically forward all new incoming messages directly from Rasa Open Source to Rasa X
In order to annotate conversations users have already had with a live Rasa Open Source deployment, you can import conversations from your current tracker store into Rasa X.
Open the .env file in your Rasa X installation directory and make a note of the entry for RABBITMQ_PASSWORD. If you’ve deployed Rasa X in a cluster using Helm, Kubernetes or OpenShift, check out Accessing Secrets instead.
Enable your RabbitMQ service to receive events from a different server by following Exposing the RabbitMQ Port. Make a note of the exposed IP address.
Now go to your Rasa Open Source deployment and note down the following values used in your Rasa tracker store: the database username, the database password and the name of the database.
On the same machine in a different directory, or on a different machine, create a new endpoints.yml file (alternatively, you can go to your project directory in your Rasa Open Source deployment and modify the existing endpoints.yml file there).
Have a look at Exposing the Database Port to make your database accessible from the outside world if you’re working on a different machine than your Rasa Open Source deployment.
In this file, create two sections: the tracker store from which to import conversations, and the event broker that’s used to move the conversations from your Rasa Open Source deployment to Rasa X. You will forward historical events from your Rasa Open Source deployment to the Rasa X event broker. RabbitMQ is used as the message broker for Rasa X, so we will use the pika event broker configuration. Create the following two entries in endpoints.yml:
```yaml
tracker_store:
  type: sql  # other databases are supported
  dialect: <if using SQL, your Rasa Open Source deployment's SQL dialect, e.g. "postgresql">
  url: <URL of the database service in your Rasa Open Source deployment>
  username: <tracker database username in your Rasa Open Source deployment>
  password: <tracker database password in your Rasa Open Source deployment>
  db: <name of the tracker database in your Rasa Open Source deployment>

event_broker:
  type: "pika"
  url: <URL of the exposed RabbitMQ service in your Rasa X deployment>
  port: 5672  # change if your RabbitMQ service is exposed on a different port
  username: "user"  # if customized, value of the RABBITMQ_USERNAME environment variable
  password: <value of RABBITMQ_PASSWORD>
  queues:
    - "rasa_production_events"  # if using a custom queue, value of the RABBITMQ_QUEUE environment variable
```
You can use Rasa’s rasa export command-line tool to export conversations from the tracker database to an event broker. From there, your running Rasa X deployment will consume the events and save them to your Rasa X database.
Head to the same directory as your endpoints.yml file from above. To export all conversations contained in the tracker database, simply run:

```
rasa export
```

The rasa export command allows you to specify a subset of conversation IDs to export, or restrict the time range of events. Run rasa export --help for an overview of options, or check out the Rasa CLI docs on rasa export.
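For instance, to export only certain conversations within a given time window, you could run something like the following; the conversation IDs and Unix timestamps are placeholders:

```
rasa export --endpoints endpoints.yml \
  --conversation-ids user_1,user_2 \
  --minimum-timestamp 1578395521 \
  --maximum-timestamp 1578913921
```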
You can now log in to your Rasa X deployment and view the migrated conversations.
This configuration allows Rasa X to monitor conversations taking place in the live Rasa Open Source deployment environment without modifying the deployment architecture.
These instructions assume Rasa Open Source and Rasa X are deployed and running on two separate systems.
Open the .env file in your Rasa X installation directory and make a note of the values of RABBITMQ_PASSWORD and RABBITMQ_QUEUE. If you’ve deployed Rasa X in a cluster using Helm, Kubernetes or OpenShift, check out Accessing Secrets. We will need these when configuring the event broker on the Rasa Open Source server.
Read the section on Exposing the RabbitMQ Port in order to enable your RabbitMQ service to accept events.
You need to configure your Rasa Open Source deployment to forward messages to the Rasa X event broker. The configuration is done in the endpoints.yml file, which can be found in the Rasa project directory. Configure the event_broker section as described in Rasa Open Source Configuration (you do not need to modify the tracker store section).
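As a sketch, the event_broker entry on the Rasa Open Source side might look like this; the host and credentials are placeholders for your own deployment’s values:

```yaml
event_broker:
  type: "pika"
  url: "rasax.example.com"  # hostname or IP of your Rasa X deployment (placeholder)
  port: 5672
  username: "user"  # value of RABBITMQ_USERNAME, if customized
  password: "<value of RABBITMQ_PASSWORD>"
  queues:
    - "rasa_production_events"  # value of RABBITMQ_QUEUE, if customized
```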
Once you’ve updated the endpoints.yml file, restart the Rasa server.
If verbose logging is enabled with the --debug option, you should see the following messages in the Rasa logs, indicating that messages are being forwarded to Rasa X:

```
rasa-production_1 | 2020-01-13 13:18:17 DEBUG rasa.core.brokers.pika - RabbitMQ connection to 'rasax.mydomain.com' was established.
rasa-production_1 | 2020-01-13 13:20:52 DEBUG rasa.core.brokers.pika - Published Pika events to queue 'rasa_production_events' on host 'rasax.mydomain.com':
```
Before RabbitMQ can accept external events, you need to expose the port to the outside world. Depending on your chosen deployment method, read one of the following sections.
Follow these instructions if you’ve deployed Rasa X using the quick-install script, or if you’ve taken the Helm Chart deployment route.
First, find the namespace your deployment is running under. To list the available namespaces, run:
```
kubectl get namespaces
```
Now, find the name of your RabbitMQ service. Run the following command, looking for an entry containing -rabbit in its name:

```
kubectl get services -n <your namespace>
```
Expose the RabbitMQ service as a load balancer on port 5672 by running:
```
kubectl expose service <rabbit service> -n <namespace> --name rabbit-mq-load-balancer --type LoadBalancer --port 5672 --target-port 5672
```
Make sure the port was exposed by running the following command, checking for an external IP address assigned to the rabbit-mq-load-balancer service:

```
kubectl get services -n <your namespace>
```
Use the IP address listed under EXTERNAL-IP in the event_broker section of your endpoints.yml. Make sure that your cluster or VM firewall settings allow traffic to port 5672.
Once you’re done importing historical conversations, or you no longer want to stream events to your Rasa X deployment, you can remove the RabbitMQ load balancer by running:
```
kubectl delete service rabbit-mq-load-balancer -n <namespace>
```
If you’ve deployed Rasa X using Docker Compose, add the following block to your docker-compose.override.yml file:

```yaml
version: '3.4'
services:
  rabbit:
    ports:
      - "5672:5672"
```
Now, restart the RabbitMQ service by running:

```
sudo docker-compose restart rabbit
```

Make sure you’ve allowed traffic to port 5672 in your VM firewall.
If you’re following the steps on importing historical conversations on a machine that’s different from where you’ve deployed Rasa Open Source, you need to make the database accessible from the outside world. How to open your database port depends highly on the way you deployed Rasa Open Source, but following similar steps as in Exposing the RabbitMQ Port is a good idea if you’ve deployed Rasa Open Source alongside a database container using Docker Compose or Kubernetes.