Event Brokers
An event broker allows you to connect your running assistant to other services that process the data coming in from conversations. For example, you could connect your live assistant to Rasa X to review and annotate conversations or forward messages to an external analytics service. The event broker publishes messages to a message streaming service, also known as a message broker, to forward Rasa Events from the Rasa server to other services.
Format
All events are streamed to the broker as serialized dictionaries every time
the tracker updates its state. An example event emitted from the default
tracker looks like this:
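A sketch of one such serialized event, here assuming a bot utterance (the field values are illustrative placeholders):

```json
{
  "sender_id": "default",
  "timestamp": 1528402837.617099,
  "event": "bot",
  "text": "what the bot said",
  "data": {
    "attachment": null,
    "buttons": null
  },
  "metadata": {}
}
```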
The event field takes the event's type_name (for more on event types, check out the events docs).
Pika Event Broker
The example implementation we're going to show you here uses Pika, the Python client library for RabbitMQ.
Adding a Pika Event Broker Using the Endpoint Configuration
You can instruct Rasa to stream all events to your Pika event broker by adding an event_broker section to your endpoints.yml:
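A sketch of such a section, assuming a RabbitMQ server on localhost and a single queue named rasa_events (the credentials and queue name are placeholders):

```yaml
event_broker:
  type: pika
  url: localhost
  username: username
  password: password
  queues:
    - rasa_events
```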
Rasa will automatically start streaming events when you restart the Rasa server.
Adding a Pika Event Broker in Python
Here is how you add it using Python code:
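A minimal sketch, assuming Rasa Open Source 2.x import paths, a local RabbitMQ server, and an already loaded domain object (credentials and queue name are placeholders):

```python
from rasa.core.brokers.pika import PikaEventBroker
from rasa.core.tracker_store import InMemoryTrackerStore

# Create a broker that publishes events to a local RabbitMQ server
# (placeholder host, credentials, and queue name).
pika_broker = PikaEventBroker(
    "localhost",
    "username",
    "password",
    queues=["rasa_events"],
)

# Attach the broker to a tracker store so every tracker update is published.
# `domain` is assumed to be a previously loaded Domain object.
tracker_store = InMemoryTrackerStore(domain=domain, event_broker=pika_broker)
```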
Implementing a Pika Event Consumer
You need to have a RabbitMQ server running, as well as another application that consumes the events. This consumer needs to implement Pika's start_consuming() method with a callback action. Here's a simple example:
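A simple sketch of such a consumer, assuming pika >= 1.0 and a queue named rasa_events on a local RabbitMQ server:

```python
import json

import pika


def _callback(channel, method, properties, body) -> None:
    # Each message body is a serialized Rasa event dictionary.
    event = json.loads(body)
    print(f"Received event: {event}")


# Connect to the local RabbitMQ server and open a channel.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Subscribe to the queue the Rasa event broker publishes to and block,
# invoking the callback for every incoming event.
channel.basic_consume("rasa_events", _callback, auto_ack=True)
channel.start_consuming()
```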
Kafka Event Broker
While RabbitMQ is the default event broker, it is possible to use Kafka as the main broker for your events. Rasa uses the kafka-python library, a Kafka client written in Python. You will need a running Kafka server.
Partition Key
New in 2.5
The partition_by_sender parameter was added.
Rasa Open Source's Kafka producer can optionally be configured to partition messages by conversation ID. To enable this, set partition_by_sender to True in the endpoints.yml file. By default, this parameter is set to False and the producer will randomly assign a partition to each message.
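A sketch of the relevant entry, assuming a local Kafka broker and a topic named rasa_events (both placeholders):

```yaml
event_broker:
  type: kafka
  url: localhost
  topic: rasa_events
  security_protocol: PLAINTEXT
  partition_by_sender: True
```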
Authentication and Authorization
Rasa's Kafka producer accepts the following types of security protocols: SASL_PLAINTEXT, SSL, PLAINTEXT and SASL_SSL.
For development environments, or if the broker servers and clients are located on the same machine, you can use simple authentication with SASL_PLAINTEXT or PLAINTEXT. With these protocols, the credentials and messages exchanged between the clients and servers are sent in plaintext. This is not the most secure approach, but since it's simple to configure, it is useful for simple cluster configurations.
The SASL_PLAINTEXT protocol requires the username and password previously configured in the broker server.
If the clients or the brokers in the Kafka cluster are located on different machines, it's important to use the SSL or SASL_SSL protocol to ensure encryption of data and client authentication. After generating valid certificates for the brokers and the clients, the path to the certificate and key generated for the producer must be provided as arguments, as well as the CA's root certificate.
When using the SASL_PLAINTEXT and SASL_SSL protocols, the sasl_mechanism can be optionally configured and is set to PLAIN by default. Valid values for sasl_mechanism are: PLAIN, GSSAPI, OAUTHBEARER, SCRAM-SHA-256, and SCRAM-SHA-512.
If GSSAPI is used for the sasl_mechanism, you will need to additionally install python-gssapi and the necessary C library Kerberos dependencies.
If the ssl_check_hostname parameter is enabled, the clients will verify whether the broker's hostname matches the certificate. It's used on client connections and inter-broker connections to prevent man-in-the-middle attacks.
Adding a Kafka Event Broker Using the Endpoint Configuration
You can instruct Rasa to stream all events to your Kafka event broker by adding an event_broker section to your endpoints.yml.
Using the SASL_PLAINTEXT protocol, the endpoints file must have the following entries:
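A sketch of such a configuration; the host, topic name, and SASL credentials are placeholders:

```yaml
event_broker:
  type: kafka
  url: localhost
  topic: rasa_events
  security_protocol: SASL_PLAINTEXT
  sasl_username: username
  sasl_password: password
  sasl_mechanism: PLAIN
```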
Using the PLAINTEXT protocol, the endpoints file must have the following entries:
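For example, a sketch with placeholder host, topic name, and client ID:

```yaml
event_broker:
  type: kafka
  url: localhost
  topic: rasa_events
  security_protocol: PLAINTEXT
  client_id: kafka-python-rasa
```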
If using the SSL protocol, the endpoints file should look like:
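A sketch with placeholder host, topic name, and certificate paths:

```yaml
event_broker:
  type: kafka
  url: localhost
  topic: rasa_events
  security_protocol: SSL
  ssl_cafile: CARoot.pem
  ssl_certfile: certificate.pem
  ssl_keyfile: key.pem
  ssl_check_hostname: True
```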
If using the SASL_SSL protocol, the endpoints file should look like:
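A sketch combining the SASL credentials with the SSL certificate settings; all values are placeholders:

```yaml
event_broker:
  type: kafka
  url: localhost
  topic: rasa_events
  security_protocol: SASL_SSL
  sasl_username: username
  sasl_password: password
  sasl_mechanism: PLAIN
  ssl_cafile: CARoot.pem
  ssl_certfile: certificate.pem
  ssl_keyfile: key.pem
  ssl_check_hostname: True
```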
SQL Event Broker
It is possible to use an SQL database as an event broker. Connections to databases are established using SQLAlchemy, a Python library which can interact with many different types of SQL databases, such as SQLite, PostgreSQL and more. The default Rasa installation allows connections to SQLite and PostgreSQL databases. To see other options, please see the SQLAlchemy documentation on SQL dialects.
Adding a SQL Event Broker Using the Endpoint Configuration
To instruct Rasa to save all events to your SQL event broker, add an event_broker section to your endpoints.yml. For example, a valid SQLite configuration could look like this:
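A sketch, assuming events are written to a local SQLite file named events.db (the file name is a placeholder):

```yaml
event_broker:
  type: SQL
  dialect: sqlite
  db: events.db
```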
PostgreSQL databases can be used as well:
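For example, a sketch with placeholder connection details and credentials:

```yaml
event_broker:
  type: SQL
  dialect: postgresql
  url: 127.0.0.1
  port: 5432
  username: myuser
  password: mypassword
  db: mydatabase
```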
With this configuration applied, Rasa will create a table called events in the database, where all events will be added.