
Event Brokers

An event broker allows you to connect your running assistant to other services that process the data coming in from conversations. For example, you could connect your live assistant to Rasa X to review and annotate conversations or forward messages to an external analytics service. The event broker publishes messages to a message streaming service, also known as a message broker, to forward Rasa Events from the Rasa server to other services.

Format

All events are streamed to the broker as serialized dictionaries every time the tracker updates its state. An example event emitted from the default tracker looks like this:

{
  "sender_id": "default",
  "timestamp": 1528402837.617099,
  "event": "bot",
  "text": "what your bot said",
  "data": "some data about e.g. attachments",
  "metadata": {
    "a key": "a value"
  }
}

The event field takes the event's type_name (for more on event types, check out the events docs).
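
For example, a downstream consumer can dispatch on this field. Here is a minimal, hypothetical sketch (the handler logic is an illustrative assumption, not part of Rasa):

import json

def handle_event(raw: bytes) -> None:
    # Dispatch a serialized Rasa event on its "event" (type_name) field
    event = json.loads(raw)
    if event.get('event') == 'bot':      # BotUttered
        print('Bot said: {}'.format(event.get('text')))
    elif event.get('event') == 'user':   # UserUttered
        print('User said: {}'.format(event.get('text')))
    else:
        print('Other event type: {}'.format(event.get('event')))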

Pika Event Broker

The example implementation we're going to show you here uses Pika, the Python client library for RabbitMQ.

Adding a Pika Event Broker Using the Endpoint Configuration

You can instruct Rasa to stream all events to your Pika event broker by adding an event_broker section to your endpoints.yml:

event_broker:
  type: pika
  url: localhost
  username: username
  password: password
  queues:
    - queue-1
    # you may supply more than one queue to publish to
    # - queue-2
    # - queue-3
  exchange_name: exchange

Rasa will automatically start streaming events when you restart the Rasa server.

Adding a Pika Event Broker in Python

Here is how you add it using Python code:

import asyncio

from rasa.core.brokers.pika import PikaEventBroker
from rasa.core.tracker_store import InMemoryTrackerStore

event_loop = asyncio.get_event_loop()
pika_broker = PikaEventBroker(
    'localhost',
    'username',
    'password',
    queues=['rasa_events'],
    event_loop=event_loop,
)
event_loop.run_until_complete(pika_broker.connect())

# `domain` is the Domain object of your loaded model
tracker_store = InMemoryTrackerStore(domain=domain, event_broker=pika_broker)
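
Once connected, the broker's publish() method sends a serialized event dictionary to the configured queues. A minimal usage sketch (the event payload here is an illustrative assumption):

test_event = {
    'sender_id': 'default',
    'event': 'bot',
    'text': 'what your bot said',
}
# publish one serialized event to the configured queues
pika_broker.publish(test_event)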

Implementing a Pika Event Consumer

You need to have a RabbitMQ server running, as well as another application that consumes the events. This consumer needs to register a callback via basic_consume() and then call Pika's start_consuming() method. Here's a simple example:

import json

import pika


def _callback(ch, method, properties, body):
    # Do something useful with your incoming message body here, e.g.
    # saving it to a database
    print('Received event {}'.format(json.loads(body)))


if __name__ == '__main__':
    # RabbitMQ credentials with username and password
    credentials = pika.PlainCredentials('username', 'password')

    # Pika connection to the RabbitMQ host - typically 'rabbit' in a
    # docker environment, or 'localhost' in a local environment
    connection = pika.BlockingConnection(
        pika.ConnectionParameters('rabbit', credentials=credentials))

    # start consumption of channel
    channel = connection.channel()
    channel.basic_consume(queue='rasa_events',
                          on_message_callback=_callback,
                          auto_ack=True)
    channel.start_consuming()
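
Note that auto_ack=True removes a message from the queue as soon as it is delivered. If your consumer might fail while processing an event, acknowledge messages manually after processing instead, so that RabbitMQ can redeliver them.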

Kafka Event Broker

While RabbitMQ is the default event broker, it is possible to use Kafka as the main broker for your events. Rasa uses the kafka-python library, a Kafka client written in Python. You will need a running Kafka server.
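
For the consuming side, here is a minimal sketch (not part of Rasa itself) that reads events from the topic using kafka-python. The broker address localhost:9092 and the topic name topic are assumptions matching the configuration examples below:

import json

from kafka import KafkaConsumer

# consume serialized Rasa events from the (assumed) topic name 'topic'
consumer = KafkaConsumer(
    'topic',
    bootstrap_servers='localhost:9092',
    value_deserializer=lambda v: json.loads(v.decode('utf-8')),
)
for message in consumer:
    # each message value is one serialized Rasa event dictionary
    print('Received event {}'.format(message.value))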

Partition Key

New in 2.5

The partition_by_sender parameter was added.

Rasa Open Source's Kafka producer can optionally be configured to partition messages by conversation ID. To enable this, set partition_by_sender to True in your endpoints.yml file. By default, this parameter is set to False and the producer will randomly assign a partition to each message.

endpoints.yml
event_broker:
  type: kafka
  partition_by_sender: True
  security_protocol: PLAINTEXT
  topic: topic
  url: localhost
  client_id: kafka-python-rasa
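
To illustrate what partitioning by conversation ID means on the Kafka side, here is a hedged sketch using kafka-python directly; the broker address and topic name are assumptions. When a message key is set, Kafka's default partitioner hashes it, so every event with the same sender_id lands on the same partition:

import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda v: json.dumps(v).encode('utf-8'),
)
event = {'sender_id': 'default', 'event': 'bot', 'text': 'hello'}
# keyed messages are hashed to a partition, so all events from one
# conversation stay in order on the same partition
producer.send('topic', key=event['sender_id'].encode('utf-8'), value=event)
producer.flush()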

Authentication and Authorization

Rasa's Kafka producer accepts the following security protocols: PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL.

For development environments, or if the broker servers and clients are located on the same machine, you can use simple authentication with SASL_PLAINTEXT or PLAINTEXT. With these protocols, the credentials and messages exchanged between clients and servers are sent in plaintext. This is not the most secure approach, but since it is simple to configure, it is useful for simple cluster setups. The SASL_PLAINTEXT protocol requires the username and password previously configured on the broker server.

If the clients or the brokers in the Kafka cluster are located on different machines, it's important to use the SSL or SASL_SSL protocol to ensure encryption of data and client authentication. After generating valid certificates for the brokers and the clients, the paths to the certificate and key generated for the producer must be provided as arguments, as well as the CA's root certificate.

When using the SASL_PLAINTEXT or SASL_SSL protocol, the sasl_mechanism can optionally be configured; it is set to PLAIN by default. Valid values for sasl_mechanism are: PLAIN, GSSAPI, OAUTHBEARER, SCRAM-SHA-256, and SCRAM-SHA-512.

If GSSAPI is used for the sasl_mechanism, you will need to additionally install python-gssapi and the necessary C library Kerberos dependencies.

If the ssl_check_hostname parameter is enabled, the clients will verify that the broker's hostname matches its certificate. This check is used on client connections and inter-broker connections to prevent man-in-the-middle attacks.

Adding a Kafka Event Broker Using the Endpoint Configuration

You can instruct Rasa to stream all events to your Kafka event broker by adding an event_broker section to your endpoints.yml.

Using the SASL_PLAINTEXT protocol, the endpoints file must have the following entries:

event_broker:
  type: kafka
  security_protocol: SASL_PLAINTEXT
  topic: topic
  url: localhost
  partition_by_sender: True
  sasl_username: username
  sasl_password: password
  sasl_mechanism: PLAIN

Using the PLAINTEXT protocol, the endpoints file must have the following entries:

event_broker:
  type: kafka
  security_protocol: PLAINTEXT
  topic: topic
  url: localhost
  client_id: kafka-python-rasa

If using the SSL protocol, the endpoints file should look like:

event_broker:
  type: kafka
  security_protocol: SSL
  topic: topic
  url: localhost
  ssl_cafile: CARoot.pem
  ssl_certfile: certificate.pem
  ssl_keyfile: key.pem
  ssl_check_hostname: True

If using the SASL_SSL protocol, the endpoints file should look like:

event_broker:
  type: kafka
  security_protocol: SASL_SSL
  topic: topic
  url: localhost
  sasl_username: username
  sasl_password: password
  sasl_mechanism: PLAIN
  ssl_cafile: CARoot.pem
  ssl_certfile: certificate.pem
  ssl_keyfile: key.pem
  ssl_check_hostname: True

SQL Event Broker

It is possible to use an SQL database as an event broker. Connections to databases are established using SQLAlchemy, a Python library which can interact with many different types of SQL databases, such as SQLite, PostgreSQL and more. The default Rasa installation allows connections to SQLite and PostgreSQL databases. To see other options, please see the SQLAlchemy documentation on SQL dialects.

Adding a SQL Event Broker Using the Endpoint Configuration

To instruct Rasa to save all events to your SQL event broker, add an event_broker section to your endpoints.yml. For example, a valid SQLite configuration could look like this:

endpoints.yml
event_broker:
  type: SQL
  dialect: sqlite
  db: events.db

PostgreSQL databases can be used as well:

endpoints.yml
event_broker:
  type: SQL
  url: 127.0.0.1
  port: 5432
  dialect: postgresql
  username: myuser
  password: mypassword
  db: mydatabase

With this configuration applied, Rasa will create a table called events in the database, to which all events will be added.
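
To verify the broker is working, you can read the streamed events back out with SQLAlchemy. This is a minimal sketch assuming the SQLite configuration above and Rasa's SQL broker schema, where each row stores the serialized event JSON in a data column:

import json

import sqlalchemy as sa

engine = sa.create_engine('sqlite:///events.db')
with engine.connect() as connection:
    # each row holds one serialized event; `sender_id` and `data` are
    # assumed column names matching Rasa's SQL broker schema
    for row in connection.execute(sa.text('SELECT sender_id, data FROM events')):
        event = json.loads(row.data)
        print(row.sender_id, event['event'])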