This is documentation for Rasa Open Source 2.x, which is no longer actively maintained. For up-to-date documentation, see the latest version (3.x).
Command Line Interface
The command line interface (CLI) gives you easy-to-remember commands for common tasks. This page describes the behavior of the commands and the parameters you can pass to them.
This command sets up a complete assistant for you with some example training data:
rasa init
It creates the following files:
.
├── actions
│   ├── __init__.py
│   └── actions.py
├── config.yml
├── credentials.yml
├── data
│   ├── nlu.yml
│   └── stories.yml
├── domain.yml
├── endpoints.yml
├── models
│   └── <timestamp>.tar.gz
└── tests
    └── test_stories.yml
It will ask you if you want to train an initial model using this data.
If you answer no, the models directory will be empty.
Any of the default CLI commands will expect this project setup, so this is the
best way to get started. You can run rasa train, rasa shell and rasa test
without any additional configuration.
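For example, if you want to create the project and train the initial model without being asked for confirmation, you can pass the --no-prompt flag:
rasa init --no-prompt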
The following command trains a Rasa Open Source model:
rasa train
If you have existing models in your directory (under models/ by default), only
the parts of your model that have changed will be re-trained. For example, if you edit
your NLU training data and nothing else, only the NLU part will be trained.
If you want to train an NLU or dialogue model individually, you can run
rasa train nlu or rasa train core. If you provide training data for only one of
these, rasa train will fall back to the corresponding command by default.
rasa train will store the trained model in the directory defined by --out, models/ by default.
The name of the model by default is <timestamp>.tar.gz. If you want to name your model differently,
you can specify the name using the --fixed-model-name flag.
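For example, the following command stores the trained model as models/my-assistant.tar.gz (the name my-assistant is only a placeholder):
rasa train --fixed-model-name my-assistant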
The following arguments can be used to configure the training process:
usage: rasa train [-h] [-v] [-vv] [--quiet] [--data DATA [DATA ...]]
This feature is experimental.
We introduce experimental features to get feedback from our community, so we encourage you to try it out!
However, the functionality might be changed or removed in the future.
If you have feedback (positive or negative) please share it with us on the Rasa Forum.
In order to improve the performance of an assistant, it's helpful to practice CDD (Conversation-Driven Development)
and add new training examples based on how your users have talked to your assistant. You can use rasa train --finetune
to initialize the pipeline with an already trained model and further finetune it on the
new training dataset that includes the additional training examples. This will help reduce the
training time of the new model.
By default, the command picks up the latest model in the models/ directory. If you have a specific model
which you want to improve, you may specify the path to this by
running rasa train --finetune <path to model to finetune>. Finetuning a model usually
requires fewer epochs to train machine learning components like DIETClassifier, ResponseSelector and TEDPolicy compared to training from scratch.
Either use a model configuration for finetuning
that defines fewer epochs than before, or use the --epoch-fraction flag.
--epoch-fraction will use a fraction of the epochs specified for each machine learning component
in the model configuration file. For example, if DIETClassifier is configured to use 100 epochs,
specifying --epoch-fraction 0.5 will only use 50 epochs for finetuning.
You can also finetune an NLU-only or dialogue management-only model by using
rasa train nlu --finetune and rasa train core --finetune respectively.
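For example, assuming models/<timestamp>.tar.gz is the model you want to improve, the following commands finetune it on the updated training data, the first one using half of the configured epochs:
rasa train --finetune models/<timestamp>.tar.gz --epoch-fraction 0.5
rasa train nlu --finetune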
To be able to finetune a model, the following conditions must be met:
The configuration supplied should be exactly the same as the
configuration used to train the model which is being finetuned.
The only parameter that you can change is epochs for the individual machine learning components and policies.
The set of labels (intents, actions, entities and slots) for which the base model is trained
should be exactly the same as the ones present in the training data used for finetuning. This
means that you cannot add new intent, action, entity or slot labels to your training data
during incremental training. You can still add new training examples for each of the existing
labels. If you have added/removed labels in the training data, the pipeline needs to be trained
from scratch.
The model to be finetuned was trained with a version of Rasa that is at least the MINIMUM_COMPATIBLE_VERSION of the currently installed Rasa version.
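As an illustration, a config.yml used for finetuning could look like the following sketch, where only the epochs value of DIETClassifier was lowered from 100 to 50 and everything else is identical to the base model's configuration; the pipeline components shown here are placeholders for your own configuration:
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  - name: DIETClassifier
    epochs: 50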
You can start an interactive learning session from the command line by running:
rasa interactive
This will first train a model and then start an interactive shell session.
You can then correct your assistant's predictions as you talk to it.
If UnexpecTEDIntentPolicy is
included in the pipeline, action_unlikely_intent
can be triggered at any conversation turn. When that happens, the following message will be displayed:
The bot wants to run 'action_unlikely_intent' to indicate that the last user message was unexpected
at this point in the conversation. Check out UnexpecTEDIntentPolicy docs to learn more.
As the message states, this is an indication that you have explored a conversation path
which is unexpected according to the current set of training stories and hence adding this
path to training stories is recommended. Like other bot actions, you can choose to confirm
or deny running this action.
New in 2.8
UnexpecTEDIntentPolicy was added.
If you provide a trained model using the --model argument, training is skipped
and that model will be loaded instead.
During interactive learning, Rasa will plot the current conversation
and a few similar conversations from the training data to help you
keep track of where you are. You can view the visualization
at http://localhost:5005/visualization.html
as soon as the session has started. This diagram can take some time to generate.
To skip the visualization, run rasa interactive --skip-visualization.
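For example, to start an interactive session with an existing model and without the conversation visualization:
rasa interactive --model models/<timestamp>.tar.gz --skip-visualization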
The following arguments can be used to configure the interactive learning session:
You can start a chat session with your assistant from the command line by running:
rasa shell
By default this will load up the latest trained model.
You can specify a different model to be loaded by using the --model flag.
If you start the shell with an NLU-only model, rasa shell will output the
intents and entities predicted for any message you enter.
If you have trained a combined Rasa model but only want to see what your model
extracts as intents and entities from text, you can use the command rasa shell nlu.
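For example, to load a specific model and inspect only its NLU predictions, you could run:
rasa shell nlu --model models/<timestamp>.tar.gz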
To increase the logging level for debugging, run:
rasa shell --debug
note
In order to see the typical greetings and/or session start behavior you might see
in an external channel, you will need to explicitly send /session_start
as the first message. Otherwise, the session start behavior will begin as described in
Session configuration.
The following arguments can be used to configure the command:
To start a server running your trained model, run:
rasa run
By default the Rasa server uses HTTP for its communication. To secure the communication with
SSL and run the server on HTTPS, you need to provide a valid certificate and the corresponding
private key file. You can specify these files as part of the rasa run command.
If you encrypted your keyfile with a password during creation,
you need to add the --ssl-password as well.
rasa run --ssl-certificate myssl.crt --ssl-keyfile myssl.key --ssl-password mypassword
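Another common example is exposing the server's HTTP API and allowing cross-origin requests, e.g. for a custom web frontend (the CORS pattern "*" is only an illustration):
rasa run --enable-api --cors "*"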
The following arguments can be used to configure your Rasa server:
usage: rasa run [-h] [-v] [-vv] [--quiet] [-m MODEL] [--log-file LOG_FILE]
To evaluate your assistant on your test stories, run:
rasa test
This will test your latest trained model on any end-to-end stories you have
defined in files with the test_ prefix.
If you want to use a different model, you can specify it using the --model flag.
If you want to evaluate the dialogue and NLU
models separately, you can use rasa test core and rasa test nlu respectively.
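For example, to evaluate only the dialogue model on your test stories, or only the NLU model on your NLU data:
rasa test core --stories tests/test_stories.yml
rasa test nlu --nlu data/nlu.yml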
To create a train-test split of your NLU training data, run:
rasa data split nlu
This will create an 80/20 split of train/test data by default.
You can specify the training data, the fraction, and the output directory using
the following arguments:
usage: rasa data split nlu [-h] [-v] [-vv] [--quiet] [-u NLU]
[--training-fraction TRAINING_FRACTION]
[--random-seed RANDOM_SEED] [--out OUT]
optional arguments:
-h, --help show this help message and exit
-u NLU, --nlu NLU File or folder containing your NLU data. (default:
data)
--training-fraction TRAINING_FRACTION
Percentage of the data which should be in the training
data. (default: 0.8)
--random-seed RANDOM_SEED
Seed to generate the same train/test split. (default:
None)
--out OUT Directory where the split files should be stored.
(default: train_test_split)
Python Logging Options:
-v, --verbose Be verbose. Sets logging level to INFO. (default:
None)
-vv, --debug Print lots of debugging statements. Sets logging level
to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default:
None)
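For example, to create a reproducible 70/30 split of the NLU data in the data directory:
rasa data split nlu --nlu data --training-fraction 0.7 --random-seed 42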
If you have NLG data for retrieval actions, it will be saved to separate files.
You can check your domain, NLU data, or story data for mistakes and inconsistencies.
To validate your data, run this command:
rasa data validate
The validator searches for errors in the data, e.g. two intents that have some
identical training examples.
The validator also checks if you have any stories where different assistant actions follow from the same
dialogue history. Conflicts between stories will prevent a model from learning the correct
pattern for a dialogue.
If you pass a max_history value to one or more policies in your config.yml file, provide the
smallest of those values in the validator command using the --max-history <max_history> flag.
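For example, if the smallest max_history across your policies is 5, you would run:
rasa data validate --max-history 5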
You can also validate only the story structure by running this command:
rasa data validate stories
note
Running rasa data validate does not test if your rules are consistent with your stories.
However, during training, the RulePolicy checks for conflicts between rules and stories. Any such conflict will abort training.
Also, if you use end-to-end stories, then this might not capture all conflicts. Specifically, if two user inputs
result in different tokens yet exactly the same featurization, then conflicting actions after these inputs
may exist but will not be reported by the tool.
To interrupt validation even for minor issues such as unused intents or responses, use the --fail-on-warnings flag.
check your story names
The rasa data validate stories command assumes that all your story names are unique!
You can use rasa data validate with additional arguments, e.g. to specify the location of your data and
domain files:
usage: rasa data validate [-h] [-v] [-vv] [--quiet]
[--max-history MAX_HISTORY] [-c CONFIG]
[--fail-on-warnings] [-d DOMAIN]
[--data DATA [DATA ...]]
{stories} ...
positional arguments:
{stories}
stories Checks for inconsistencies in the story files.
optional arguments:
-h, --help show this help message and exit
--max-history MAX_HISTORY
Number of turns taken into account for story structure
validation. (default: None)
-c CONFIG, --config CONFIG
The policy and NLU pipeline configuration of your bot.
(default: config.yml)
--fail-on-warnings Fail validation on warnings and errors. If omitted
only errors will result in a non zero exit code.
(default: False)
-d DOMAIN, --domain DOMAIN
Domain specification. This can be a single YAML file,
or a directory that contains several files with domain
specifications in it. The content of these files will
be read and merged together. (default: domain.yml)
--data DATA [DATA ...]
Paths to the files or directories containing Rasa
data. (default: data)
Python Logging Options:
-v, --verbose Be verbose. Sets logging level to INFO. (default:
None)
-vv, --debug Print lots of debugging statements. Sets logging level
to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default:
None)
New in 2.8
Story validation is no longer an experimental feature as of 2.8. The feature behaviour remains unchanged.
To export events from a tracker store using an event broker, run:
rasa export
You can specify the location of the endpoints file, the minimum and maximum
timestamps of events that should be published, as well as the conversation IDs that
should be published:
usage: rasa export [-h] [-v] [-vv] [--quiet] [--endpoints ENDPOINTS]
[--minimum-timestamp MINIMUM_TIMESTAMP]
[--maximum-timestamp MAXIMUM_TIMESTAMP]
[--conversation-ids CONVERSATION_IDS]
optional arguments:
-h, --help show this help message and exit
--endpoints ENDPOINTS
Endpoint configuration file specifying the tracker
store and event broker. (default: endpoints.yml)
--minimum-timestamp MINIMUM_TIMESTAMP
Minimum timestamp of events to be exported. The
constraint is applied in a 'greater than or equal'
comparison. (default: None)
--maximum-timestamp MAXIMUM_TIMESTAMP
Maximum timestamp of events to be exported. The
constraint is applied in a 'less than' comparison.
(default: None)
--conversation-ids CONVERSATION_IDS
Comma-separated list of conversation IDs to migrate.
If unset, all available conversation IDs will be
exported. (default: None)
Python Logging Options:
-v, --verbose Be verbose. Sets logging level to INFO. (default:
None)
-vv, --debug Print lots of debugging statements. Sets logging level
to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default:
None)
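For example, to export only two specific conversations within a given time window (the Unix timestamps and conversation IDs below are placeholders):
rasa export --minimum-timestamp 1622160000 --maximum-timestamp 1622246400 --conversation-ids user1,user2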
Import conversations into Rasa X
This command is most commonly used to import old conversations into Rasa X to annotate
them. Read more about importing conversations into Rasa X.
Rasa X is a tool for practicing Conversation-Driven Development.
You can find more information about it in the Rasa X documentation. You can start Rasa X in local mode by executing:
rasa x
To be able to start Rasa X you need to have Rasa X local mode installed
and you need to be in a Rasa project directory.
The following arguments are available for rasa x:
usage: rasa x [-h] [-v] [-vv] [--quiet] [-m MODEL] [--data DATA [DATA ...]]
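For example, to start Rasa X in local mode with a specific trained model:
rasa x -m models/<timestamp>.tar.gz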