Command Line Interface

The Rasa Pro command line interface (CLI) gives you easy-to-remember commands for common tasks. This page describes the behavior of the commands and the parameters you can pass to them.

Cheat Sheet

The following commands are relevant to all assistants built with Rasa.

Command | Effect
rasa init | Creates a new project with example training data, actions, and config files.
rasa train | Trains a model using your NLU data and stories, and saves the trained model in ./models.
rasa shell | Loads your trained model and lets you talk to your assistant on the command line.
rasa run | Starts a server with your trained model.
rasa run actions | Starts an action server using the Rasa SDK.
rasa test e2e | Runs end-to-end tests, fully integrated with the action server; serves as acceptance testing.
rasa data convert e2e | Converts sample conversation data into end-to-end test cases.
rasa llm finetune prepare-data | Prepares data to fine-tune a base model for the task of command generation.
rasa inspect | Opens the Rasa Inspector.
rasa data split nlu | Performs an 80/20 split of your NLU training data.
rasa data validate | Checks the domain, NLU, flows, and conversation data for inconsistencies.
rasa export | Exports conversations from a tracker store to an event broker.
rasa markers upload | Uploads marker configurations to the Analytics Data Pipeline.
rasa license | Displays licensing information.
rasa -h | Shows all available commands.
rasa --version | Shows version information about Rasa Pro, Python, and the expiration date of the Rasa Pro license.

The following commands are only relevant if you are building NLU-based assistants.

Command | Effect
rasa interactive | Starts an interactive learning session to create new training data by chatting to your assistant.
rasa visualize | Generates a visual representation of your stories.
rasa test | Tests a trained Rasa model on any files starting with test_.
rasa data split stories | Does the same as rasa data split nlu, but for your stories data.
rasa data convert | Converts training data between different formats.
rasa data migrate | Migrates a 2.0 domain to the 3.0 format.

The following commands are only relevant if you are using Rasa Studio.

Command | Effect
rasa studio download | Downloads the latest assistant data from Rasa Studio.
rasa studio train | Trains your assistant with data from Rasa Studio.
rasa studio upload | Uploads your assistant data to Rasa Studio.
rasa studio config | Updates the global.yml file with the Studio config.
rasa studio login | Retrieves credentials from Rasa Studio.
note

If you run into character encoding issues on Windows, such as UnicodeEncodeError: 'charmap' codec can't encode character ..., or if the terminal does not display colored messages properly, prepend winpty to the command you would like to run. For example, use winpty rasa init instead of rasa init.

Log Level

Rasa produces log messages at several different levels (e.g. warning, info, error, and so on). You can control which level of logs you would like to see with --verbose (same as -v) or --debug (same as -vv) as optional command line arguments. See each command below for more explanation of what these arguments mean.

In addition to CLI arguments, several environment variables allow you to control log output in a more granular way. With these environment variables, you can configure log levels for messages created by external libraries such as Matplotlib, Pika, and Kafka. These variables follow the standard Python logging levels. Currently, the following environment variables are supported:

  1. LOG_LEVEL_LIBRARIES: This is the general environment variable to configure the log level for the main libraries Rasa uses. It covers TensorFlow, asyncio, APScheduler, SocketIO, Matplotlib, RabbitMQ, and Kafka.
  2. LOG_LEVEL_MATPLOTLIB: This is the specialized environment variable to configure the log level only for Matplotlib.
  3. LOG_LEVEL_RABBITMQ: This is the specialized environment variable to configure the log level only for AMQP libraries; at the moment it handles log levels from aio_pika and aiormq.
  4. LOG_LEVEL_KAFKA: This is the specialized environment variable to configure the log level only for Kafka.
  5. LOG_LEVEL_PRESIDIO: This is the specialized environment variable to configure the log level only for Presidio; at the moment it handles log levels from presidio_analyzer and presidio_anonymizer.
  6. LOG_LEVEL_FAKER: This is the specialized environment variable to configure the log level only for Faker.
  7. LOG_LEVEL_MLFLOW: This is the specialized environment variable to configure the log level only for MLflow.

The general configuration (LOG_LEVEL_LIBRARIES) has lower priority than library-specific configuration (LOG_LEVEL_MATPLOTLIB, LOG_LEVEL_RABBITMQ, etc.), and the CLI parameter sets the lowest level of log messages that will be handled. This means the variables can be used together with a predictable result. As an example:

LOG_LEVEL_LIBRARIES=ERROR LOG_LEVEL_MATPLOTLIB=WARNING LOG_LEVEL_KAFKA=DEBUG rasa shell --debug

Running the above command will show:

  • messages with DEBUG level and higher by default (due to --debug)
  • messages with WARNING level and higher for Matplotlib
  • messages with DEBUG level and higher for Kafka
  • messages with ERROR level and higher for other libraries not configured

Note that the CLI setting determines the lowest level of log messages to be handled; hence the following command will set the log level to INFO (due to --verbose) and no debug messages will be shown (the library-level configuration will have no effect):

LOG_LEVEL_LIBRARIES=DEBUG LOG_LEVEL_MATPLOTLIB=DEBUG rasa shell --verbose

As an aside, the CLI log level is set on the root logger (which owns the important coloredlogs handler); this means that even if an environment variable sets a library logger to a lower level, the root logger will reject messages from that library. If not specified, the CLI log level defaults to INFO.

Log Level LLM Components

Rasa provides enhanced control over the debugging process of LLM-driven components via fine-grained, customizable logging configured through environment variables.

For example, set the LOG_LEVEL_LLM environment variable to enable detailed logging at the desired level for all LLM components, or target the component you are debugging by setting, for example, the LOG_LEVEL_LLM_ENTERPRISE_SEARCH environment variable:

export LOG_LEVEL_LLM=INFO
export LOG_LEVEL_LLM_COMMAND_GENERATOR=INFO
export LOG_LEVEL_LLM_ENTERPRISE_SEARCH=DEBUG
export LOG_LEVEL_LLM_INTENTLESS_POLICY=INFO
export LOG_LEVEL_LLM_REPHRASER=INFO
export LOG_LEVEL_NLU_COMMAND_ADAPTER=INFO
export LOG_LEVEL_LLM_BASED_ROUTER=INFO

These settings override the logging level for the specified components.

The LOG_LEVEL_LLM_COMMAND_GENERATOR variable applies to all types of LLM-based command generators.
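For example, to get detailed output from the command generator while keeping the other LLM components quieter, you might combine these variables with a CLI command as in the following sketch (the chosen levels and command are illustrative):

# Illustrative: DEBUG output for command generators only, INFO for other LLM components
LOG_LEVEL_LLM=INFO LOG_LEVEL_LLM_COMMAND_GENERATOR=DEBUG rasa shell --debug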

Custom logging configuration

New in 3.4

The Rasa CLI now includes a new argument, --logging-config-file, which accepts a YAML file as its value.

You can now configure any logging formatters or handlers in a separate YAML file. The logging config YAML file must follow the Python built-in dictionary schema, otherwise it will fail validation. You can pass this file as an argument to the --logging-config-file CLI option and use it with any of the rasa commands.

Custom logging configuration example

The following example illustrates how to customize the logging configuration using a YAML file. Here we define a custom formatter, a stream handler for the root logger and a file handler for the rasa logger.

version: 1
disable_existing_loggers: false
formatters:
  customFormatter:
    format: "{\"time\": \"%(asctime)s\", \"name\": \"[%(name)s]\", \"levelname\": \"%(levelname)s\", \"message\": \"%(message)s\"}"
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: customFormatter
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    filename: "rasa_debug.log"
    level: DEBUG
    formatter: customFormatter
loggers:
  root:
    handlers: [console]
  rasa:
    handlers: [file]
    propagate: 0
info

In Rasa Pro 3.9, running rasa shell or rasa interactive in debug mode could result in BlockingIOError when using the default logging configuration. This issue is resolved by using a custom logging configuration file. If you encounter this issue, you can use the above example to create a custom logging configuration file and pass it to the --logging-config-file argument.
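As a sketch, assuming you saved the configuration above as logging_config.yml (the file name is illustrative), you could pass it to any command like this:

rasa shell --debug --logging-config-file logging_config.yml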

rasa init

This command sets up a complete assistant for you with some example training data:

rasa init

With no arguments, rasa init creates the following files:

.
├── actions
│   ├── __init__.py
│   └── actions.py
├── config.yml
├── credentials.yml
├── data
│   ├── nlu.yml
│   └── stories.yml
├── domain.yml
├── endpoints.yml
├── models
│   └── <timestamp>.tar.gz
└── tests
    └── test_stories.yml

It will ask you if you want to train an initial model using this data. If you answer no, the models directory will be empty.

This is the best way to get started writing an NLU assistant. You can run rasa train, rasa shell and rasa test without any additional configuration.

Rasa supplies two other templates in addition to the default NLU template described above. Both of these are great ways to get started building your own CALM bots:

  • rasa init --template calm generates a CALM assistant with flows and a custom action to manage a simple contact list.
  • rasa init --template tutorial generates the codebase used in the CALM Tutorial.

rasa train

The following command trains a Rasa model:

rasa train

If you have existing models in your directory (under models/ by default), only the parts of your model that have changed will be re-trained. For example, if you edit your NLU training data and nothing else, only the NLU part will be trained.

If you want to train an NLU or dialogue model individually, you can run rasa train nlu or rasa train core. If you provide training data for only one of these, rasa train will fall back to the corresponding command by default.

rasa train will store the trained model in the directory defined by --out, models/ by default. The model is named <timestamp>.tar.gz by default. If you want to name your model differently, you can specify the name using the --fixed-model-name flag.

By default, validation is run before training the model. If you want to skip validation, you can use the --skip-validation flag. If you want to fail on validation warnings, you can use the --fail-on-validation-warnings flag. The --validation-max-history argument is analogous to the --max-history argument of rasa data validate.
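For example, a minimal sketch combining the flags described above (the model name is illustrative):

# Treat validation warnings as failures and name the resulting model
rasa train --fail-on-validation-warnings --fixed-model-name my-assistant

# Skip validation entirely
rasa train --skip-validation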

Run rasa train --help to see the full list of arguments.

See the section on data augmentation for info on how data augmentation works and how to choose a value for the --augmentation flag. Note that TEDPolicy is the only policy affected by data augmentation.

See the following section on incremental training for more information about the --epoch-fraction argument.

Incremental training

New in 2.2

This feature is experimental. We introduce experimental features to get feedback from our community, so we encourage you to try it out! However, the functionality might be changed or removed in the future. If you have feedback (positive or negative) please share it with us on the Rasa Forum.

In order to improve the performance of an assistant, it's helpful to practice CDD and add new training examples based on how your users have talked to your assistant. You can use rasa train --finetune to initialize the pipeline with an already trained model and further finetune it on the new training dataset that includes the additional training examples. This will help reduce the training time of the new model.

By default, the command picks up the latest model in the models/ directory. If you have a specific model which you want to improve, you may specify the path to this by running rasa train --finetune <path to model to finetune>. Finetuning a model usually requires fewer epochs to train machine learning components like DIETClassifier, ResponseSelector and TEDPolicy compared to training from scratch. Either use a model configuration for finetuning which defines fewer epochs than before or use the flag --epoch-fraction. --epoch-fraction will use a fraction of the epochs specified for each machine learning component in the model configuration file. For example, if DIETClassifier is configured to use 100 epochs, specifying --epoch-fraction 0.5 will only use 50 epochs for finetuning.
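For example, a sketch of finetuning with a reduced epoch budget (the model path and fraction are illustrative):

# Finetune the latest model in models/ using 50% of the configured epochs
rasa train --finetune --epoch-fraction 0.5

# Finetune a specific base model instead of the latest one
rasa train --finetune models/my-base-model.tar.gz --epoch-fraction 0.5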

You can also finetune an NLU-only or dialogue management-only model by using rasa train nlu --finetune and rasa train core --finetune respectively.

To be able to fine-tune a model, the following conditions must be met:

  1. The configuration supplied should be exactly the same as the configuration used to train the model which is being finetuned. The only parameter that you can change is epochs for the individual machine learning components and policies.

  2. The set of labels (intents, actions, entities, and slots) for which the base model is trained should be exactly the same as the ones present in the training data used for finetuning. This means that you cannot add new intent, action, entity, or slot labels to your training data during incremental training. You can still add new training examples for each of the existing labels. If you have added or removed labels in the training data, the pipeline needs to be trained from scratch.

  3. The model to be finetuned was trained with a Rasa version that is at least the MINIMUM_COMPATIBLE_VERSION of the currently installed Rasa version.

rasa interactive

You can start an interactive learning session by running:

rasa interactive

This will first train a model and then start an interactive shell session. You can then correct your assistant's predictions as you talk to it. If UnexpecTEDIntentPolicy is included in the pipeline, action_unlikely_intent can be triggered at any conversation turn. When it is, the following message will be displayed:

The bot wants to run 'action_unlikely_intent' to indicate that the last user message was unexpected
at this point in the conversation. Check out UnexpecTEDIntentPolicy docs to learn more.

As the message states, this is an indication that you have explored a conversation path which is unexpected according to the current set of training stories and hence adding this path to training stories is recommended. Like other bot actions, you can choose to confirm or deny running this action.

If you provide a trained model using the --model argument, training is skipped and that model will be loaded instead.

During interactive learning, Rasa will plot the current conversation and a few similar conversations from the training data to help you keep track of where you are. You can view the visualization at http://localhost:5005/visualization.html as soon as the session has started. This diagram can take some time to generate. To skip the visualization, run rasa interactive --skip-visualization.

Add the assistant_id key introduced in 3.5

Running interactive learning with a pre-trained model whose metadata does not include the assistant_id will exit with an error. If this happens, add the required key with a unique identifier value in config.yml and re-run training.

Run rasa interactive --help to see the full list of arguments.

rasa shell

You can start a chat session by running:

rasa shell

By default, this will load up the latest trained model. You can specify a different model to be loaded by using the --model flag.

If you start the shell with an NLU-only model, rasa shell will output the intents and entities predicted for any message you enter.

If you have trained a combined Rasa model but only want to see what your model extracts as intents and entities from text, you can use the command rasa shell nlu.

To increase the logging level for debugging, run:

rasa shell --debug
note

In order to see the typical greetings and/or session start behavior you might see in an external channel, you will need to explicitly send /session_start as the first message. Otherwise, the session start behavior will begin as described in Session configuration.

The following arguments can be used to configure the command. Most arguments overlap with rasa run; see the following section for more info on those arguments.

Note that the --connector argument will always be set to cmdline when running rasa shell. This means all credentials in your credentials file will be ignored, and if you provide your own value for the --connector argument it will also be ignored.

Run rasa shell --help to see the full list of arguments.

rasa run

To start a server running your trained model, run:

rasa run

By default the Rasa server uses HTTP for its communication. To secure the communication with SSL and run the server on HTTPS, you need to provide a valid certificate and the corresponding private key file. You can specify these files as part of the rasa run command. If you encrypted your keyfile with a password during creation, you need to add the --ssl-password as well.

rasa run --ssl-certificate myssl.crt --ssl-keyfile myssl.key --ssl-password mypassword

Rasa by default listens on each available network interface. You can limit this to a specific network interface using the -i command line option.

rasa run -i 192.168.69.150

Rasa will by default connect to all channels specified in your credentials file. To connect to a single channel and ignore all other channels in your credentials file, specify the name of the channel in the --connector argument.

rasa run --connector rest

The name of the channel should match the name you specify in your credentials file. For supported channels see the page about messaging and voice channels.

Run rasa run --help to see the full list of arguments.

For more information on important additional parameters, see Model Storage.

See the Rasa REST API page for detailed documentation of all the endpoints.

rasa run actions

To start an action server with the Rasa SDK, run:

rasa run actions

Run rasa run actions --help to see the full list of arguments.

rasa visualize

To generate a graph of your stories in the browser, run:

rasa visualize

If your stories are located somewhere other than the default location data/, you can specify their location with the --stories flag.
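For example, if your stories live outside the default data/ directory (the path below is illustrative):

rasa visualize --stories data/core/stories.yml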

Run rasa visualize --help to see the full list of arguments.

rasa test

To evaluate a model on your test data, run:

rasa test

This will test your latest trained model on any end-to-end stories you have defined in files with the test_ prefix. If you want to use a different model, you can specify it using the --model flag.
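For example, a sketch of testing a specific model rather than the latest one (the model path is illustrative):

rasa test --model models/my-assistant.tar.gz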

To evaluate the dialogue and NLU models separately, use the commands below:

rasa test core

and

rasa test nlu

You can find more details on specific arguments for each testing type in Evaluating an NLU Model and Evaluating a Dialogue Management Model.

Run rasa test --help to see the full list of arguments.

rasa test e2e

New in 3.5

You can now use end-to-end testing to test your assistant as a whole, including dialogue management and custom actions.

To run end-to-end testing on your trained model, run:

rasa test e2e

This will test your latest trained model on any end-to-end test cases you have. If you want to use a different model, you can specify it using the --model flag.

New in 3.10

By adding the --coverage-report flag, you obtain a report describing how well your end-to-end tests cover the assistant's flows, in terms of the share of steps tested per flow. The report includes a histogram of tested commands, and you can specify the output path with the --coverage-output-path flag.

This feature is currently released in a beta version. The feature might change in the future. If you want to enable this beta feature, set the environment variable RASA_PRO_BETA_FINE_TUNING_RECIPE=true.
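For example, a sketch of a coverage run that writes the report to a custom folder (the output path is illustrative; the beta environment variable is the one named above):

RASA_PRO_BETA_FINE_TUNING_RECIPE=true rasa test e2e --coverage-report --coverage-output-path e2e_coverage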

Run rasa test e2e --help to see the full list of arguments.

rasa llm finetune prepare-data

New in 3.10

This command is part of the fine-tuning recipe available starting with version 3.10.0. As this feature is a beta feature, please set the environment variable RASA_PRO_BETA_FINETUNING_RECIPE to true to enable it.

This command creates a dataset of prompt-to-command pairs from E2E tests, which can be used to fine-tune a base model for the task of command generation. To execute the command, run:

rasa llm finetune prepare-data <path-to-e2e-test-cases>

Here are some of the arguments available:

positional arguments:
  path-to-e2e-test-cases
                        Input file or folder containing end-to-end test cases. (default: e2e_tests)

options:
  -o OUT, --out OUT     The output folder to store the data to. (default: output)
  -m MODEL, --model MODEL
                        Path to a trained Rasa model. If a directory is specified, it will use the latest model in this directory. (default: models)

Rephrasing Module:
  --num-rephrases {0, ..., 49}
                        Number of rephrases to be generated per user utterance. (default: 10)
  --rephrase-config REPHRASE_CONFIG
                        Path to config file that contains the configuration of the rephrasing module. (default: None)

Train/Test Split Module:
  --train-frac TRAIN_FRAC
                        The amount of data that should go into the training dataset. The value should be >0.0 and <=1.0. (default: 0.8)
  --output-format [{instruction,conversational}]
                        Format of the output file. (default: instruction)
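Putting these options together, a sketch of a run with a custom rephrase count and output folder (the paths and values are illustrative):

rasa llm finetune prepare-data e2e_tests \
  --num-rephrases 5 \
  --train-frac 0.8 \
  --output-format instruction \
  --out finetuning_data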

Run rasa llm finetune prepare-data --help to see all available arguments.

rasa inspect

New in 3.7

This command is part of Rasa's new Conversational AI with Language Models (CALM) approach and available starting with version 3.7.0.

Opens the Rasa Inspector, a debugging tool that offers developers an in-depth look into the conversational mechanics of their Rasa assistant.

Run rasa inspect --help to see the full list of arguments.

rasa data split

To create a train-test split of your NLU training data, run:

rasa data split nlu

This will create an 80/20 split of train/test data by default. Run rasa data split nlu --help to see the full list of arguments.

If you have NLG data for retrieval actions, this will be saved to separate files:

ls train_test_split
nlg_test_data.yml test_data.yml
nlg_training_data.yml training_data.yml

To split your stories, you can use the following command:

rasa data split stories

It has the same arguments as the split nlu command, but loads YAML files with stories and performs a random split. The train_test_split directory will contain all processed YAML files with train_ or test_ prefixes, containing the train and test parts respectively.

rasa data convert nlu

You can convert NLU data from

  • LUIS data format,
  • WIT data format,
  • Dialogflow data format, or
  • JSON

to

  • YAML or
  • JSON

You can start the converter by running:

rasa data convert nlu

You can specify the input file or directory, output file or directory, and the output format. Run rasa data convert nlu --help to see the full list of arguments.
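As a sketch, converting a JSON NLU file to YAML might look like the following; the exact flag names can vary between versions, so confirm them with rasa data convert nlu --help (the file paths are illustrative):

rasa data convert nlu -f yaml --data data/nlu.json --out data/nlu_converted.yml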

rasa data migrate

The domain is the only data file whose format changed between 2.0 and 3.0. You can automatically migrate a 2.0 domain to the 3.0 format.

You can start the migration by running:

rasa data migrate

You can specify the input file or directory and the output file or directory with the following arguments:

rasa data migrate -d DOMAIN --out OUT_PATH

If no arguments are specified, the default domain path (domain.yml) will be used for both input and output files.

This command will also back up your 2.0 domain file(s) into a different original_domain.yml file or directory labeled original_domain.

Note that the slots in the migrated domain will contain mapping conditions if these slots are part of a form's required_slots.

caution

Exceptions will be raised and the migration process terminated if invalid domain files are provided, if the files are already in the 3.0 format, if slots or forms are missing from your original files, or if the slots or forms sections are spread across multiple domain files. This is done to avoid duplication of migrated sections in your domain files. Please make sure all of your slot and form definitions are grouped into a single file.

You can learn more about this command by running:

rasa data migrate --help

rasa data validate

You can check your domain, NLU data, flows or story data for mistakes and inconsistencies. To validate your data, run this command:

rasa data validate

The validator searches for errors in the data, e.g. two intents that have some identical training examples. The validator also checks if you have any stories where different assistant actions follow from the same dialogue history. Conflicts between stories will prevent a model from learning the correct pattern for a dialogue. To learn more about the checks performed by the validator on flows, continue reading in the next section.

Searching for the assistant_id key introduced in 3.5

The validator will check whether the assistant_id key is present in the config file and will issue a warning if this key is missing or if the default value has not been changed.

If you pass a max_history value to one or more policies in your config.yml file, provide the smallest of those values in the validator command using the --max-history <max_history> flag.
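For example, if the smallest max_history used by your policies is 5 (an illustrative value), you would run:

rasa data validate --max-history 5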

Validate flows

The validator will perform the following checks on flows:

  • determine whether flow names or descriptions are unique after stripping punctuation
  • verify whether logical expressions in conditions or collect step rejections are valid pypred expressions
  • determine whether slots used in flows are defined in the domain
  • disallow list slots from being used in flow collect steps: CALM supports only filling slots with values of type int, string or bool in flows.
  • disallow the dialogue_stack internal slot from being used in flows
  • ensure that bool and categorical slots are validated against acceptable values in conditions

For every failure, the validator will log an error and exit the command with exit code 1.

You can validate only the flows by running this command:

rasa data validate flows

Validate story structure

You can also validate only the story structure by running this command:

rasa data validate stories
note

Running rasa data validate does not test if your rules are consistent with your stories. However, during training, the RulePolicy checks for conflicts between rules and stories. Any such conflict will abort training.

Also, if you use end-to-end stories, then this might not capture all conflicts. Specifically, if two user inputs result in different tokens yet exactly the same featurization, then conflicting actions after these inputs may exist but will not be reported by the tool.

To interrupt validation even for minor issues such as unused intents or responses, use the --fail-on-warnings flag.

check your story names

The rasa data validate stories command assumes that all your story names are unique!

You can use rasa data validate with additional arguments, e.g. to specify the location of your data and domain files. Run rasa data validate --help to see the full list of arguments.

rasa export

To export events from a tracker store using an event broker, run:

rasa export

You can specify the location of the environments file, the minimum and maximum timestamps of events that should be published, as well as the conversation IDs that should be published. Run rasa export --help to see the full list of arguments.
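As a sketch, assuming your endpoints.yml defines both a tracker store and an event broker, you could restrict the export by timestamp and conversation ID like this (the values are illustrative):

rasa export --endpoints endpoints.yml \
  --minimum-timestamp 1672531200 \
  --maximum-timestamp 1675209600 \
  --conversation-ids user_1,user_2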

Import conversations into Rasa X/Enterprise

This command is most commonly used to import old conversations into Rasa X/Enterprise to annotate them. Read more about importing conversations into Rasa X/Enterprise.

rasa markers upload

New in 3.6

This command is available from Rasa Pro 3.6.0 and requires the Rasa Analytics Data Pipeline.

This command applies to markers and their real-time processing. Running this command validates the marker configuration file against the domain file and uploads the configuration to the Analytics Data Pipeline.

Run rasa markers upload --help to see the full list of arguments.

rasa license

New in 3.3

This command was introduced.

Use rasa license to display information about licensing in Rasa Pro, especially information about third-party dependency licenses.

Run rasa license --help to see the full list of arguments.

rasa studio download

New in 3.7

This command is available from Rasa Pro 3.7.0 and requires Rasa Studio.

This command downloads the data from Rasa Studio and saves it to files inside the data folder. If the local files use a single domain file, it is updated accordingly. If there is a domain folder instead, domain changes are written to <domain_folder>/studio_domain.yml.

The command downloads Studio data that is available in Studio but not in local files. The following data is supported:

  • configuration
  • endpoints
  • flows (for CALM assistants)
  • responses
  • slots
  • custom action declarations
  • intents
  • entities

The --overwrite flag can be used to overwrite existing data in local files when a primitive has the same ID as the one downloaded from Rasa Studio. Special cases:

  • If an intent exists in local files but Studio has examples that are missing locally, those examples will be downloaded.
  • If local config and endpoints files exist during the download of a CALM assistant, the user needs to confirm their intent to overwrite them, even when the --overwrite flag is provided.

Example:

rasa studio download my_awesome_assistant -d my_domain_folder

Run rasa studio download --help to see the full list of arguments.

rasa studio train

New in 3.7

This command is available from Rasa Pro 3.7.0 and requires Rasa Studio.

This command is analogous to rasa train: it combines data from local files and Rasa Studio to train a model. If both Studio and the local files have a primitive with the same ID, the local one is used for training.

Example:

rasa studio train my_awesome_assistant -d my_domain_folder

Run rasa studio train --help to see the full list of arguments.

rasa studio upload

New in 3.7

This command is available from Rasa Pro 3.7.0 and requires Rasa Studio.

Uploads an assistant from local files to Rasa Studio.

Import of NLU-based assistants

For NLU-based assistants, this command uploads intent and entity definitions to an existing assistant in Rasa Studio. If no arguments specify which intents or entities to upload, all intents and entities are uploaded. When uploading an intent, all entities used in the annotations of that intent's utterance examples are uploaded as well.

tip

At the moment, only some intents and entities can be uploaded to Studio. The following can't be uploaded:

  • Retrieval intents
  • Entities that have entity_group
  • Intents with use_entities and ignore_entities
  • Entities with influence_conversation

Example:

rasa studio upload

Run rasa studio upload --help to see the full list of arguments.

Import of CALM assistants

To upload a CALM assistant to Rasa Studio, run this command with the --calm flag.

Important!
  • When uploading a CALM assistant, a new Rasa Studio assistant with the specified name will be created. This is different from the NLU-based assistant upload, which reuses an existing Rasa Studio assistant.

  • During a CALM upload, the config and endpoints are also uploaded and can be edited in the UI.

Example:

rasa studio upload --calm

Possible errors

Assistant name errors

These include the following:

  • Assistant with name <assistant_name> already exists
  • <assistant_name> is not a valid name

A valid assistant name does not exceed 128 characters and does not contain spaces.

Invalid YAML errors

If something is wrong with the structure of the YAML files, a specific error will be logged. You will see these errors when, for example, a required field is missing for an action, slot, response, config, or flow.

Examples:

Invalid domain: responses.utter_greeting.0.text: Required
Invalid flows: flows.transfer_money.description: Required
Reference errors

If a flow references a response, slot, action, or another flow (via a link step) that cannot be found, the following errors will be logged:

Can't find <response/slot/action> utter_ask_add_contact_handle in domain
Can't find flow <flow_name> in flows
Unsupported feature errors

Not all the features available in Rasa Pro are supported by Rasa Studio. Trying to import an assistant with unsupported features will result in an error. To find out which versions of Studio support the version of Rasa Pro you are using, check the Studio compatibility matrix.

Examples:

Flows with cycles are not supported, flow: <flow_name>
Comparing two slots is not supported. Condition: slots.recurrent_payment_end_date < slots.recurrent_payment_start_date
Having multiple rejections on one slot is not supported, collect: <slot_name>
Authentication errors

You need to be logged into Rasa Studio before uploading. Use the rasa studio login command.

rasa studio config

New in 3.7

This command is available from Rasa Pro 3.7.0 and requires Rasa Studio.

This command prompts for the parameters of your Rasa Studio installation and configures rasa to target that Rasa Studio instance when executing rasa studio commands. The configuration is saved to $HOME/.config/rasa/global.yml.

The command will use default arguments for the configuration of the authentication server (realm name, client ID, and authentication URL). If you want to use a different configuration, you can specify the parameters by running rasa studio config --advanced.

The command will overwrite the existing configuration file with the new configuration.

Example:

rasa studio config

The command will use SSL strict verification by default to verify the connection to the Rasa Studio authentication server. If you want to skip the strict verification of this connection, you can use the --disable-verify or -x flag:

rasa studio config --disable-verify

Run rasa studio config --help to see the full list of arguments.

rasa studio login

New in 3.7

This command is available from Rasa Pro 3.7.0 and requires Rasa Studio.

This command is used to retrieve the access token from Rasa Studio. All other studio commands use this token to authenticate with Rasa Studio. The token is saved to $HOME/.config/rasa/studio_token.yaml.

Example:

rasa studio login --username my_user_name --password my_password

Run rasa studio login --help to see the full list of arguments.

Deprecated commands

The following commands are deprecated and will be removed in a future release.

rasa evaluate markers

caution

This feature is currently experimental and might change or be removed in the future. Share your feedback in the forum to help us make it production-ready.

The following command applies the markers you defined in your marker configuration file to pre-existing dialogues stored in your tracker store, and produces .csv files containing the extracted markers and summary statistics:

rasa evaluate markers all extracted_markers.csv

Run rasa evaluate markers --help to see the full list of arguments.