The command line interface (CLI) gives you easy-to-remember commands for common tasks. This page describes the behavior of the commands and the parameters you can pass to them.
rasa init
Creates a new project with example training data, actions, and config files.
rasa train
Trains a model using your NLU data and stories, saves trained model in ./models.
rasa interactive
Starts an interactive learning session to create new training data by chatting to your assistant.
rasa shell
Loads your trained model and lets you talk to your assistant on the command line.
rasa run
Starts a server with your trained model.
rasa run actions
Starts an action server using the Rasa SDK.
rasa visualize
Generates a visual representation of your stories.
rasa test
Tests a trained Rasa model on any files starting with test_.
rasa test e2e
Runs end-to-end testing fully integrated with the action server; serves as acceptance testing.
rasa data split nlu
Performs an 80/20 split of your NLU training data.
rasa data split stories
Does the same as rasa data split nlu, but for your stories data.
rasa data convert
Converts training data between different formats.
rasa data migrate
Migrates a 2.0 domain to the 3.0 format.
rasa data validate
Checks the domain, NLU and conversation data for inconsistencies.
rasa export
Exports conversations from a tracker store to an event broker.
rasa evaluate markers
Extracts markers from an existing tracker store.
rasa markers upload
Uploads marker configurations to the Analytics Data Pipeline.
rasa license
Displays licensing information.
rasa -h
Shows all available commands.
note
If you run into character encoding issues on Windows like: UnicodeEncodeError: 'charmap' codec can't encode character ... or
the terminal is not displaying colored messages properly, prepend winpty to the command you would like to run.
For example, use winpty rasa init instead of rasa init.
Rasa produces log messages at several different levels (e.g. warning, info, error and so on). You can control which level of logs you would like to see with --verbose (same as -v) or --debug (same as -vv) as optional command line arguments. See each command below for more explanation of what these arguments mean.
In addition to CLI arguments, several environment variables allow you to control log output in a more granular way. With these environment variables, you can configure log levels for messages created by external libraries such as Matplotlib, Pika, and Kafka. These variables follow the standard logging levels in Python. Currently, the following environment variables are supported:
LOG_LEVEL_LIBRARIES: This is the general environment variable to configure the log level for the main libraries Rasa uses. It covers TensorFlow, asyncio, APScheduler, SocketIO, Matplotlib, RabbitMQ, and Kafka.
LOG_LEVEL_MATPLOTLIB: This is the specialized environment variable to configure log level only for Matplotlib.
LOG_LEVEL_RABBITMQ: This is the specialized environment variable to configure the log level only for AMQP libraries; at the moment it handles log levels from aio_pika and aiormq.
LOG_LEVEL_KAFKA: This is the specialized environment variable to configure the log level only for Kafka.
LOG_LEVEL_PRESIDIO: This is the specialized environment variable to configure the log level only for Presidio; at the moment it handles log levels from presidio_analyzer and presidio_anonymizer.
LOG_LEVEL_FAKER: This is the specialized environment variable to configure log level only for Faker.
General configuration (LOG_LEVEL_LIBRARIES) has lower priority than library-specific configuration (LOG_LEVEL_MATPLOTLIB, LOG_LEVEL_RABBITMQ, etc.), and the CLI parameter sets the lowest level of log messages that will be handled. This means the variables can be used together with a predictable result. As an example:
LOG_LEVEL_LIBRARIES=ERROR LOG_LEVEL_MATPLOTLIB=WARNING LOG_LEVEL_KAFKA=DEBUG rasa shell --debug
Running the above command will show:
messages with DEBUG level and higher by default (due to --debug)
messages with WARNING level and higher for Matplotlib
messages with DEBUG level and higher for Kafka
messages with ERROR level and higher for other libraries not configured
Note that the CLI configuration sets the lowest level of log messages to be handled; hence the following command will set the log level to INFO (due to --verbose) and no debug messages will be seen (library-level configuration will not have any effect):
LOG_LEVEL_LIBRARIES=DEBUG LOG_LEVEL_MATPLOTLIB=DEBUG rasa shell --verbose
As an aside, the CLI log level is set at the root logger (which has the important handler, the coloredlogs handler); this means that even if an environment variable sets a library logger to a lower level, the root logger will reject messages from that library. If not specified, the CLI log level is set to INFO.
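This precedence can be illustrated with Python's standard logging module, on which Rasa's logging is built. In this sketch (not Rasa's actual setup), a handler on the root logger stands in for the coloredlogs handler set by the CLI, and a lowered library logger level stands in for an environment variable:

```python
import io
import logging

# The root handler level corresponds to the CLI flag (INFO, as with --verbose);
# in Rasa this is the coloredlogs handler.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setLevel(logging.INFO)
logging.getLogger().addHandler(handler)

# Lowering a library logger corresponds to e.g. LOG_LEVEL_MATPLOTLIB=DEBUG.
lib_logger = logging.getLogger("matplotlib")
lib_logger.setLevel(logging.DEBUG)

lib_logger.debug("detailed debug message")  # created, but rejected by the root handler
lib_logger.info("informational message")    # passes the root handler's INFO threshold

print(stream.getvalue().strip())  # only the INFO message survives
```

Even though the library logger emits the DEBUG record, the root handler's INFO threshold filters it out, which mirrors why library-level environment variables cannot go below the CLI log level.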
The Rasa CLI includes the argument --logging-config-file, which accepts a YAML file as its value.
You can now configure any logging formatters or handlers in a separate YAML file.
The logging config YAML file must follow the Python built-in dictionary schema; otherwise it will fail validation.
You can pass this file as argument to the --logging-config-file CLI option and use it with any of the rasa commands.
This command sets up a complete assistant for you with some example training data:
rasa init
It creates the following files:
.
├── actions
│   ├── __init__.py
│   └── actions.py
├── config.yml
├── credentials.yml
├── data
│   ├── nlu.yml
│   └── stories.yml
├── domain.yml
├── endpoints.yml
├── models
│   └── <timestamp>.tar.gz
└── tests
    └── test_stories.yml
It will ask you if you want to train an initial model using this data.
If you answer no, the models directory will be empty.
Any of the default CLI commands will expect this project setup, so this is the
best way to get started. You can run rasa train, rasa shell and rasa test
without any additional configuration.
If you have existing models in your directory (under models/ by default), only
the parts of your model that have changed will be re-trained. For example, if you edit
your NLU training data and nothing else, only the NLU part will be trained.
If you want to train an NLU or dialogue model individually, you can run
rasa train nlu or rasa train core. If you provide training data for only one of
these, rasa train will fall back to one of these commands by default.
rasa train will store the trained model in the directory defined by --out, models/ by default.
The name of the model by default is <timestamp>.tar.gz. If you want to name your model differently,
you can specify the name using the --fixed-model-name flag.
By default, validation is run before training the model. If you want to skip validation, you can use the --skip-validation flag.
If you want to fail on validation warnings, you can use the --fail-on-validation-warnings flag.
The --validation-max-history is analogous to the --max-history argument of rasa data validate.
The following arguments can be used to configure the training process:
usage: rasa train [-h] [-v] [-vv] [--quiet]
[--logging-config-file LOGGING_CONFIG_FILE]
[--data DATA [DATA ...]] [-c CONFIG] [-d DOMAIN] [--out OUT]
  -c CONFIG, --config CONFIG
                        The policy and NLU pipeline configuration of your bot.
                        (default: config.yml)
-d DOMAIN, --domain DOMAIN
Domain specification. This can be a single YAML file,
or a directory that contains several files with domain
specifications in it. The content of these files will
be read and merged together. (default: domain.yml)
--out OUT Directory where your models should be stored.
(default: models)
--dry-run If enabled, no actual training will be performed.
Instead, it will be determined whether a model should
be re-trained and this information will be printed as
the output. The return code is a 4-bit bitmask that
can also be used to determine what exactly needs to be
retrained: - 0 means that no extensive training is
required (note that the responses still might require
updating by running 'rasa train'). - 1 means the model
needs to be retrained - 8 means the training was
forced (--force argument is specified) (default:
False)
--skip-validation Skip validation step before training. (default: False)
--fail-on-validation-warnings
Fail on validation warnings. If omitted only errors
will exit with a non zero status code (default: False)
--validation-max-history VALIDATION_MAX_HISTORY
Number of turns taken into account for story structure
validation. (default: None)
--augmentation AUGMENTATION
How much data augmentation to use during training.
(default: 50)
--debug-plots If enabled, will create plots showing checkpoints and
their connections between story blocks in a file
called `story_blocks_connections.html`. (default:
False)
--num-threads NUM_THREADS
Maximum amount of threads to use when training.
(default: None)
--fixed-model-name FIXED_MODEL_NAME
If set, the name of the model file/directory will be
set to the given name. (default: None)
--persist-nlu-data Persist the NLU training data in the saved model.
(default: False)
--force Force a model training even if the data has not
changed. (default: False)
--finetune [FINETUNE]
Fine-tune a previously trained model. If no model path
is provided, Rasa Open Source will try to finetune the
latest trained model from the model directory
specified via '--out'. (default: None)
--epoch-fraction EPOCH_FRACTION
Fraction of epochs which are currently specified in
the model configuration which should be used when
finetuning a model. (default: None)
--endpoints ENDPOINTS
Configuration file for the connectors as a yml file.
(default: endpoints.yml)
Python Logging Options:
You can control level of log messages printed. In addition to these
arguments, a more fine grained configuration can be achieved with
environment variables. See online documentation for more info.
-v, --verbose Be verbose. Sets logging level to INFO. (default:
None)
-vv, --debug Print lots of debugging statements. Sets logging level
to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default:
None)
--logging-config-file LOGGING_CONFIG_FILE
If set, the name of the logging configuration file
will be set to the given name. (default: None)
See the section on data augmentation for info on how data augmentation works
and how to choose a value for the flag. Note that TEDPolicy is the only policy affected by data augmentation.
See the following section on incremental training for more information about the --epoch-fraction argument.
This feature is experimental.
We introduce experimental features to get feedback from our community, so we encourage you to try it out!
However, the functionality might be changed or removed in the future.
If you have feedback (positive or negative) please share it with us on the Rasa Forum.
In order to improve the performance of an assistant, it's helpful to practice CDD
and add new training examples based on how your users have talked to your assistant. You can use rasa train --finetune
to initialize the pipeline with an already trained model and further finetune it on the
new training dataset that includes the additional training examples. This will help reduce the
training time of the new model.
By default, the command picks up the latest model in the models/ directory. If you have a specific model
which you want to improve, you may specify the path to this by
running rasa train --finetune <path to model to finetune>. Finetuning a model usually
requires fewer epochs to train machine learning components like DIETClassifier, ResponseSelector and TEDPolicy compared to training from scratch.
Either use a model configuration for finetuning
which defines fewer epochs than before or use the flag
--epoch-fraction. --epoch-fraction will use a fraction of the epochs specified for each machine learning component
in the model configuration file. For example, if DIETClassifier is configured to use 100 epochs,
specifying --epoch-fraction 0.5 will only use 50 epochs for finetuning.
You can also finetune an NLU-only or dialogue management-only model by using
rasa train nlu --finetune and rasa train core --finetune respectively.
To be able to finetune a model, the following conditions must be met:
The configuration supplied should be exactly the same as the
configuration used to train the model which is being finetuned.
The only parameter that you can change is epochs for the individual machine learning components and policies.
The set of labels (intents, actions, entities and slots) for which the base model is trained
should be exactly the same as the ones present in the training data used for finetuning. This
means that you cannot add new intent, action, entity or slot labels to your training data
during incremental training. You can still add new training examples for each of the existing
labels. If you have added/removed labels in the training data, the pipeline needs to be trained
from scratch.
The model to be finetuned was trained with a version of Rasa no older than the MINIMUM_COMPATIBLE_VERSION of the currently installed Rasa version.
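The label-set condition above can be checked with a simple set comparison before attempting incremental training. A sketch, with purely illustrative intent names:

```python
# A sketch of the pre-finetuning check described above: incremental training
# only works if the new data adds or removes no labels relative to the base
# model. The label sets here are illustrative, not from a real model.
base_model_intents = {"greet", "goodbye", "order_pizza"}
new_training_intents = {"greet", "goodbye", "order_pizza", "cancel_order"}

added = new_training_intents - base_model_intents
removed = base_model_intents - new_training_intents

if added or removed:
    # A new label appeared (or one disappeared): train from scratch instead.
    print(f"Train from scratch: added={sorted(added)}, removed={sorted(removed)}")
else:
    print("Finetuning is possible: label sets match.")
```

The same check applies to actions, entities and slots; any difference in these sets means the pipeline must be trained from scratch.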
You can start an interactive learning session by running:
rasa interactive
This will first train a model and then start an interactive shell session.
You can then correct your assistant's predictions as you talk to it.
If UnexpecTEDIntentPolicy is
included in the pipeline, action_unlikely_intent
can be triggered at any conversation turn. Subsequently, the following message will be displayed:
The bot wants to run 'action_unlikely_intent' to indicate that the last user message was unexpected
at this point in the conversation. Check out UnexpecTEDIntentPolicy docs to learn more.
As the message states, this is an indication that you have explored a conversation path
which is unexpected according to the current set of training stories and hence adding this
path to training stories is recommended. Like other bot actions, you can choose to confirm
or deny running this action.
If you provide a trained model using the --model argument, training is skipped
and that model will be loaded instead.
During interactive learning, Rasa will plot the current conversation
and a few similar conversations from the training data to help you
keep track of where you are. You can view the visualization
at http://localhost:5005/visualization.html
as soon as the session has started. This diagram can take some time to generate.
To skip the visualization, run rasa interactive --skip-visualization.
Add the assistant_id key introduced in 3.5
Running interactive learning with a pre-trained model whose metadata does not include the assistant_id
will exit with an error. If this happens, add the required key with a unique identifier value in config.yml
and re-run training.
The following arguments can be used to configure the interactive learning session:
By default, this will load up the latest trained model.
You can specify a different model to be loaded by using the --model flag.
If you start the shell with an NLU-only model, rasa shell will output the
intents and entities predicted for any message you enter.
If you have trained a combined Rasa model but only want to see what your model
extracts as intents and entities from text, you can use the command rasa shell nlu.
To increase the logging level for debugging, run:
rasa shell --debug
note
In order to see the typical greetings and/or session start behavior you might see
in an external channel, you will need to explicitly send /session_start
as the first message. Otherwise, the session start behavior will begin as described in
Session configuration.
The following arguments can be used to configure the command.
Most arguments overlap with rasa run; see the following section for more info on those arguments.
Note that the --connector argument will always be set to cmdline when running rasa shell.
This means all credentials in your credentials file will be ignored,
and if you provide your own value for the --connector argument it will also be ignored.
To start a server running your trained model, run:
rasa run
By default the Rasa server uses HTTP for its communication. To secure the communication with
SSL and run the server on HTTPS, you need to provide a valid certificate and the corresponding
private key file. You can specify these files as part of the rasa run command.
If you encrypted your keyfile with a password during creation,
you need to add the --ssl-password as well.
rasa run --ssl-certificate myssl.crt --ssl-keyfile myssl.key --ssl-password mypassword
Rasa by default listens on each available network interface. You can limit this to a specific
network interface using the -i command line option.
rasa run -i 192.168.69.150
Rasa will by default connect to all channels specified in your credentials file.
To connect to a single channel and ignore all other channels in your credentials file,
specify the name of the channel in the --connector argument.
This will test your latest trained model on any end-to-end stories you have
defined in files with the test_ prefix.
If you want to use a different model, you can specify it using the --model flag.
To evaluate the dialogue and NLU
models separately, use the commands below:
This will test your latest trained model on any end-to-end test cases you have.
If you want to use a different model, you can specify it using the --model flag.
The following arguments are available for rasa test e2e:
usage: rasa test e2e [-h] [-v] [-vv] [--quiet] [--logging-config-file LOGGING_CONFIG_FILE] [--fail-fast] [-o] [--remote-storage REMOTE_STORAGE] [-m MODEL] [--endpoints ENDPOINTS] [path-to-test-cases]
  --remote-storage REMOTE_STORAGE
                        Set the remote location where your Rasa model is stored, e.g. on AWS. (default: None)
-m MODEL, --model MODEL
Path to a trained Rasa model. If a directory is specified, it will use the latest model in this directory. (default: models)
--endpoints ENDPOINTS
Configuration file for the model server and the connectors as a yml file. (default: endpoints.yml)
Python Logging Options:
You can control level of log messages printed. In addition to these arguments, a more fine grained configuration can be achieved with environment variables. See online documentation for more info.
-v, --verbose Be verbose. Sets logging level to INFO. (default: None)
-vv, --debug Print lots of debugging statements. Sets logging level to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default: None)
--logging-config-file LOGGING_CONFIG_FILE
If set, the name of the logging configuration file will be set to the given name. (default: None)
Testing Settings:
path-to-test-cases Input file or folder containing end-to-end test cases. (default: tests/e2e_test_cases.yml)
--fail-fast Fail the test suite as soon as a unit test fails. (default: False)
To create a train-test split of your NLU training data, run:
rasa data split nlu
This will create an 80/20 split of train/test data by default.
You can specify the training data, the fraction, and the output directory using
the following arguments:
usage: rasa data split nlu [-h] [-v] [-vv] [--quiet]
[--logging-config-file LOGGING_CONFIG_FILE]
[-u NLU] [--training-fraction TRAINING_FRACTION]
[--random-seed RANDOM_SEED] [--out OUT]
options:
-h, --help show this help message and exit
-u NLU, --nlu NLU File or folder containing your NLU data. (default:
data)
--training-fraction TRAINING_FRACTION
Percentage of the data which should be in the training
data. (default: 0.8)
--random-seed RANDOM_SEED
Seed to generate the same train/test split. (default:
None)
--out OUT Directory where the split files should be stored.
(default: train_test_split)
Python Logging Options:
You can control level of log messages printed. In addition to these
arguments, a more fine grained configuration can be achieved with
environment variables. See online documentation for more info.
-v, --verbose Be verbose. Sets logging level to INFO. (default:
None)
-vv, --debug Print lots of debugging statements. Sets logging level
to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default:
None)
--logging-config-file LOGGING_CONFIG_FILE
If set, the name of the logging configuration file
will be set to the given name. (default: None)
If you have NLG data for retrieval actions, this will be saved to separate files:
ls train_test_split
nlg_test_data.yml test_data.yml
nlg_training_data.yml training_data.yml
To split your stories, you can use the following command:
rasa data split stories
It takes the same arguments as the split nlu command, but loads YAML files with stories and performs a random split.
The train_test_split directory will contain all processed YAML files, prefixed with train_ or test_, containing the
train and test portions.
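The splitting behavior of both commands can be sketched as a seeded shuffle followed by a fractional cut. The records below are stand-ins for real training examples, and the seed and fraction mirror the --random-seed and --training-fraction arguments:

```python
import random

# A sketch of the seeded 80/20 split performed by `rasa data split`.
# The strings stand in for NLU examples or stories.
examples = [f"example_{i}" for i in range(10)]

rng = random.Random(42)            # corresponds to --random-seed 42
shuffled = examples[:]
rng.shuffle(shuffled)

cut = int(len(shuffled) * 0.8)     # corresponds to --training-fraction 0.8
train, test = shuffled[:cut], shuffled[cut:]

print(len(train), len(test))       # 8 2
```

Fixing the seed makes the split reproducible: rerunning with the same seed yields the same train/test partition.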
The domain is the only data file whose format changed between 2.0 and 3.0.
You can automatically migrate a 2.0 domain to the 3.0 format.
You can start the migration by running:
rasa data migrate
You can specify the input file or directory and the output file or directory with the following arguments:
rasa data migrate -d DOMAIN --out OUT_PATH
If no arguments are specified, the default domain path (domain.yml) will be used for both input and output files.
This command will also back up your 2.0 domain file(s) into a different original_domain.yml file or
directory labeled original_domain.
Note that the slots in the migrated domain will contain mapping conditions if these
slots are part of a form's required_slots.
caution
Exceptions will be raised and the migration process terminated if invalid domain files are provided, if the files are
already in the 3.0 format, if slots or forms are missing from your original files, or if the slots or forms sections
are spread across multiple domain files.
This is done to avoid duplication of migrated sections in your domain files.
Please make sure all your slots' or forms' definitions are grouped into a single file.
You can check your domain, NLU data, or story data for mistakes and inconsistencies.
To validate your data, run this command:
rasa data validate
The validator searches for errors in the data, e.g. two intents that have some
identical training examples.
The validator also checks if you have any stories where different assistant actions follow from the same
dialogue history. Conflicts between stories will prevent a model from learning the correct
pattern for a dialogue.
Searching for the assistant_id key introduced in 3.5
The validator will check whether the assistant_id key is present in the config file and will issue a warning if this
key is missing or if the default value has not been changed.
If you pass a max_history value to one or more policies in your config.yml file, provide the
smallest of those values in the validator command using the --max-history <max_history> flag.
You can also validate only the story structure by running this command:
rasa data validate stories
note
Running rasa data validate does not test if your rules are consistent with your stories.
However, during training, the RulePolicy checks for conflicts between rules and stories. Any such conflict will abort training.
Also, if you use end-to-end stories, then this might not capture all conflicts. Specifically, if two user inputs
result in different tokens yet exactly the same featurization, then conflicting actions after these inputs
may exist but will not be reported by the tool.
To interrupt validation even for minor issues such as unused intents or responses, use the --fail-on-warnings flag.
check your story names
The rasa data validate stories command assumes that all your story names are unique!
You can use rasa data validate with additional arguments, e.g. to specify the location of your data and
domain files:
usage: rasa data validate [-h] [-v] [-vv] [--quiet]
[--logging-config-file LOGGING_CONFIG_FILE]
[--max-history MAX_HISTORY] [-c CONFIG]
[--fail-on-warnings] [-d DOMAIN]
[--data DATA [DATA ...]]
{stories} ...
positional arguments:
{stories}
stories Checks for inconsistencies in the story files.
options:
-h, --help show this help message and exit
--max-history MAX_HISTORY
Number of turns taken into account for story structure
validation. (default: None)
-c CONFIG, --config CONFIG
The policy and NLU pipeline configuration of your bot.
(default: config.yml)
--fail-on-warnings Fail validation on warnings and errors. If omitted
only errors will result in a non zero exit code.
(default: False)
-d DOMAIN, --domain DOMAIN
Domain specification. This can be a single YAML file,
or a directory that contains several files with domain
specifications in it. The content of these files will
be read and merged together. (default: domain.yml)
--data DATA [DATA ...]
Paths to the files or directories containing Rasa
data. (default: data)
Python Logging Options:
You can control level of log messages printed. In addition to these
arguments, a more fine grained configuration can be achieved with
environment variables. See online documentation for more info.
-v, --verbose Be verbose. Sets logging level to INFO. (default:
None)
-vv, --debug Print lots of debugging statements. Sets logging level
to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default:
None)
--logging-config-file LOGGING_CONFIG_FILE
                        If set, the name of the logging configuration file
                        will be set to the given name. (default: None)
To export events from a tracker store using an event broker, run:
rasa export
You can specify the location of the endpoints file, the minimum and maximum
timestamps of events that should be published, as well as the conversation IDs that
should be published:
This feature is currently experimental and might change or be removed in the future. Share your feedback in the forum to help us make it production-ready.
The following command applies the markers you defined in your marker configuration file,
to pre-existing dialogues stored in your tracker store, and produces .csv files containing
the extracted markers and summary statistics:
rasa evaluate markers all extracted_markers.csv
Use the following arguments to configure the marker extraction process:
output_filename The filename to write the extracted markers to (CSV format).
{first_n,sample,all}
first_n Select trackers sequentially until N are taken.
sample Select trackers by sampling N.
all Select all trackers.
optional arguments:
-h, --help show this help message and exit
--config CONFIG The config file(s) containing marker definitions. This can be a single YAML file, or a directory that contains several files with marker definitions in it. The content of these files will be read and
merged together. (default: markers.yml)
--no-stats Do not compute summary statistics. (default: True)
--stats-file-prefix [STATS_FILE_PREFIX]
The common file prefix of the files where we write out the computed statistics. More precisely, the file prefix must consist of a common path plus a common file prefix, to which suffixes `-overall.csv` and
`-per-session.csv` will be added automatically. (default: stats)
--endpoints ENDPOINTS
Configuration file for the tracker store as a yml file. (default: endpoints.yml)
-d DOMAIN, --domain DOMAIN
Domain specification. This can be a single YAML file, or a directory that contains several files with domain specifications in it. The content of these files will be read and merged together. (default:
domain.yml)
Python Logging Options:
-v, --verbose Be verbose. Sets logging level to INFO. (default: None)
-vv, --debug Print lots of debugging statements. Sets logging level to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default: None)
This command applies to markers and their real-time processing.
Running this command validates the marker configuration file against the domain file and uploads the configuration to the Analytics Data Pipeline.
usage: rasa markers upload [-h] [-v] [-vv] [--quiet]
[--logging-config-file LOGGING_CONFIG_FILE]
[--config CONFIG]
[--rasa-pro-services-url RASA_PRO_SERVICES_URL]
[-d DOMAIN]
optional arguments:
-h, --help show this help message and exit
--config CONFIG The marker configuration file(s) containing marker
definitions. This can be a single YAML file, or a
directory that contains several files with marker
definitions in it. The content of these files will be
read and merged together. (default: markers.yml)
--rasa-pro-services-url RASA_PRO_SERVICES_URL
                        The URL of the Rasa Pro Services instance to upload
                        markers to. The specified URL should not contain a
                        trailing slash. (default: )
-d DOMAIN, --domain DOMAIN
Domain specification. This can be a single YAML file,
or a directory that contains several files with domain
specifications in it. The content of these files will
be read and merged together. (default: domain.yml)
Python Logging Options:
You can control level of log messages printed. In addition to these
arguments, a more fine grained configuration can be achieved with
environment variables. See online documentation for more info.
-v, --verbose Be verbose. Sets logging level to INFO. (default:
None)
-vv, --debug Print lots of debugging statements. Sets logging level
to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default:
None)
--logging-config-file LOGGING_CONFIG_FILE
If set, the name of the logging configuration file
will be set to the given name. (default: None)
Description:
The `rasa markers upload` command allows you to upload markers to the Rasa Pro Services. Markers are custom conversational events that provide additional context for analysis and insights generation. By uploading markers, you can enable real-time analysis and enhance the performance of your Rasa Assistant.
Examples:
Upload Markers to Rasa Pro Services:
rasa markers upload --config markers.yml --rasa-pro-services-url https://example.com/rasa-pro -d domain.yml
You'll need a license to get started with Rasa Pro.
New in 3.3
This command was introduced.
Use rasa license to display information about licensing in Rasa Pro, especially about the
licenses of third-party dependencies.
Here is the list of all possible arguments:
usage: rasa license [-h] [-v] [-vv] [--quiet] [--logging-config-file LOGGING_CONFIG_FILE]
Display licensing information.
options:
-h, --help show this help message and exit
Python Logging Options:
You can control level of log messages printed. In addition to these arguments, a more fine grained configuration can be achieved with environment variables. See online documentation for more info.
-v, --verbose Be verbose. Sets logging level to INFO. (default: None)
-vv, --debug Print lots of debugging statements. Sets logging level to DEBUG. (default: None)
--quiet Be quiet! Sets logging level to WARNING. (default: None)
--logging-config-file LOGGING_CONFIG_FILE
If set, the name of the logging configuration file will be set to the given name. (default: None)