Overview
You can customise many aspects of how Rasa works by modifying the config.yml file.
A minimal configuration for a CALM assistant looks like this:
recipe: default.v1
language: en
assistant_id: 20230405-114328-tranquil-mustard
pipeline:
- name: SingleStepLLMCommandGenerator
policies:
- name: rasa.core.policies.flow_policy.FlowPolicy
For backwards compatibility, running rasa init will create an NLU-based assistant. To create a CALM assistant with the right config.yml, add the additional --template argument:
rasa init --template calm
The recipe, language, and assistant_id keys
The recipe key only needs to be modified if you want to use a custom graph recipe. The vast majority of projects should use the default value "default.v1".
The language key is a 2-letter ISO code for the language your assistant supports.
The assistant_id key should be a unique value and allows you to distinguish multiple deployed assistants. This id is added to each event's metadata, together with the model id. See event brokers for more information.
Note that if the config file does not include this required key, or if the placeholder default value is not replaced, a random assistant name will be generated and added to the configuration every time you run rasa train.
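Taken together, the three keys described above can be sketched like this (the assistant_id shown is just the placeholder generated by rasa init and should be replaced with your own unique value):

```yaml
recipe: default.v1    # default graph recipe; only change this for a custom recipe
language: en          # 2-letter ISO code for the assistant's language
assistant_id: 20230405-114328-tranquil-mustard    # replace with a unique id
```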
Pipeline
The pipeline key lists the components which will be used to process and understand the messages that end users send to your assistant. In a CALM assistant, the output of your component pipeline is a list of commands. The main component in your pipeline is the SingleStepLLMCommandGenerator.
Here is what an example configuration looks like:
pipeline:
- name: SingleStepLLMCommandGenerator
llm:
model_group: openai_llm
flow_retrieval:
embeddings:
model_group: openai_embeddings
user_input:
max_characters: 420
model_groups:
- id: openai_llm
models:
- model: "gpt-4-0613"
provider: "openai"
timeout: 7
temperature: 0.0
- id: openai_embeddings
models:
- model: "text-embedding-ada-002"
provider: "openai"
The full set of configurable parameters is listed here.
All components which make use of LLMs have common configuration parameters, which are listed here.
Policies
The policies key lists the dialogue policies your assistant will use to progress the conversation.
policies:
- name: rasa.core.policies.flow_policy.FlowPolicy
The FlowPolicy currently doesn't have any additional configuration parameters.
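Putting the pieces together, a complete config.yml for a CALM assistant combining the snippets from this page might look like the sketch below (the model names, timeout, and assistant_id are just the example values used above):

```yaml
recipe: default.v1
language: en
assistant_id: 20230405-114328-tranquil-mustard

pipeline:
- name: SingleStepLLMCommandGenerator
  llm:
    model_group: openai_llm
  flow_retrieval:
    embeddings:
      model_group: openai_embeddings
  user_input:
    max_characters: 420

policies:
- name: rasa.core.policies.flow_policy.FlowPolicy

model_groups:
- id: openai_llm
  models:
  - model: "gpt-4-0613"
    provider: "openai"
    timeout: 7
    temperature: 0.0
- id: openai_embeddings
  models:
  - model: "text-embedding-ada-002"
    provider: "openai"
```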