Overview
You can customise many aspects of how your assistant project works by modifying the following files: config.yml, endpoints.yml, and domain.yml.
A minimal configuration for a CALM assistant looks like this:
recipe: default.v1
language: en
assistant_id: 20230405-114328-tranquil-mustard
pipeline:
- name: CompactLLMCommandGenerator
policies:
- name: rasa.core.policies.flow_policy.FlowPolicy
For backwards compatibility, running rasa init will create an NLU-based assistant. To create a CALM assistant with the right config.yml, add the additional --template argument:
rasa init --template calm
Assistant ID
The assistant_id key should be a unique value that allows you to distinguish multiple deployed assistants. This id is added to each event's metadata, together with the model id. See event brokers for more information.
Note that if the config file does not include this required key or the placeholder default value is not replaced, a random assistant name will be generated and added to the configuration every time you run rasa train.
Recipe
The recipe key only needs to be modified if you want to use a custom graph recipe. The vast majority of projects should use the default value "default.v1".
Language
- The language key sets the primary language your assistant supports. Use a two-letter ISO 639-1 code (e.g., "en" for English).
- The additional_languages key lists the codes of other languages your assistant supports.
With these settings, your assistant will default to its primary language but can recognize and respond in all configured languages. You can further translate your assistant’s content. For more details, refer to our Translating Your Assistant guide.
Here’s an example of an assistant that uses English as its default language while also supporting Italian, German, and French:
config.yml
# ...
language: "en"# Default language: English
additional_languages:
- "it"# Italian
- "de"# German
- "fr"# French
# ...
You can use any valid language or locale-specific code following the BCP 47 standard:
- Basic language codes: e.g., "en", "de", "it".
- Locale-specific codes: e.g., "en-US", "fr-CA", "de-CH".
- Custom language codes: e.g., "x-en-formal".
Make sure all language codes adhere strictly to this format to avoid unexpected validation errors.
Rasa adheres to the BCP 47 standard for language codes. This ensures compatibility with platforms such as Twilio Voice, Genesys Cloud, and Amazon Connect.
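For instance, a locale-specific setup might look like the following sketch; the particular codes chosen here are only illustrative:
config.yml
# ...
language: "en-US" # Default: US English
additional_languages:
  - "fr-CA" # Canadian French
  - "x-en-formal" # Custom code for a formal English variant
# ...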
Pipeline
The pipeline key lists the components which will be used to process and understand the messages that end users send to your assistant.
In a CALM assistant, the output of your components pipeline is a list of commands.
The main component in your pipeline is the LLMCommandGenerator.
Here is what an example configuration looks like:
pipeline:
  - name: CompactLLMCommandGenerator
    llm:
      model_group: openai_llm
    flow_retrieval:
      embeddings:
        model_group: openai_embeddings
    user_input:
      max_characters: 420

model_groups:
  - id: openai_llm
    models:
      - model: "gpt-4o-2024-11-20"
        provider: "openai"
        timeout: 7
        temperature: 0.0
  - id: openai_embeddings
    models:
      - model: "text-embedding-3-large"
        provider: "openai"
The full set of configurable parameters is listed here. All components that make use of LLMs share common configuration parameters, which are listed here.
Policies
The policies key lists the dialogue policies your assistant will use to progress the conversation.
policies:
- name: rasa.core.policies.flow_policy.FlowPolicy
The FlowPolicy currently doesn't have any additional configuration parameters.
Silence Timeout Handling
Silence timeouts help your assistant handle situations where the user doesn’t respond. For now, this setting only works with voice-stream channels, such as:
- Twilio Media Streams
- Browser Audio
- Genesys
- Jambonz Stream
- Audiocodes Stream
There are two types of timeouts you can configure.
Global Silence Timeout
You can set a default silence timeout across your assistant by adding this to your endpoints.yml:
interaction_handling:
  global_silence_timeout: 7
This means the assistant will wait 7 seconds (or your configured value) for a user reply before treating it as silence and triggering fallback logic.
Local (Per-Step) Silence Timeout
You can override the global value for specific Collect steps:
steps:
  - collect:
      name: ask_email
      silence_timeout: 10
This allows you to fine-tune the timing for specific questions. For example, you may want to:
- Wait longer on more complex or sensitive questions (e.g., "Can you describe your issue?")
- Use shorter timeouts for quick prompts (e.g., yes/no questions)
Tailoring silence handling this way improves the conversational experience.
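As an illustrative sketch (the collect steps ask_issue_description and ask_confirmation are hypothetical names), you might give an open-ended question a longer timeout and a yes/no prompt a shorter one:
steps:
  - collect:
      name: ask_issue_description
      silence_timeout: 15 # open-ended question: allow the user more time
  - collect:
      name: ask_confirmation
      silence_timeout: 5 # quick yes/no prompt: a shorter wait is enough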
Enabling/Disabling Silence Timeouts
If you want to disable silence detection so it never triggers during a conversation, you can set the timeout to a very high value. For example, to disable it globally:
interaction_handling:
  global_silence_timeout: 70000
Use this approach if you want to avoid fallback interruptions but still need a valid numeric value for configuration or platform compatibility.
Customizing the Assistant's Response to Silence
When the silence timeout is reached, the assistant triggers pattern_user_silence. You can customize how your assistant responds to silence by modifying this pattern.
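As a rough sketch of such an override, assuming the pattern can be redefined by declaring a flow with the same name in your flow files and that utter_ask_still_there is a hypothetical response defined in your domain, it could look along these lines:
flows:
  pattern_user_silence:
    description: Respond when the user stays silent past the configured timeout
    steps:
      - action: utter_ask_still_there # hypothetical response asking whether the user is still there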