Dialogue Understanding

Dialogue Understanding aims to understand how the end user of an AI assistant wants to progress the conversation.

New in 3.7

The Command Generator is part of Rasa's new Conversational AI with Language Models (CALM) approach and available starting with version 3.7.0.

The dialogue understanding module of CALM transforms the latest user message, together with the conversation context, into a set of commands which the assistant uses to execute the defined business logic. It does so with the help of Command Generators.

CommandGenerator

A Command Generator takes as input the latest user message along with the conversation context and transforms it into a set of commands. Currently, there are three command generators available:

  1. SingleStepLLMCommandGenerator
  2. MultiStepLLMCommandGenerator
  3. NLUCommandAdapter

To use one of the LLM-based command generators, follow the instructions on the LLM-based command generators page.

It is also possible to fine-tune an LLM within CALM and use it as a command generator in order to mitigate issues of latency, reliability, and strategic dependency. Follow the steps of the fine-tuning recipe to create and use a fine-tuned model.

Using NLUCommandAdapter and LLM-based Command Generators

If you want to use both LLMs and a classic NLU pipeline to predict commands, you can do so by adding the NLUCommandAdapter before one of the LLM-based command generators, e.g. the SingleStepLLMCommandGenerator:

config.yml
pipeline:
# - ...
- name: NLUCommandAdapter
- name: SingleStepLLMCommandGenerator
# - ...

The components are executed one after another. If the first component (i.e. NLUCommandAdapter) successfully predicts a StartFlow command, SingleStepLLMCommandGenerator will be skipped (i.e. no calls to the LLM are made).

In general, if the first Command Generator predicts a command, all other Command Generators that come next in the pipeline are skipped. Keep that in mind when adding a custom Command Generator to the pipeline.
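The skip behavior described above can be sketched in plain Python. This is purely illustrative and is not Rasa's internal API; the function names and the dictionary-based command representation are assumptions made for the example:

```python
# Illustrative sketch of a command generator pipeline: the first
# generator that predicts any commands short-circuits the rest.
from typing import Callable, Dict, List

Command = Dict[str, str]
Generator = Callable[[str], List[Command]]


def nlu_command_adapter(message: str) -> List[Command]:
    # Stand-in for the classic NLU pipeline: here it only recognizes
    # one hypothetical intent mapped to a flow.
    if "transfer" in message:
        return [{"command": "start flow", "flow": "transfer_money"}]
    return []


def llm_command_generator(message: str) -> List[Command]:
    # Stand-in for an LLM call; it is never invoked when the adapter
    # above already produced commands.
    return [{"command": "clarify"}]


def run_pipeline(generators: List[Generator], message: str) -> List[Command]:
    for generate in generators:
        commands = generate(message)
        if commands:
            # First generator to predict commands wins; all later
            # generators in the pipeline are skipped.
            return commands
    return []


pipeline = [nlu_command_adapter, llm_command_generator]
```

With this pipeline, a message matched by the NLU stand-in never reaches the (expensive) LLM stand-in, mirroring how a StartFlow prediction by the NLUCommandAdapter skips the SingleStepLLMCommandGenerator.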

Command reference

As its name indicates, the CommandGenerator generates "commands" that are then processed internally to trigger operations on the current conversation. Below is a reference of all supported commands, each indicating that the AI assistant should:

Start Flow

Start a new flow.

Cancel Flow

Cancel the current flow. It powers the Conversation Repair's Cancellation use case.

Skip Question

Skip the current collect step in the flow when the user's message intends to bypass it. It powers the Conversation Repair's Skipping collect steps use case.

Set Slot

Set a slot to a given value.

Correct Slots

Change the value of one or more previously set slots. It powers the Conversation Repair's Correction use case.

Clarify

Ask for clarification. It powers the Conversation Repair's Clarification use case.

Chit-Chat Answer

Respond with answers in a chitchat style, whether they are predefined or free-form. It powers the Conversation Repair's Chitchat use case.

Knowledge Answer

Reply with a free-form, knowledge-based answer. It works together with the Enterprise Search policy.

Human Handoff

Hand off the conversation to a human.

Error

This command indicates the AI assistant failed to handle the dialogue due to an internal error.

Cannot Handle

This command indicates that the command generator failed to generate any commands. It powers the Conversation Repair's Cannot handle use case. By default, this command is not included in the prompts provided to an LLM in SingleStepLLMCommandGenerator, but it is included in MultiStepLLMCommandGenerator.

Change Flow

This command indicates that a change of flow was requested by the command generator. It is predicted exclusively by the MultiStepLLMCommandGenerator and is used internally.
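Taken together, a single user message can map to more than one of the commands above. The sketch below is purely illustrative; the dictionary keys are assumptions made for the example, not Rasa's internal serialization:

```python
# Hypothetical command list for a user who, mid "transfer_money" flow,
# says: "Send 50 dollars - oh, and I meant John, not Joan."
# The key names below are illustrative assumptions.
commands = [
    {"command": "set slot", "name": "amount", "value": 50},
    {
        "command": "correct slots",
        "corrected_slots": [{"name": "recipient", "value": "John"}],
    },
]

# Downstream, the assistant processes the commands in order to update
# the conversation state (here we just extract their names).
command_names = [c["command"] for c in commands]
```

One turn thus sets a new slot and corrects an already-filled one, which is why command generators return a set of commands rather than a single prediction.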