Command Generator

At the heart of CALM’s dialogue understanding is a component called the Command Generator. Whenever a new user message arrives, this component takes the entire conversation context—such as active flows on the dialogue stack, previously filled slots, relevant patterns, and the user’s conversation history—and generates a list of high-level commands.

Commands can:

  • Begin a new flow.
  • Terminate the current flow.
  • Skip the question currently being asked, bypassing the current collection step.
  • Assign a specified value to a slot.
  • Request clarification.
  • Provide a chitchat-style response, whether predefined or generated.
  • Deliver a free-form, knowledge-based response.
  • Transition the conversation to a human operator.
  • Signal an internal error in handling the dialogue.
  • Indicate a failure in generating any commands.

By framing user intent as commands rather than single-label “intents,” you can handle more complex scenarios—like when a user answers a question and requests a new flow simultaneously.
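
For example, suppose the assistant has just asked "How much would you like to transfer?" and the user replies "500 dollars, and can you also block my card?". The command generator might emit something like the following (an illustrative sketch; the amount slot and the block_card flow are hypothetical names, and the exact command syntax is defined by the built-in command set mentioned below):

    SetSlot(amount, 500)
    StartFlow(block_card)

The first command answers the pending question; the second queues up the additional flow the user requested.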

Tip

For more details on all command generator types, advanced configurations, and usage examples, see the LLM Command Generators reference.

What Is the SingleStepLLMCommandGenerator?

The SingleStepLLMCommandGenerator is the simplest LLM-based approach for converting user messages into commands. It operates on a single prompt that encapsulates:

  1. Conversation History: The full or partial conversation so far, including user and assistant messages.
  2. Active Flow and Slots: Which flow is currently on top of the dialogue stack and which slot (if any) is currently being asked for.
  3. Relevant Flows: A subset of the flows in your assistant that are likely relevant to the user’s request (handled automatically by CALM’s flow retrieval mechanism).
  4. Patterns / Repairs: Predefined flows or patterns that can “interrupt” if the user changes their mind, wants to cancel, or triggers some other conversation repair scenario.

This single in-context prompt is passed to the underlying LLM (e.g., GPT-4) whenever the assistant needs to interpret the user’s latest message. The LLM’s response is turned into a list of commands, which the system executes in a single step.
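
A minimal configuration enables the component in your pipeline and points it at a model (a sketch; the exact llm keys, such as model versus model_name, vary between Rasa versions, so check the reference for your release):

    config.yml
    pipeline:
    - name: SingleStepLLMCommandGenerator
      llm:
        model: gpt-4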

Flow Retrieval

By default, CALM does not include all possible flows in the LLM prompt. Instead, it matches the incoming user message to each flow and includes only the top matching flows in the prompt. This keeps the prompt size manageable (and your costs lower). If you do want to disable or tweak flow retrieval, or always include certain flows, you can adjust that in your assistant configuration. For more details, see the Flow Retrieval reference.
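
For example, you can switch retrieval off entirely so that every flow is included in the prompt (a sketch; verify the exact option names against your Rasa version):

    config.yml
    pipeline:
    - name: SingleStepLLMCommandGenerator
      flow_retrieval:
        active: false

Alternatively, marking an individual flow with always_include_in_prompt: true in its definition keeps that flow in the prompt regardless of the retrieval result (again, see the Flow Retrieval reference for the exact behavior).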

Customizing the Prompt Template

One of the main benefits of an LLM-based approach is in-context learning—that is, the ability to guide the model through instructions and context in the prompt. By default, the SingleStepLLMCommandGenerator uses a built-in prompt template that dynamically assembles relevant flows, the current conversation, and other context. However, you can override and customize this template to further tailor the model’s behavior.

When Should You Customize?

  1. Flow Descriptions Aren’t Enough

    Usually, you can steer the LLM by enriching your flows with clear, unambiguous descriptions and step-by-step instructions. But if you find the model still isn’t producing the commands you expect, or if you have domain-specific language the model often confuses, you may want to go further and rewrite the template.

  2. You Want Specific Formatting or Additional Examples

    Suppose you need to show the LLM a set of few-shot examples or domain-specific instructions that can’t be captured solely in the flow/slot descriptions. A custom prompt template gives you full control over how that context appears.
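
    For instance, a few-shot section can be written as static text directly in the template (an illustrative sketch; the flow names are hypothetical, and the example format should mirror whatever output format your template asks the model to produce):

    Here are example user messages and the commands they should map to:

    USER: I'd like to move some money into savings
    COMMANDS: StartFlow(transfer_money)

    USER: actually, forget it
    COMMANDS: CancelFlow()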

How to Customize

  1. Create a Jinja2 Template

    You’ll provide a custom .jinja2 file that contains the static text you want plus references to dynamic variables (like {{ current_flow }}, {{ flow_slots }}, etc.).
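
    For example, a stripped-down template might look like this (a minimal sketch built from the documented variables; a real template also needs to describe the available commands and the output format you expect from the model):

    Use the conversation below to decide which commands to issue.

    These are the flows that can be started:
    {% for flow in available_flows %}
    {{ flow.name }}: {{ flow.description }}
    {% endfor %}

    The current conversation:
    {{ current_conversation }}

    The latest user message: {{ user_message }}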

  2. Reference That File in Your config.yml

    Under SingleStepLLMCommandGenerator, set the prompt_template property:

    config.yml
    pipeline:
    - name: SingleStepLLMCommandGenerator
      prompt_template: prompts/my_custom_generator.jinja2
  3. Leverage Available Variables

    In your Jinja2 file, you have access to multiple variables, such as:

    Variable               Description
    current_conversation   A readable transcript of the conversation so far.
    user_message           The latest user message.
    available_flows        A list of all flows potentially relevant to this conversation.
    current_flow           The name of the currently active flow.
    flow_slots             The slots associated with the current flow (name, value, etc.).

    You can iterate over lists to print out details about flows or slots. For example:

    {% for flow in available_flows %}
    {{ flow.name }}: {{ flow.description }}
    {% endfor %}
  4. Make Your Template Clear and Consistent

    • If your slot descriptions are long or bullet-pointed, consider adjusting how you render them: use numbered lists or add separators so the LLM can easily distinguish a slot's name from its instructions (see the sketch after this list).
    • Keep the prompt in one language if your assistant is multilingual or works in a language other than English; smaller LLMs often do better when the entire prompt is consistently in a single language.
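
For instance, you could render each slot on its own numbered line (a sketch; it assumes each entry in flow_slots exposes a description attribute in addition to its name and value):

    {% for slot in flow_slots %}
    {{ loop.index }}. {{ slot.name }}: {{ slot.description }}
    {% endfor %}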

Reference: Find the complete list of variables and more advanced ways to structure your prompt in the Reference section on the command generator.

For more technical details on configuration parameters, advanced prompt tuning, or combining multiple Command Generators, head over to the Command Generators reference.

Important Note on Customizing the Command Set

While CALM’s Command Generator is designed to be flexible, the Rasa team actively tests and maintains a fixed set of built-in commands (e.g., StartFlow, CancelFlow, SetSlot) to ensure the highest level of performance and accuracy. If you override or replace these built-in commands:

  • Reduced Accuracy Guarantees

    We can’t guarantee that your assistant will maintain the same level of accuracy if you remove or rename these commands. Our tests and improvements assume that these commands exist in your system.

  • Potential Maintenance Overheads

    Custom commands require additional testing to ensure they behave consistently across different user inputs. You’ll need to invest in ongoing QA to match the quality of the maintained commands.

  • Possibility of Breaking Changes

    Future updates or improvements to CALM may assume the presence of these default commands. If you’ve drastically modified them, you may need to refactor your assistant to stay compatible.