NLU Command Adapter

How the NLUCommandAdapter Works

The NLUCommandAdapter uses the classic intent-based approach to start flows: it relies on the intent predicted by an intent classifier. It takes the predicted intent and tries to find a flow with a corresponding NLU trigger defined. If a flow has an NLU trigger matching the predicted intent and the prediction confidence exceeds the threshold defined in that NLU trigger, the NLUCommandAdapter returns a StartFlow command to begin the corresponding flow.
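For illustration, a flow with an NLU trigger might be defined along these lines. This is only a sketch: the flow name transfer_money, the collected slots, and the confidence threshold of 0.8 are placeholder values, and the exact trigger syntax should be checked against the flow reference for your version.

flows.yml
flows:
  transfer_money:
    description: Transfer money to another account.
    nlu_trigger:
      # Start this flow when the classifier predicts the transfer_money
      # intent with confidence above the given threshold.
      - intent:
          name: transfer_money
          confidence_threshold: 0.8
    steps:
      - collect: recipient
      - collect: amount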

Using the NLUCommandAdapter

To use this component in your assistant, add the NLUCommandAdapter to your NLU pipeline in the config.yml file. You also need to have an intent classifier listed in your NLU pipeline. Read more about the config.yml file here.

config.yml
pipeline:
# - ...
- name: NLUCommandAdapter
# - ...
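As a sketch, a pipeline that pairs the NLUCommandAdapter with an intent classifier and an LLM-based command generator could look like the following. The specific tokenizer, featurizer, and classifier settings shown here are illustrative, not prescriptive.

config.yml
pipeline:
# NLU components that produce the intent prediction
- name: WhitespaceTokenizer
- name: CountVectorsFeaturizer
- name: DIETClassifier
  epochs: 100
# Checks the predicted intent against flows with NLU triggers
- name: NLUCommandAdapter
# Handles messages via the LLM as usual in CALM
- name: LLMCommandGenerator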

When to use the NLUCommandAdapter

We recommend using the NLUCommandAdapter in two scenarios:

  • You want to use NLU data containing intents and examples along with the CALM paradigm. With the NLUCommandAdapter you can start a flow based on a predicted intent, provided you already have a solid intent classifier in place. Once the flow is started, the business logic is executed as usual in the CALM paradigm, with commands predicted by the LLMCommandGenerator and policies predicting the next best action.

  • You want to minimize costs by avoiding an API call to the LLM on every message. Unlike the LLMCommandGenerator, the NLUCommandAdapter does not make any API calls to an LLM, so using it saves some costs. Make sure you have a solid intent classifier in place when using the NLUCommandAdapter; otherwise, incorrect flows will be started.

    New in 3.12

    If you are using an LLM-based command generator alongside the NLUCommandAdapter in the config pipeline, note that by default both the LLM-based command generator and the NLUCommandAdapter can now issue commands at any given conversation turn. To maintain the prior behaviour of minimizing the number of LLM invocations, set the boolean minimize_num_calls parameter to true in the LLM-based command generator configuration.
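
    As a minimal sketch, the setting can be applied as shown below. It assumes the LLMCommandGenerator is the LLM-based command generator in your pipeline; any other keys in your existing configuration stay as they are.

    config.yml
    pipeline:
    - name: NLUCommandAdapter
    - name: LLMCommandGenerator
      # Restore the prior behaviour of minimizing the number of LLM invocations
      minimize_num_calls: true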

Customization

To restrict the length of user messages, you can set user_input.max_characters (the default value is 420 characters).

config.yml
pipeline:
- name: NLUCommandAdapter
  user_input:
    max_characters: 420