Notice: This is unreleased documentation for the Rasa Documentation Main/Unreleased version. For the latest released documentation, see the latest version (3.x).
Customizing LLM-based Components
Rasa Labs access - New in 3.7.0b1
Rasa Labs features are experimental. We introduce experimental features to co-create with our customers. To find out more about how to participate in our Labs program, visit our Rasa Labs page.
We are continuously improving Rasa Labs features based on customer feedback. To benefit from the latest bug fixes and feature improvements, please install the latest pre-release using:
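The exact install command below is an assumption based on the beta versioning of the rasa_plus package (the docs reference the package but the command itself was not shown); check your Rasa Labs onboarding instructions for the exact package specifier:

```bash
pip install --upgrade --pre rasa-plus
```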
The LLM components can be extended and modified with custom versions. This allows you to tailor the behavior of the LLM components to your needs and experiment with different algorithms.
Customizing a component
The LLM components are implemented as a set of classes that can be extended and modified. The following example shows how to extend the LLMIntentClassifier component to add custom behavior.
For example, we can change the logic that selects the intent labels included in the prompt sent to the LLM. By default, only a selection of the available intents is included in the prompt. To include all available intents instead, extend the LLMIntentClassifier class and override the select_intent_examples method:
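A minimal sketch of such an override is shown below. The LLMIntentClassifier stand-in here is a self-contained stub so the example runs on its own; the real class lives in the rasa_plus package, and the method signature is an assumption — check the source code of your installed version for the actual one.

```python
from typing import Any, List

# Stand-in for the LLMIntentClassifier shipped in the rasa_plus package,
# so this sketch is self-contained. The method signature is an assumption.
class LLMIntentClassifier:
    def select_intent_examples(
        self, message: Any, few_shot_examples: List[dict], domain_intents: List[str]
    ) -> List[str]:
        # Default behavior: only intents that appear in the few-shot examples.
        return sorted({example["intent"] for example in few_shot_examples})


class CustomLLMIntentClassifier(LLMIntentClassifier):
    """Variant that includes every intent from the domain in the prompt."""

    def select_intent_examples(
        self, message: Any, few_shot_examples: List[dict], domain_intents: List[str]
    ) -> List[str]:
        # Return all intents defined in the domain instead of a selection.
        return list(domain_intents)
```

In a real project you would import the base class from rasa_plus instead of defining the stub, and only the subclass would remain.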
The custom component can then be used in the Rasa configuration file:
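A minimal configuration sketch is shown below; the module path addons.custom_intent_classifier is hypothetical and should point at wherever your class actually lives:

```yaml
pipeline:
  # ... other pipeline components ...
  - name: addons.custom_intent_classifier.CustomLLMIntentClassifier
```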
To reference a component in the Rasa configuration file, you need to use the full name of the component class, which is <module>.<class>.
All components are well documented in their source code. The code can be found in your local installation of the rasa_plus Python package.
Common functions to be overridden
Below is a list of functions that can be overridden to customize the LLM components:
LLMIntentClassifier
select_intent_examples
Selects the intent examples to use for the LLM prompt. The selected intent labels are included in the generation prompt. By default, only the intent labels that are used in the few shot examples are included in the prompt.
closest_intent_from_training_data
The LLM generates an intent label which might not always be part of the domain. This function can be used to map the generated intent label to an intent label that is part of the domain.
The default implementation embeds the generated intent label and all intent labels from the domain and returns the closest domain intent label.
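The embed-and-compare logic can be sketched as follows. The function names, the pluggable embed callable, and the toy character-frequency embedding are all illustrative stand-ins, not the actual rasa_plus implementation, which uses a real text embedder.

```python
import math
from typing import Callable, List, Sequence


def _cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def closest_intent(
    generated: str, domain_intents: List[str], embed: Callable[[str], List[float]]
) -> str:
    """Return the domain intent whose embedding is closest to the generated label."""
    generated_vec = embed(generated)
    return max(domain_intents, key=lambda intent: _cosine(embed(intent), generated_vec))


def toy_embed(text: str) -> List[float]:
    # Character-frequency vector as a stand-in for a real text embedding model.
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz_"]
```

With the toy embedding, a generated label like "greeting" maps back to the domain intent "greet" because their character distributions are closest.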
select_few_shot_examples
Selects the NLU training examples that are included in the LLM prompt. The selected examples are included in the prompt to help the LLM generate the correct intent. By default, the training examples most similar to the message being classified are selected: the incoming message and all training examples are embedded, and a similarity search picks the closest ones.
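The top-k selection step can be sketched as below. The signature, the example dict shape, and the pluggable embed callable are assumptions for illustration; the real component uses its own embedder and data structures.

```python
import math
from typing import Callable, List


def select_few_shot_examples(
    message: str,
    training_examples: List[dict],
    embed: Callable[[str], List[float]],
    k: int = 5,
) -> List[dict]:
    """Return the k training examples most similar to the incoming message."""

    def cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    message_vec = embed(message)
    # Rank all training examples by similarity to the message, keep the top k.
    ranked = sorted(
        training_examples,
        key=lambda example: cosine(embed(example["text"]), message_vec),
        reverse=True,
    )
    return ranked[:k]
```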
LLMResponseRephraser
rephrase
Rephrases the response generated by the bot. The default implementation prompts an LLM to generate a new response based on the incoming message and the original response. The original response is then replaced with the rephrased one.
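The prompt-assembly step can be sketched as below. The prompt wording and function name are illustrative, not the actual template used by LLMResponseRephraser.

```python
def build_rephrase_prompt(user_message: str, draft_response: str) -> str:
    """Assemble a prompt asking an LLM to rephrase a canned response.

    Illustrative only: the real component uses its own prompt template.
    """
    return (
        "Rephrase the assistant response so it fits the conversation naturally.\n"
        f"User message: {user_message}\n"
        f"Draft response: {draft_response}\n"
        "Rephrased response:"
    )
```

The string returned here would be sent to the LLM, and the LLM's completion would replace the original response.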
IntentlessPolicy
select_response_examples
Samples responses that fit the current conversation. The default implementation samples responses from the domain based on the conversation history: the history is embedded and the most similar responses are selected.
select_few_shot_conversations
Samples conversations from the training data that fit the current conversation. The selection is based on the conversation history: the history is embedded and the most similar conversations are selected.