Fine-tuning Recipe for Command Generator
New in 3.10
The fine-tuning recipe helps you fine-tune a small language model, e.g. Llama-3.1 8B, for the task of command generation and integrate it with your CALM assistant. The feature is available starting with version 3.10.0 as a beta feature. If you are already familiar with the concepts of the recipe, head over to the user guide to follow the exact steps needed to try the recipe.
CALM is LLM-agnostic, which means that when you start building your CALM assistant, you can use an off-the-shelf powerful LLM like GPT-4 via the OpenAI / Azure OpenAI platform. This is a great way to bootstrap your CALM assistant.
However, as the assistant is scaled up to more use cases and higher traffic, it can run into the following issues:
- Response times can be high, degrading the experience of the end user talking to the assistant.
- Relying on third-party LLM providers can mean having to adhere to the rate limits they impose, resulting in some user messages not receiving a reply.
- Some of these powerful LLMs can be quite costly at scale.
The fine-tuning recipe helps you fine-tune a small language model, e.g. Llama-3.1 8B, for the task of command generation and integrate it with your CALM assistant. Doing so can mitigate the issues around response times and LLM availability by a huge margin, and it lowers the runtime costs of the assistant as well.
This page provides a conceptual understanding of how the recipe works under the hood. You can refer to the user guide to follow the exact steps needed to try the recipe.
Conceptual Overview
The recipe semi-automates the following steps in order to produce a fine-tuned LLM:
- Annotate commands for each user step for every sample conversation available.
- Generate synthetic data using an LLM to create new conversations by rephrasing every user step.
- Construct a fine-tuning dataset by aggregating the prompt and commands of every user step across all generated conversations.
- Fine-tune an LLM on the fine-tuning dataset.
We explain each of the steps in more detail in the following sections.
Preparation
The feature assumes that you already have a CALM assistant built with the SingleStepLLMCommandGenerator as the command generator, using a strong LLM like gpt-4, and E2E tests written for the same assistant.
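For reference, a minimal sketch of such a setup in `config.yml`; the exact keys depend on your Rasa version and LLM provider:

```yaml
pipeline:
  - name: SingleStepLLMCommandGenerator
    llm:
      provider: openai
      model: gpt-4   # a strong LLM used while building and annotating
```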
To fine-tune your model effectively, it’s crucial to ensure that your system is comprehensively covered by E2E tests. These tests provide the data needed for fine-tuning. If your E2E tests do not sufficiently cover the assistant's functionality, the fine-tuned model may not perform well due to a lack of relevant training examples.
To address this, you can use an E2E test diagnostic tool, which is available as part of Rasa’s CLI. This tool helps you evaluate whether your E2E tests adequately cover the system's capabilities. It also identifies areas where existing tests may need to be updated or where new tests should be created before proceeding with fine-tuning.
Assessing test coverage for fine-tuning
When reviewing the results of the coverage report, there are two key areas to focus on to ensure your data is suitable for fine-tuning:
- Representation of All Commands: Ensure that all commands your assistant might generate are represented in your tests. If certain commands are not covered, the model may struggle to generate them correctly, having never "seen" these scenarios during training. This can be evaluated by inspecting the command coverage histograms.
- Demonstration of Desired Skills: Ensure that the skills you want your bot to demonstrate are well represented in the tests. This ensures the model learns from a variety of examples and scenarios, increasing its robustness and reliability. This can be evaluated by inspecting the flow coverage report.
By carefully analyzing and expanding your test coverage, you can better prepare your model for fine-tuning, resulting in improved performance and a more reliable assistant.
Command Annotation
important
If an E2E test fails on your assistant, it will be ignored by the command annotation module and subsequently by all other steps of the recipe. Hence, please ensure that the assistant passes the input E2E tests. We also recommend using the E2E coverage analysis tool to understand the coverage of the passing tests against the flows of your assistant.
As the first step of the recipe, the command annotator module runs the E2E tests through the CALM assistant and extracts the commands predicted by the SingleStepLLMCommandGenerator at every user step. The module runs as part of the rasa llm finetune prepare-data CLI command and augments each E2E test with the commands the LLM should predict at every user step. The output of this step converts each E2E test into a conversation that looks like this:
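For illustration, here is a hypothetical annotated test for the car-rental example used later on this page; the flow name, slot names, and bot utterances are invented for this sketch:

```yaml
- user: I'd like to book a car
  commands:
    - StartFlow(book_car)
- utter: utter_ask_destination
- user: to Basel
  commands:
    - SetSlot(destination, Basel)
```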
Only user steps that the SingleStepLLMCommandGenerator processes are annotated with commands and will end up in the final training dataset for fine-tuning. For example, if you bypass the SingleStepLLMCommandGenerator by using buttons that issue set slot commands, the user step will not be annotated.
Synthetic data generation
After the user steps of each conversation are annotated with commands, the synthetic data generation module creates a number of rephrasings for each annotated user step and validates whether each rephrased user step produces the same set of commands as the original user step in the corresponding conversation. Only the rephrased user steps that pass this validation make it into the fine-tuning dataset.
Note: User utterances that come from buttons, e.g. when the user clicked a button instead of typing a response, are skipped by the synthetic data generator and not rephrased.
The conversation with its failed and passing rephrased user steps looks like this, assuming we produced 3 rephrasings per user step:
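Illustratively (the exact representation in the generated data may differ), a single annotated user step with its rephrasings could look like this, with one of the three rephrasings failing validation:

```yaml
- user: I'd like to book a car
  commands:
    - StartFlow(book_car)
  passing_rephrasings:
    - I need to reserve a car.
    - Could I arrange for a car rental?
  failing_rephrasings:
    - Sort me out with some wheels   # yielded different commands, so it is dropped
```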
Rephraser LLM
The Rephraser LLM uses, by default, gpt-4o-mini to create 10 paraphrases of a user step.
The rephraser uses the following prompt to create the rephrasings:
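The exact template ships with Rasa; the following is only a simplified illustration of the kind of instruction it contains, not the verbatim prompt (the placeholder names are invented):

```text
Objective: Given a conversation between a user and an AI assistant,
rephrase the last user message in {{ number_of_rephrasings }} different ways
without changing its meaning or intent.

Conversation:
{{ conversation }}

Rephrased user messages:
```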
If you want to modify the prompt or use a different LLM for the Rephraser LLM, you can specify a custom config via the argument --rephrase-config <path-to-config-file> on the CLI command rasa llm finetune prepare-data.
The default config looks like this:
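Illustratively, the config specifies the LLM to use for rephrasing, along these lines (keys are indicative; consult the user guide for the exact schema):

```yaml
llm:
  provider: openai
  model: gpt-4o-mini
```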
You can specify the number of rephrasings per user step by adding the flag --num-rephrases <number> on the CLI command rasa llm finetune prepare-data.
If you set num-rephrases to 0, the synthetic data generator will be skipped. As the synthetic data generator adds linguistic diversity to the dataset, it is recommended to use at least a couple of rephrases. Our internal experiments showed that adding rephrases to the dataset increases the performance of the fine-tuned model.
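For example, to prepare data with five rephrasings per user step and a custom rephraser config (the paths shown are illustrative):

```bash
rasa llm finetune prepare-data \
  --num-rephrases 5 \
  --rephrase-config config/rephrase_config.yml \
  e2e_tests/
```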
Validation of rephrased user steps
To validate the rephrased user steps we take the prompt of the original user step and update it, i.e. we replace the original user utterance with the rephrased one. Then the prompt is sent to the same LLM that was used to annotate the conversation. If the response of the LLM after parsing and processing matches the response of the original user step, the rephrased user utterance passes the test and is added to the synthetic conversation dataset for fine-tuning.
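In pseudocode, the validation loop looks roughly like this; the function and attribute names are illustrative stand-ins for Rasa's internals, not a public API:

```python
# Illustrative sketch of the rephrase validation step; `annotation_llm`,
# `parse_commands`, and the step attributes are hypothetical.

def validate_rephrasings(step, annotation_llm):
    passing = []
    for rephrase in step.rephrasings:
        # Rebuild the original prompt, swapping in the rephrased utterance.
        prompt = step.prompt.replace(step.original_utterance, rephrase)
        # Ask the same LLM that was used to annotate the conversation.
        response = annotation_llm.complete(prompt)
        # Keep the rephrase only if it yields the same set of commands.
        if parse_commands(response) == step.commands:
            passing.append(rephrase)
    return passing
```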
Fine-tuning dataset generator
The fine-tuning dataset generator takes the passing rephrasings for each user step across all sample conversations and creates new conversations out of them. Each user step in a new conversation is then converted into a data point for fine-tuning. Each data point contains the prompt, which includes the conversation history and the current user message (original or rephrased), and the commands the LLM should produce for that prompt. Afterwards, the data points are split into a training and a validation dataset that can then be used to fine-tune a base LLM.
Every data point is added to the final .jsonl file (train / val) and looks like this:
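In the instruction data format, each line pairs a prompt with the expected commands, roughly like this (abbreviated and illustrative; field contents and exact names may differ):

```json
{"prompt": "...conversation history...\nUSER: Could I arrange for a car rental?", "completion": "StartFlow(book_car)"}
```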
Creating new conversations
Let's take a look at an example to understand how we construct the new conversations. Take this original conversation:
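(The user messages below match the table that follows; the flow name, commands, and bot utterances are hypothetical.)

```yaml
- user: I'd like to book a car
  commands:
    - StartFlow(book_car)
- utter: utter_ask_destination
- user: to Basel
  commands:
    - SetSlot(destination, Basel)
- utter: utter_ask_rental_period
- user: from may 14th to the 17th
  commands:
    - SetSlot(start_date, may 14th)
    - SetSlot(end_date, may 17th)
- utter: utter_ask_car_class
- user: I'll take the luxury one! looks nice
  commands:
    - SetSlot(car_class, luxury)
```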
and the following rephrasings per user step:
| original user message | passing rephrase 1 | passing rephrase 2 | passing rephrase 3 |
|---|---|---|---|
| I'd like to book a car | I need to reserve a car. | Could I arrange for a car rental? | I'm interested in hiring a car. |
| to Basel | The destination is Basel. | I'd like to go to Basel. | |
| from may 14th to the 17th | The rental period will be May 14th to 17th. | I need the car from May 14th to May 17th. | I'll require the vehicle from the 14th to the 17th of May. |
| I'll take the luxury one! looks nice | I'd like to go with the luxury option; it looks appealing. | I'll choose the luxury model; it seems nice. | I'm opting for the luxury car; it looks great. |
To construct a new conversation, we combine passing rephrases at the same index position to build a new conversation. If no passing rephrase exists for a particular user step at a specific index, we reset the index for the user step and use the first passing rephrase for that user step again.
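A minimal sketch of that combination logic (illustrative, not Rasa's actual implementation):

```python
# Assemble new conversations from the passing rephrasings of each user step.

def build_conversations(steps):
    """steps: one list of passing rephrasings per user step, in order."""
    # One new conversation per rephrasing "round", up to the largest count.
    num_conversations = max(len(rephrasings) for rephrasings in steps)
    return [
        # If a step has no rephrase at this index, wrap around and reuse
        # its rephrasings from the first one again.
        [rephrasings[i % len(rephrasings)] for rephrasings in steps]
        for i in range(num_conversations)
    ]

# For the table above: steps 1, 3, and 4 have three passing rephrasings and
# step 2 has two, so three conversations are built and step 2 reuses
# "The destination is Basel." in the third one.
```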
So, the final conversations would look like this:
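With three rephrasings requested and the second user step wrapping back to its first passing rephrase, the result is three new conversations, shown here as the sequence of user messages (the bot steps stay unchanged):

1. "I need to reserve a car." / "The destination is Basel." / "The rental period will be May 14th to 17th." / "I'd like to go with the luxury option; it looks appealing."
2. "Could I arrange for a car rental?" / "I'd like to go to Basel." / "I need the car from May 14th to May 17th." / "I'll choose the luxury model; it seems nice."
3. "I'm interested in hiring a car." / "The destination is Basel." / "I'll require the vehicle from the 14th to the 17th of May." / "I'm opting for the luxury car; it looks great."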
Split data into training and validation
By default, we take 80% of the fine-tuning data for the training dataset; the remaining data points go to the validation set. During that process we ensure that all commands present in the fine-tuning dataset end up at least once in the training dataset, so that the fine-tuned model sees all available commands during training.
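Conceptually, such a command-aware split works like this (an illustrative sketch, not Rasa's implementation; the data point schema is assumed):

```python
import random

def split_dataset(data_points, train_frac=0.8, seed=42):
    """Split into train/val while keeping every command in the training set.

    data_points: list of dicts, each with a "commands" list (assumed schema).
    """
    random.Random(seed).shuffle(data_points)
    n_train = int(len(data_points) * train_frac)
    train, val = data_points[:n_train], data_points[n_train:]

    # Move a validation point into train if it carries a command that the
    # training set has not seen yet.
    seen = {c for dp in train for c in dp["commands"]}
    for dp in list(val):
        if any(c not in seen for c in dp["commands"]):
            val.remove(dp)
            train.append(dp)
            seen.update(dp["commands"])
    return train, val
```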
You can update the fraction of data that goes into the training dataset by setting the flag --train-frac <float-number> on the CLI command rasa llm finetune prepare-data.
When you fine-tune a base model, it expects the data to be in a specific format. By default, the training and validation datasets are in the instruction data format. If you want to use the conversational data format instead, set the flag --output-format conversational on the CLI command rasa llm finetune prepare-data.
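For example, to use a 90/10 split and the conversational format (the test path is illustrative):

```bash
rasa llm finetune prepare-data \
  --train-frac 0.9 \
  --output-format conversational \
  e2e_tests/
```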
Model Fine-tuning
Once you have the dataset prepared, the next step is to actually fine-tune a small enough open-source LLM to help it excel at the task of command generation. Specifically, parameter-efficient fine-tuning using LoRA is employed, with the input being the prompt prepared for every data point in the previous step and the output being the set of commands to be predicted by the LLM.
Rasa provides this example Python notebook as a reference for fine-tuning. It has been tested on GCP Vertex AI and AWS SageMaker, and it can be easily adapted to work on other cloud platforms. By default, it:
- Uses the Unsloth library, as it comes with many optimizations for memory and speed.
- Downloads a base model from the Hugging Face Hub. Using the Llama-3.1 8B Instruct model is recommended.
- Loads the base model in 8-bit using the bitsandbytes library for efficient memory usage.
- Provides default hyperparameters that have worked well in our internal experiments.
- Persists a chat template if the model does not already have one.
- Runs the fine-tuning and visualizes loss as the metric to monitor across the training and validation sets. When testing this step on an NVIDIA A100 with the default hyperparameters, fine-tuning with a training dataset of around 500 examples took around 12 minutes. Hence, this step is relatively cheap and quick to run.
- Allows persisting the fine-tuned model on the cloud.
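For orientation, here is a condensed sketch of the kind of code the notebook runs, using plain transformers + peft rather than Unsloth's API; the model name, hyperparameters, and file paths are assumptions, not the notebook's exact values:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = "meta-llama/Llama-3.1-8B-Instruct"  # recommended base model
tokenizer = AutoTokenizer.from_pretrained(base)

# Load the base model in 8-bit via bitsandbytes for efficient memory usage.
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters: only a small number of extra weights is trained.
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
)
model.print_trainable_parameters()

# Load the train/validation splits produced by `rasa llm finetune prepare-data`
# and run supervised fine-tuning with your preferred trainer, monitoring
# training and validation loss.
data = load_dataset(
    "json", data_files={"train": "train.jsonl", "validation": "val.jsonl"}
)
```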
note
CALM exclusively utilizes the chat completions endpoint of the model server, so it's essential that the model's tokenizer includes a chat template. Models lacking a chat template will not be compatible with CALM.
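You can verify this upfront with the model's tokenizer (the model name here is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
# CALM uses the chat completions endpoint, so a chat template must exist.
assert tokenizer.chat_template is not None, "base model lacks a chat template"
```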