June 11th, 2018
How to handle multiple intents per input using Rasa NLU TensorFlow pipeline
With the release of Rasa NLU 0.12, we introduced a new TensorFlow-based Rasa NLU pipeline, and we are stoked to see developers getting excited about it - big thanks to everyone who has already tried it and shared their feedback! In short, the new pipeline tackles two main problems which chatbot developers face:
- How do you go beyond the limitations of pre-trained embeddings?
- People often say more than one thing in a message, so how do you build a chatbot which can understand multiple intents?
In this post, we are going to take a comprehensive look at how the TensorFlow-based pipeline can help us solve the second problem: multiple intents. The result of this tutorial will be a very simple chatbot that can recommend meetups to attend in Berlin.
If you want to follow along, here you can find the code and the datasets used in this tutorial.
What is the new TensorFlow pipeline about?
A processing pipeline is a building block of any Rasa NLU model. The pipeline defines how user inputs are parsed, tokenized and how the features are extracted. The components of the pipeline are important because they have a direct impact on how the NLU model performs. In contrast to regular Rasa NLU pipelines, the new TensorFlow pipeline makes it possible to train models which can assign two or more intents to a single input message. For example, when a user says "Yes, make a booking. Can you also book me a taxi from the airport to the hotel?" there are two intentions - confirmation that the booking should be made and an additional request to book a taxi. We can model such inputs by assigning them multi-intents, which in the example above would be confirm+book_taxi.
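The intent tokenization idea can be illustrated with a few lines of plain Python. This is only a sketch of the concept, not Rasa's actual implementation:

```python
# Sketch of the idea behind intent tokenization: a multi-intent label
# is split on the split symbol into its component intents.
def split_intent(label, split_symbol="+"):
    """Split a (possibly multi-) intent label into its tokens."""
    return label.split(split_symbol)

print(split_intent("confirm+book_taxi"))  # ['confirm', 'book_taxi']
print(split_intent("greet"))              # ['greet']
```

A single-intent label simply yields one token, so the same logic covers both cases.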
Let's see how it's all done in practice!
Creating a meetup chatbot
I have recently moved to Berlin and I think that joining meetups is the best way to meet new people in the area. This is why for this tutorial I decided to build a little chatbot which can recommend cool meetups to attend in Berlin. A little disclaimer - for reproducibility reasons I am not going to use any fancy APIs, but I would like to encourage you to play around with the code, implement custom actions, connect to real-time meetup, location or other APIs and make this chatbot a lot more fun!
Defining the pipeline
Let's start with what this tutorial is all about - the pipeline. The code block below contains the pipeline configuration which I am going to use for my chatbot (check the config.yml file). It consists of a processing component, CountVectorsFeaturizer, which defines how model features are extracted (you can read more about its parameters here), and one more component, EmbeddingIntentClassifier, which states that we are going to use TensorFlow embeddings for intent classification. By setting the flag intent_tokenization_flag: true, we tell the model that we want to split intent labels into tokens, which means that the model will know which intents are multi-intents, and with intent_split_symbol we define which character should be used to make the split, which in this case is a plus sign (+).
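For reference, a minimal config.yml for this pipeline might look roughly as follows. The component names are the Rasa NLU 0.12-era registry names, under which CountVectorsFeaturizer and EmbeddingIntentClassifier were registered:

```yaml
language: "en"

pipeline:
- name: "tokenizer_whitespace"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
  intent_tokenization_flag: true
  intent_split_symbol: "+"
```

The last two settings are the ones that enable multi-intent handling; everything else is a standard TensorFlow-pipeline setup.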
NLU training data
What does the training data look like for models using the TensorFlow pipeline? Not that different from the regular approach - the only addition is that we have to add examples of multi-intent inputs and assign them the corresponding multi-intent labels. Below is a snippet of the training data which I am going to use to train the NLU model (check the data/nlu_data.md file). As you can see, I have some regular examples with one intent per input as well as examples which have multiple intents assigned. For example, the input "Can you suggest any cool meetups in Berlin area?" has only one intention - the user asks for meetup recommendations - which is why it has a single intent assigned to it. On the flip side, the input "Sounds good. Do you know how I could get there from home?" means two things - confirmation that the user wants to join the meetup and a query about transport to the venue - and this is why such examples have a combined multi-intent label assigned to them.
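A sketch of what such Markdown training data could look like (the intent names follow the ones used later in this post; the exact examples in data/nlu_data.md may differ):

```md
## intent:meetup_search
- Can you suggest any cool meetups in Berlin area?
- I am new to Berlin and I would like to join some meetups

## intent:affirm+ask_transport
- Sounds good. Do you know how I could get there from home?
- Yes. Can you give me suggestions on how to get there?
```

Note that a multi-intent label like affirm+ask_transport is written as a single intent heading; the intent_split_symbol in the pipeline configuration is what tells the model to treat it as two tokens.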
Training and testing the NLU model
Once the NLU data is ready, we can train the model by executing the following command:
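With the Rasa NLU 0.12-era CLI, the training command looked roughly like this (flag names are from that release and may differ in later versions):

```shell
python -m rasa_nlu.train \
  --config config.yml \
  --data data/nlu_data.md \
  --path models \
  --project current
```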
It calls the Rasa NLU train function, provides pipeline configuration and data files, and prints out the training results.
When the model is trained we can test its performance on various inputs. To do that run:
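One simple way to do this is to load the trained model with the Interpreter class from the Rasa NLU 0.12-era Python API. The model path shown here is an assumption and depends on where the training step saved the model:

```python
from rasa_nlu.model import Interpreter

# Load the trained model; the path is an assumption and depends
# on the --path and --project values used during training.
interpreter = Interpreter.load("models/current/nlu")
print(interpreter.parse("Yes. Can you give me suggestions on how to get there?"))
```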
Below we can see the model output for the input message "Yes. Can you give me suggestions on how to get there?". As we can see, the input was classified as the multi-intent affirm+ask_transport, which, based on the training data, is what we would expect for this example.
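The parse result is a JSON object roughly of this shape (the confidence values below are purely illustrative):

```json
{
  "text": "Yes. Can you give me suggestions on how to get there?",
  "intent": {
    "name": "affirm+ask_transport",
    "confidence": 0.95
  },
  "intent_ranking": [
    {"name": "affirm+ask_transport", "confidence": 0.95},
    {"name": "meetup_search", "confidence": 0.02}
  ]
}
```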
Defining the domain and training data
To demonstrate how all the pieces fit together, let's build a dialogue management model with a few templates as responses (as mentioned before, for the sake of reproducibility and simplicity we are not going to use any real-time APIs or databases). The domain file contains the templates which the dialogue management model will use to respond to the user (check the domain file in the project repository).
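A sketch of the relevant part of such a domain file, in the Rasa Core format of the time; the template wording is taken from the conversation shown later in this post, and any names beyond the intents and templates mentioned in this post are assumptions:

```yaml
intents:
- greet
- meetup_search
- affirm+ask_transport
- thanks+goodbye

templates:
  utter_greet:
  - "Hey, how can I help you?"
  utter_goodbye:
  - "Talk to you later!"
  - "Goodbye. See you soon!"
```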
These templates are going to be used as responses to user inputs depending on how they are used in creating stories data. We are going to look into it in more detail in the next section.
Before moving on, I would like to point out that templates like
utter_goodbye have more than one possible response. Adding options like this is a great way to make a chatbot more interesting and prevent it from repeating the same answers in every conversation.
Generating the stories
As usual, to train a dialogue management model we need some stories. The new TensorFlow pipeline doesn't require any special format for stories data - we can use previously defined multiple or single intents and corresponding actions. In the table below you can find two very similar stories which I am going to use for my model - one with multiple intents and one with single intents (check the stories data in the project repository).
The first story has two multi-intents -
affirm+ask_transport which corresponds to a user saying "Yes, book me a spot at the meetup. Also, can you tell me how should I get to the venue?" and another multi-intent
thanks+goodbye which corresponds to a user saying "Thank you. Talk to you later". The second story represents a very similar conversation but it only uses single intents. Compared to the second story, the first one reflects a much more organic and human-like conversation.
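The first of these stories could be written in the Markdown stories format roughly like this. The story name and the action names other than utter_affirm_suggest_transport (which is mentioned below) are assumptions for illustration:

```md
## find_meetup_01
* greet
  - utter_greet
* meetup_search
  - utter_suggest_meetup
* affirm+ask_transport
  - utter_affirm_suggest_transport
* thanks+goodbye
  - utter_noworries
  - utter_goodbye
```

Note that a multi-intent appears in a story exactly like a single intent; the dialogue model simply learns which actions follow it.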
Another thing worth emphasizing is that there are many different ways in which stories with multi-intents can be written. The table below shows three different representations of the same conversation:
The story find_meetup_01 uses a special action
utter_affirm_suggest_transport as a response to a multi-intent
affirm+ask_transport. Alternatively, just like in find_meetup_03, we can write this story by using two separate templates, one of which is utter_suggest_transport; these templates can also be used as responses to single-intent inputs. One more important note about stories with multi-intents is that it is not necessary to execute an action for each multi-intent token. For example, the story find_meetup_03 has two actions as a response to the multi-intent thanks+goodbye; however, just like in the story find_meetup_04, it is totally ok to skip an action for one of the tokens.
Which approach is best depends heavily on the domain and the logic of the chatbot - in some cases creating separate actions for multi-intents is not necessary at all, and you can use the same actions as responses for both multi-intents and single-intents. It is always good practice to map out which actions are really needed to ensure a natural conversation flow before building your chatbot.
Training the dialogue management model and testing the bot
To train the model we can use the command below, which trains both the NLU and Core models and saves them as a compressed tar.gz file. You can specify parameters like the number of epochs used in training inside the training configuration.
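The original post likely wrapped training in a helper script that also compressed the models; a plausible sketch of the underlying Rasa 0.12-era training calls looks like this (the stories file path is an assumption):

```shell
# Train the dialogue management (Core) model
python -m rasa_core.train \
  --domain domain.yml \
  --stories data/stories.md \
  --out models/dialogue

# Train the NLU model
python -m rasa_nlu.train \
  --config config.yml \
  --data data/nlu_data.md \
  --path models/nlu
```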
Once the training is done it's time for the most exciting part - testing the chatbot. We can launch it using the command in the code block below. It will load the dialogue management and NLU models and launch the chatbot in the command line.
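With the Rasa 0.12-era CLI, launching the bot in the command line looked roughly like this; the flags and model paths are assumptions based on that release:

```shell
python -m rasa_core.run \
  -d models/dialogue \
  -u models/nlu/current
```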
This is what the actual conversation with a chatbot looks like:
User: Hello
Bot: Hey, how can I help you?
User: I am new to Berlin and I would like to join some meetups. Any suggestions?
Bot: Rasa Bots Berlin meetup is definitely worth checking out! They are having an event today at Behrenstrasse 42. Would you like to join?
User: Sure! Can you also tell me how can I get to the venue?
Bot: Great, just made an RSVP for you.
Bot: The venue is close to the Berlin Friedrichstrasse station, so the best option is to catch a U-Bahn U6.
User: Thanks a lot. Talk to you later.
Bot: Glad I could help!
Bot: Talk to you later!
And that's it - this is how we can build a simple chatbot which can understand and use multiple intents.
As you have probably noticed, in this tutorial we didn't perform entity recognition. For now, the TensorFlow pipeline only performs intent classification, but some really exciting updates regarding entity recognition are on the way, so stay tuned for more tutorials and posts from us on this new pipeline!
When should you use multi-intents?
Since we used a very simple example here, it is important to note that you should consider using multi-intents only when the logic of your chatbot requires it. Excessive use of multi-intents can overcomplicate the chatbot so we suggest using them only when they are really necessary to ensure the natural flow of the conversation with your chatbot.
Let us know how you are getting on!
We would love to see how the TensorFlow pipeline performs on your own datasets. If you are building bots with this new pipeline or just testing it out, keep us posted on how it goes. Find us on Twitter @RasaHQ, on the Rasa Community Forum, or drop us a message at firstname.lastname@example.org. We'd love to hear from you!