
January 14th, 2020

The Rasa Masterclass Handbook: Episode 8



The Rasa Masterclass is a weekly video series that takes viewers through the process of building an AI assistant, all the way from idea to production. Hosted by Head of Developer Relations Justina Petraityte, each episode focuses on a key concept of building sophisticated AI assistants with Rasa and applies those learnings to a hands-on project. At the end of the series, viewers will have built a fully-functioning AI assistant that can locate medical facilities in US cities.

To supplement the video content, we'll be releasing these handbooks to summarize each episode. You can follow along as you watch to reinforce your understanding, or you can use them as a quick reference. We'll also include links to additional resources you can use to help you along your journey.


In this edition of the Rasa Masterclass Handbook, we take the theoretical concepts covered in the previous seven episodes and apply them in practice. Episode #8 focuses on the hands-on development of our medicare locator assistant, showing how those concepts work in a real project.

At the end of this tutorial, we will have completed the following for our medicare locator assistant:

  • Updated the NLU and stories data to expand the knowledge and skills of the assistant
  • Implemented custom actions that leverage API calls and database connections to extract the necessary information
  • Implemented the FormAction
  • Enabled our assistant to fail gracefully

Real-World Dataset for our Medicare Locator

At the moment, the medicare locator assistant is capable of understanding some simple inputs like greetings & goodbyes, handling some simple interactions with a user, such as a request to find a specific health facility, and dialogue in which the user provides some information, like their location or the type of facility they are seeking. With this information, we enabled our assistant to run a simple action (action_facility_search) once the user asks for a suggestion. In this section, we will upgrade this ability to enable the assistant to collect all necessary information from the user, and run a real backend integration to provide the location of the type of facility requested.

We will start with the data source for the backend integration. We will use a publicly available database that offers a variety of open datasets, including information about different health facilities across the U.S.

The data the assistant needs can be pulled using the provided API. Each dataset is exposed at its own endpoint, which can be copied from the dataset's API page.

There are different endpoints for different types of facilities, and they are differentiated by a resource code. For example, an endpoint containing xubh-q36u specifies the dataset for hospitals. Two other important resource codes (that we will use later) are f7df-2ac7 for home health agencies and b27b-2uc7 for nursing homes.

In the API call itself, we can further narrow the search using parameters like city name or zip code. This is a perfect dataset to provide real-world data about specific healthcare facility locations to our assistant.
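To make this concrete, here is a rough sketch of how such a query URL could be built from a resource code and a city parameter. Note that the base URL, the `.json` path format, and the `city` parameter name are assumptions for illustration, not the data provider's documented API:

```python
# Sketch: building a facility-search query URL from a resource code and a city.
# The base URL and the "city" parameter name are illustrative assumptions.
from urllib.parse import urlencode

RESOURCE_CODES = {
    "hospital": "xubh-q36u",            # hospitals
    "home health agency": "f7df-2ac7",  # home health agencies
    "nursing home": "b27b-2uc7",        # nursing homes
}

def build_facility_query(base_url, facility_type, city):
    """Return the endpoint URL for a facility type, narrowed by city."""
    resource = RESOURCE_CODES[facility_type]
    params = urlencode({"city": city.upper()})
    return f"{base_url}/resource/{resource}.json?{params}"

url = build_facility_query("https://example-data-portal.gov", "hospital", "San Francisco")
print(url)  # -> https://example-data-portal.gov/resource/xubh-q36u.json?city=SAN+FRANCISCO
```

The key idea is simply that the resource code selects the dataset, while query parameters narrow the search.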

Improving the NLU

Previously, we built a simple NLU model capable of classifying intents and extracting a few entities. The model works, but is prone to mistakes: our limited training data doesn't provide enough information for the NLU to be very accurate. A good next step will be to update the NLU data and add more training examples to all of the intents in our training data file to improve the performance of the model.

Remember, you can download the full code of the assistant from the Rasa Masterclass code repository.

If you examine the NLU training data file, you will see we added many new intents (e.g., intent:affirm and intent:out_of_scope), as well as additional examples for existing intents (e.g., catch you later and gotta go for intent:goodbye). We also added more examples for entity values (e.g., increasing from 8 to more than 50 examples for the location entity). Adding more examples will improve performance, but there's a lot more we can do to improve entity extraction.

Using Regex in Entities

We will start with the location entity. Users can provide this information by responding with either a city name or a zip code. Since standard U.S. zip codes follow a specific pattern (five digits), an easy and effective way to improve the extraction of zip codes is to let the NLU model use regex features. Specifying a regex for the location entity allows the model to learn that inputs matching a certain pattern should be associated with that entity. The regex for extracting zip codes is entered into the training data file as follows:

## regex:location
- [0-9]{5}
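To see what this pattern does and does not match, here is a quick standalone check of the same regular expression in plain Python:

```python
import re

# The same pattern used in the training data: exactly five digits.
ZIP_RE = re.compile(r"[0-9]{5}")

print(bool(ZIP_RE.fullmatch("94103")))   # True  - a valid 5-digit zip code
print(bool(ZIP_RE.fullmatch("9410")))    # False - too short
print(bool(ZIP_RE.fullmatch("941032")))  # False - too long
```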

Using Synonyms

Another thing we can do to improve the NLU model is to use synonyms. The data source we use for facility information identifies each type of medical facility by a resource code.

Providing the request using words rather than resource codes is more natural for a user talking to our assistant, so we will do some data normalization to make sure that the extracted values (words from users) can be used to query the database correctly (using the appropriate resource codes).

To do this, we will need to map specific values for the entity facility_type to corresponding resource codes. This can be achieved using a Rasa NLU feature called synonyms.

To define the synonym, we have to update our training data and specify the mapping between entities and synonym values. There are two ways to define a synonym. The first is within the intent itself - in this case, ## intent:search_provider. Within the examples under this intent, we can connect the entity facility_type with the value nursing home to the specific resource code b27b-2uc7.

You can also create a synonym directly in the training data file, using the following format. This defines a specific value for the synonym (in this case, the resource code) and lists the possible entity values that should be mapped to it:

## synonym:rbry-mqwu
- hospital
- hospitals

This synonym will make sure that whenever a facility_type entity is extracted with the value hospital or hospitals, those values will be mapped to the specified resource code, to be used to query the database later on. We will do the same with other entity values and resource codes.
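Conceptually, the synonym feature performs a simple normalization step after entity extraction. The sketch below is an illustration of that idea, not the actual EntitySynonymMapper implementation; the mapping table mirrors the resource codes mentioned in this episode:

```python
# Illustration of the normalization that synonym mapping performs after
# entity extraction (not the actual EntitySynonymMapper code).
ENTITY_SYNONYMS = {
    "hospital": "rbry-mqwu",
    "hospitals": "rbry-mqwu",
    "nursing home": "b27b-2uc7",
    "nursing homes": "b27b-2uc7",
}

def normalize_entity_value(value):
    """Replace an extracted surface form with its canonical synonym value."""
    return ENTITY_SYNONYMS.get(value.lower(), value)

print(normalize_entity_value("Hospitals"))  # -> rbry-mqwu
```

Values without a defined synonym pass through unchanged, which is why only the facility types we mapped are converted to resource codes.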

Lastly, to ensure that the NLU model learns to map the synonyms, we need to include the component EntitySynonymMapper in our pipeline, in the config.yml file. Going forward, we will be using the Supervised Embeddings Pipeline for our model, and this pipeline (by default) includes the EntitySynonymMapper component in its configuration.

Retraining the NLU Model

Since we've added a number of items to the NLU model, let's retrain the model to see how it works. Once again, we will use the rasa train nlu command to train the model, and the rasa shell nlu command to talk to our model.

If we type I need a hospital, we can see the model correctly classifies this input as intent:search_provider. Even better, take a look at the extracted entities: the word hospital was identified as a facility_type entity, and thanks to the synonym feature its value was mapped to the corresponding resource code.

Implementing a Form Action in Rasa

Next, we will improve the dialogue management in our assistant by using the FormAction in Rasa. In previous episodes, we built a simple dialogue management model capable of using slots. The model is capable of driving the conversation based on whether or not the user provided specific details like location or facility type.

Going forward, we will be using this database to supply information about facility locations. We want the medicare locator assistant to query the database when the user asks for information about a specific health facility, via an action. To make sure that the API call to the database is correct, we need to make sure that our assistant collects the details of location and facility_type before the call is made. This situation is a perfect use case for Rasa forms - a component which ensures the assistant collects the necessary details before running a specific action.

Defining a Form Action

Form actions have to be defined in the project's custom actions file (actions.py in a standard Rasa project). All the work in this section will be within this file.

First, we need to import the FormAction class from the Rasa SDK. A form action is a class, just like other custom actions in Rasa, and so it follows a similar overall structure.

Forms generally consist of a few main functions - name, required slots, slot mappings, and submit.

The name function is very simple - it defines the name of the form action. When the action facility_form is predicted, Rasa knows that it should run the code defined in the FacilityForm(FormAction) class.

All form actions must have a method called required_slots. This method defines which slots must be filled before the assistant can continue with the dialogue. As long as there are unfilled slots, the model will keep predicting the form action, and the assistant will keep asking the user for the missing details until all the slots are filled. In the case of our medicare locator, two slots must be provided before the assistant can query the database: location and facility_type.

slot_mappings is an optional method used with Form Action, but it's a very useful one. Sometimes, required slots come from very different user inputs which, naturally, will have different intent or entity labels. By default, FormAction will fill in required slots using only the values extracted from intents or entities with exactly the same name as the slot. Slot mapping allows you to define how the values from other intents and entities can be mapped to the required slots. In our case, the names of the required slots match the names of the corresponding entities. We can use slot_mappings to specify which intents those values come from. For example, the slot location can be filled using the entity values from two different intents: inform and search_provider.
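Putting the three methods described above together, the skeleton of the form might look like the sketch below. In a real project the class would extend FormAction from rasa_sdk.forms and the mappings would be built with self.from_entity(...); here a minimal stub stands in for the base class and the mappings are plain dictionaries, so the example is self-contained:

```python
# Minimal stub standing in for rasa_sdk.forms.FormAction so this sketch runs
# on its own; in a real project: from rasa_sdk.forms import FormAction
class FormAction:
    pass

class FacilityForm(FormAction):
    def name(self):
        # The action name the dialogue model predicts to activate the form.
        return "facility_form"

    @staticmethod
    def required_slots(tracker):
        # Both slots must be filled before submit() can run.
        return ["facility_type", "location"]

    def slot_mappings(self):
        # Simplified stand-in for self.from_entity(entity=..., intent=[...]):
        # each slot can be filled from its matching entity, extracted from
        # either the inform or the search_provider intent.
        return {
            "facility_type": {"entity": "facility_type",
                              "intent": ["inform", "search_provider"]},
            "location": {"entity": "location",
                         "intent": ["inform", "search_provider"]},
        }
```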

Finally, we need to define what should happen when all required slots are filled. This is specified in the submit method. In the medicare locator, we want our assistant to send the facility query to the database using the provided details and return the options to the user.

Let's walk through the code of the submit method in detail.

With the tracker.get_slot method, our assistant will pull the current values of the location and facility_type slots, and use them to call the (still-to-be-created) custom action find_facilities. (We will create the find_facilities action in a later Masterclass.)

If no results were found by find_facilities, the assistant will send a message to the user saying the requested facility could not be found in the specified location.

If find_facilities returns results, the assistant will format the first three returned facilities as buttons, to be returned to the user. The message to the user is sent using the Rasa dispatcher method.
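A simplified version of the submit logic described above might look like the following. The tracker and dispatcher are stubbed with plain Python objects so the sketch runs on its own, and find_facilities is a placeholder for the real backend lookup built later in the series:

```python
# Simplified sketch of the submit step. A dict stands in for the SDK tracker
# and a list for the dispatcher; find_facilities is a placeholder for the
# real backend lookup created later in the series.
def find_facilities(location, facility_type):
    # Placeholder: the real version queries the facilities database.
    return [{"name": "General Hospital", "id": "GH1"},
            {"name": "City Clinic", "id": "CC2"},
            {"name": "Mercy Medical", "id": "MM3"},
            {"name": "Fourth Option", "id": "FO4"}]

def submit(tracker, dispatcher):
    location = tracker["location"]           # stand-in for tracker.get_slot("location")
    facility_type = tracker["facility_type"]
    results = find_facilities(location, facility_type)
    if not results:
        # No matches: tell the user the search came up empty.
        dispatcher.append(
            {"text": f"Sorry, we could not find a {facility_type} in {location}."})
        return
    # Format the first three results as buttons, as the episode describes.
    buttons = [{"title": r["name"],
                "payload": f'/inform{{"facility_id":"{r["id"]}"}}'}
               for r in results[:3]]
    dispatcher.append({"text": "Here is what I found:", "buttons": buttons})

messages = []
submit({"location": "Austin", "facility_type": "hospital"}, messages)
print(len(messages[0]["buttons"]))  # -> 3
```

The real submit would use the dispatcher's utter_message method instead of appending to a list, but the control flow is the same: query, check for results, and either apologize or present the top matches as buttons.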

Updating the Domain File and Model Configuration for Forms

For our assistant to use this newly created form, it has to be included in the domain file.

We also need to update the model config.yml file to include FormPolicy, an extension of the MemoizationPolicy that handles the filling of forms. FormPolicy will predict the new FormAction until all the required slots in the form are filled.
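As a sketch of what this looks like in a Rasa 1.x config.yml, the policies section might resemble the following; the other entries shown are the defaults used elsewhere in this series, so check your own configuration before copying:

```yaml
policies:
  - name: MemoizationPolicy
  - name: KerasPolicy
  - name: MappingPolicy
  - name: FormPolicy
```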

Updating Training Stories with Forms

Lastly, we need to update the file with the form action that we created, facility_form.

We add three lines, which represent three actions that should happen when the user chooses a facility type for the medicare locator assistant to find:

  • facility_form tells the model to activate the form and begin gathering the information needed to complete it
  • form{"name": "facility_form"} indicates that the FormPolicy will run the form action until all slots are filled
  • form{"name": null} ends this small story, and indicates that the form is filled and the assistant can move on with the conversation

These three lines are enough to allow our assistant to handle all the happy paths the user might take when filling in the form. In this case, "happy path" means that whenever the assistant asks the user for some information, the user eventually responds with the required information. There are many ways for this to happen - situations where the user provides one detail but not the other, responds with all details at once, or initially doesn't provide any of the required information.
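A minimal happy-path story using these three lines could look like the sketch below; the intent name follows the ones used in this episode:

```
## happy path
* search_provider
    - facility_form
    - form{"name": "facility_form"}
    - form{"name": null}
```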

Let's add another story which will teach our assistant how to respond when the user provides all information with their initial request.

We also want our medicare locator assistant to be able to handle situations in which a user asks for multiple recommendations. A user may ask additional questions after the assistant has provided information to answer an initial question. We can enable our assistant to handle this behavior by adding stories with more dialogue turns, like the following:

With forms, we enabled our assistant to collect some crucial details needed to run the facility search. Now, let's update the facility_search custom action to enable our medicare assistant to use the extracted details, run the backend integrations, and return the requested information.

Let's go back to our custom actions file.

Custom actions with real backend integrations usually consist of quite a few methods and they will highly depend on what database or API you are using. In the medicare locator, we have quite a few helper methods - for example, defining the endpoints for our APIs.

The most important part of our actions file is a function called FindHealthCareAddress. This function retrieves the current slot values for the key parameters and uses them to build the full path of an endpoint. Our assistant sends a request to this endpoint to retrieve the data - the health care facility's address. The requests.get method sends a GET request to the API, which returns the data in JSON format. If any results were returned, our assistant returns a response based on what was requested. If no results were returned, the assistant sends a message back to the user suggesting what went wrong.
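The result-handling half of that logic can be sketched as follows. The field names in the JSON record are assumptions for illustration; the actual dataset's schema may use different keys:

```python
# Sketch: turning one JSON record from the facilities API into a readable
# address line. The field names here are illustrative assumptions.
def format_address(result):
    """Build a one-line address from a facility record, or None if fields are missing."""
    fields = ("address", "city", "state", "zip_code")
    if any(f not in result for f in fields):
        return None  # signals that the lookup did not return usable data
    return "{address}, {city}, {state} {zip_code}".format(**result)

record = {"address": "123 Main St", "city": "Austin",
          "state": "TX", "zip_code": "78701"}
print(format_address(record))  # -> 123 Main St, Austin, TX 78701
```

Returning None for incomplete records lets the calling action decide to send the "something went wrong" message instead of a malformed address.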

Failing Gracefully in Rasa

We've updated our assistant to be better at intent classification and entity extraction, implemented form actions, and added a custom action for the facility search, with some backend integrations.

Another valuable improvement we can make to our assistant is to implement a fallback policy. The fallback policy helps to make sure that if our assistant makes a mistake, it handles the situation gracefully.

As we discussed in the Rasa Masterclass Episode #7, there are two fallback policies in Rasa: the FallbackPolicy and the TwoStageFallbackPolicy. For our assistant, we will implement the TwoStageFallbackPolicy.

We add the TwoStageFallbackPolicy to our assistant in the config.yml file, by listing this policy as one of the components of the policy configuration. We will keep the default hyperparameters of this policy for now.
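Since we are keeping the defaults, the entry itself stays minimal; as a Rasa 1.x sketch, it is just one more item under policies in config.yml:

```yaml
policies:
  # ...existing policies...
  - name: TwoStageFallbackPolicy
```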

Since we've made many additions to the model, this is a good time to retrain the models and test how they work. Again, we use the rasa train command, and after the models are trained, we can load the assistant using the rasa run actions and rasa shell commands. Then we can have a conversation with the newly trained assistant.

Our assistant can now pull real-world data and use it to respond to users' queries.


After all this hard work, our assistant has improved quite a bit: we improved the performance of the NLU model, implemented a form action and a fairly advanced custom action with backend integrations, and even added a fallback policy.

In future episodes of the Masterclass, we will expand on these updates and work on adding new features and skills to our assistant. We hope you will apply what you learned in this episode to improve your own custom assistants. See you in the next episode!

Additional Resources