
February 7th, 2022

Custom Retrieval Actions in Rasa with Jina and Lunr

Vincent Warmerdam

Rasa Open Source allows you to write custom actions so that custom Python code can be triggered by your virtual assistant. Custom actions are commonly used to interact with databases or third-party APIs. Because we’re able to run anything that Python can run, we’re free to consider many features for our assistant. In this blog post, we’re going to explore a specific use case for custom actions that will allow our assistant to recommend recipes to users.

The Task

The goal is to allow a user to search through a dataset of recipe titles. We’re going to be using this recipe dataset that’s hosted on Hugging Face, which contains the titles of recipes together with a full description and an external link. A subset of the data is shown below.

```json
[
  {
    "name": "pork chop noodle soup",
    "link": "https://www.yummly.com/private/recipe/Pork-Chop-Noodle-Soup-2249011",
    "description": "we all know how satisfying it is to make great pork tenderloin ..."
  },
  {
    "name": "lasagna soup",
    "link": "https://www.skinnytaste.com/lasagna-soup/",
    "description": "everything you love about lasagna all in one bowl of soup ..."
  }
]
```

We will implement our search feature with custom actions, but we should consider that there are multiple ways of going about it.

We could store the entire dataset in memory and use standard string-matching libraries to retrieve appropriate examples. While this could work for some users, we should admit that it’s not going to be very expressive. Preferably we’d like the experience to be more like a flexible search engine than a hard-coded string matching module.
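To make this baseline concrete, here is a minimal sketch of the in-memory approach using the standard library’s `difflib` for fuzzy string matching. The `recipes` list and the `naive_search` helper are illustrative stand-ins, not part of the project:

```python
from difflib import get_close_matches

# A stand-in for the recipe names loaded into memory.
recipes = [
    "pork chop noodle soup",
    "lasagna soup",
    "baked apple crisp",
]

def naive_search(query, names, n=5):
    """Return up to `n` recipe names that loosely match the query."""
    return get_close_matches(query, names, n=n, cutoff=0.3)

print(naive_search("lasagna", recipes))
```

This works for near-exact titles, but it scans every string on every query and has no notion of tokens, stemming, or relevance scoring, which is why a real search engine is preferable.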

That’s why we’re going to explore two search systems for retrieving the text. We’re going to explore a system that uses classic text retrieval techniques, via lunr.py, as well as neural search, via jina.ai.

Lunr

Lunr.py is a Python port of Lunr.js, which is the search engine that comes with mkdocs-material. It’s a refreshingly minimal tool. It’s certainly not as elaborate as elasticsearch, but it’s great for rapid prototyping. The documentation for the project can be found here.

There are a couple of likeable features in Lunr. For starters, if you use just the base settings, it doesn’t have any dependencies. It allows you to index JSON documents, as opposed to mere text, and you’re also able to manually assign a weight to a document key. That means we can choose to put more weight on the title of a recipe and less on the description. It also supports wildcards, query boosts, and fuzzy matching. We won’t go in-depth on these features here, but it’s worth reading the indexing documentation and the quick start if you’re interested in learning more.

We’re going to keep things simple in this demo, so we’ll index only the name of each recipe.

```python
import json

from lunr import lunr

# `recipes` represents a list of dictionaries containing
# recipe information.
recipes = load_recipes(...)

# Let's index just the `name` field.
idx = lunr(ref="uid", fields=("name",), documents=recipes)

# Save the index to disk.
with open("static/index.json", "w") as f:
    f.write(json.dumps(idx.serialize()))
```

We now have a precomputed index on disk in the static/index.json file. Because we’ve precomputed an index, searching through all of our documents should be faster than using a custom regex on a list of strings. Especially when our recipe dataset becomes larger, the lunr approach would be able to retrieve items much faster.
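To see why a precomputed index is faster, here is a toy pure-Python sketch of the underlying idea: an inverted index maps each token to the set of documents containing it, so a query only touches the posting lists for its tokens instead of scanning every string. (This is illustrative only; Lunr adds stemming and relevance scoring on top.)

```python
from collections import defaultdict

names = ["classic meat loaf", "meat feast pizza", "carrot cake"]

# Build the inverted index once, up front.
index = defaultdict(set)
for doc_id, name in enumerate(names):
    for token in name.split():
        index[token].add(doc_id)

def search(query):
    """Return ids of documents containing any query token."""
    hits = set()
    for token in query.split():
        hits |= index.get(token, set())
    return sorted(hits)

print(search("meat"))  # doc ids whose name contains the token "meat"
```

Each lookup is a dictionary access, independent of how many documents we have, which is what makes this approach scale better than a linear scan.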

This pre-computed index can be re-used in a Rasa custom action. You can see an implementation of such an action below.

```python
import json
from typing import Any, Text, Dict, List

from clumper import Clumper
from lunr.index import Index
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

# Read in the recipes with all the data.
recipes = Clumper.read_jsonl("static/recipes.jsonl").collect()


class ActionSuggestRecipe(Action):
    def name(self) -> Text:
        return "action_suggest_recipe"

    async def run(self, dispatcher: CollectingDispatcher,
                  tracker: Tracker,
                  domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        db = {d['uid']: d for d in recipes}
        # Read in the index for fast querying.
        with open("static/index.json", "r") as f:
            idx = Index.load(json.loads(f.read()))
        # Attempt to find matching documents.
        matches = idx.search(tracker.latest_message['text'])
        # We may have no matches, respond appropriately.
        if len(matches) == 0:
            dispatcher.utter_message(text="Sorry, I couldn't find any recipes.")
            return []
        # We've found matches, so we list the top 5.
        dispatcher.utter_message(text="These recipes might be interesting.")
        for match in matches[:5]:
            item = db[match['ref']]
            dispatcher.utter_message(f" - {item['name']}")
        return []
```

This custom action can now be used in a Rasa story. The setup for our story is that we prompt the user to tell us what ingredients they have available, which we then pass to our custom action.

```yaml
- story: recipe_story
  steps:
  - intent: inquire_recipe
  - action: utter_what_ingredient
  - intent: ingredient
  - action: action_suggest_recipe
```

Giving it a spin

Let’s give our assistant a spin. If you’d like to follow along, you can find the full implementation on Github here. Below you can find some of the responses that our assistant is now able to generate.

🙂 i wanna cook something                                       
🤖 What ingredients do you have?
🙂 apples                                                       
🤖 These recipes might be interesting.
    - apple juice and apple leather
    - easy baked apple pie apples
    - baked apple crisp stuffed apples
    - apple tansey
    - apple crisp
🙂 i wanna cook something else                                     
🤖 What ingredients do you have?
🙂 meat and carrots
🤖 These recipes might be interesting.
     - classic meat loaf
     - meat feast pizza
     - meat dim sum
     - carrot cake
     - carrot cupcakes

There are a few things to note from our responses.

  1. Lunr does a little bit of stemming on our behalf. When we query for “apples” it is able to remove the “s” at the end, allowing us to retrieve elements that have the keyword “apple”.
  2. When we query for “meat and carrots” we’re getting back recipes that either have “meat” in the name or “carrot”. The results we get back aren’t bad, but they aren’t recipes that contain both ingredients.

Because Lunr uses token-based indexing techniques, it isn’t able to recognize that “meat” could also be “beef” or “chicken”. The tokens are different, so there is no match. A solution for this aspect of search may be to consider a more contextualized search engine, but we could also attempt to index the description of the recipe as well.
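The token mismatch is easy to demonstrate in plain Python: the intersection between the query tokens and the document tokens is empty, so a token-based index has nothing to match on.

```python
# A token-based engine only matches on shared tokens.
query_tokens = set("meat".split())
doc_tokens = set("roast beef".split())

overlap = query_tokens & doc_tokens
print(overlap)  # empty set: "beef" never matches "meat"
```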

Jina

Lunr uses a classic method of indexing documents. There are, however, more recent techniques that allow you to utilize pre-trained language models to aid in text retrieval. This might help us in our “beef is also meat”-scenario. There are a couple of tools in this space, but we’re going to make a quick demo with Jina.

Before diving into the implementation, it helps to briefly discuss how contextualized search works on a high level. We are still going to be building an index, but the index will no longer be based on tokens. Instead, we’re going to rely on embeddings. That means that indexing our data would now need to happen in two steps.

In the first step, we’re going to embed our text into a numeric vector. This can be done with word-embeddings, or with a contextual language model like BERT. We’re going to do this for every document in our dataset until we have a collection of vectors. Jina allows you to configure whatever encoding model you prefer, but we’re going to be using a pre-trained model that’s hosted on JinaHub, their hosting service.

Once we have our collection of vectors we will need to index them. When we query our data we are going to compare the numeric representation of the query with the numeric representations of our document vectors. To keep this process fast, we could apply an approximate nearest neighbor implementation. There are many techniques for this, but if you’re using Python then you may consider using annoy or PyNNDescent. Jina has implementations for many of these indexing techniques as well but we’re going to use the SimpleIndexer that is hosted on their platform.
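As a toy illustration of that lookup step (the 3-dimensional vectors below are made up; in a real system they come from the encoder), retrieval amounts to ranking document vectors by cosine similarity to the query vector. Libraries like annoy and PyNNDescent approximate this search so it stays fast at scale:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Made-up embeddings standing in for encoder output.
doc_vectors = {
    "roast beef": [0.9, 0.1, 0.0],
    "carrot cake": [0.1, 0.8, 0.2],
}
query_vector = [0.8, 0.2, 0.1]  # pretend this encodes the query "meat"

ranked = sorted(
    doc_vectors,
    key=lambda d: cosine(query_vector, doc_vectors[d]),
    reverse=True,
)
print(ranked[0])
```

Because similarity is computed in the embedding space rather than on tokens, a query can match documents that share no literal words with it.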

Conceptually, that means our retrieval pipeline first encodes each document into a vector and then hands those vectors to an indexer for fast lookup.

Because there are more moving parts to a Jina deployment, we’ve split the implementation into separate concerns. In the previous section, the custom action ran Lunr internally. For Jina, it’s easier to run it as a separate service and have the Rasa custom action communicate with it over HTTP. This simplifies the code, and it also gives us a proper separation of concerns.

To keep the implementation simple, we’ve written a separate script called prepare.py that can build the Jina index on our behalf and can also start an HTTP server that our custom action can connect to.

```python
import typer
from clumper import Clumper
from jina import Document, DocumentArray, Flow

app = typer.Typer(name="JinaDemo", add_completion=False, help="This is a demo application for Jina.")

recipes = Clumper.read_jsonl("static/recipes.jsonl").collect()

# A DocumentArray is a list of Documents.
docs = DocumentArray([Document(text=d['name']) for d in recipes])

# Create a new Flow to process our Documents.
# This defines the steps for Jina to follow in sequence.
flow = (
    Flow(protocol="http", port_expose=12345)
    .add(uses="jinahub://TransformerTorchEncoder", name="encoder", install_requirements=True)
    .add(uses="jinahub://SimpleIndexer", install_requirements=True, name="indexer", workspace="workspace")
)


@app.command()
def index():
    """Use Jina to index the recipe data."""
    with flow:
        flow.index(inputs=docs)


@app.command()
def serve():
    """Use Jina to search in the recipe data."""
    with flow:
        flow.block()


if __name__ == "__main__":
    app()
```

When we run `python prepare.py index`, we create an index in the workspace folder. After the index is created, we can run `python prepare.py serve` to start a Jina server that our custom action can connect to. Our custom action can then be implemented by communicating over HTTP.

```python
import httpx
from typing import Any, Text, Dict, List

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionSuggestRecipe(Action):
    def name(self) -> Text:
        return "action_suggest_recipe"

    async def run(self, dispatcher: CollectingDispatcher,
                  tracker: Tracker,
                  domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        async with httpx.AsyncClient() as client:
            json_data = {"data": [{"mime_type": "text/plain", "text": tracker.latest_message['text']}]}
            resp = await client.post("http://localhost:12345/search", json=json_data)
        dispatcher.utter_message(text="I found these recipes:")
        matches = resp.json()['data']['docs'][0]['matches']
        for m in matches[:5]:
            dispatcher.utter_message(text=f"- {m['text']}")
        return []
```

Giving it a spin

Let’s give this approach a spin. If you’d like to follow along, you can find the full implementation on Github here. Below you can find some of the responses that our assistant is now able to generate.

🙂 i want to cook with apples                          
🤖 These recipes might be interesting.
     - easy baked apple pie apples
     - easy roasted pork tenderloin and apples
     - cinnamon spice baked apples
     - baked apple crisp stuffed apples
     - classic baked apple
🙂 i want to cook with meat and carrots                          
🤖 These recipes might be interesting.
     - skillet beef and broccoli
     - one pan roasted potatoes, sausage and peppers
     - easy grilled vegetables
     - ground beef and vegetable skillet
     - chunky beef, cabbage and tomato soup instant pot or stove top

We can confirm that the search results are different. In particular, we see that Jina is able to retrieve items related to “beef” when we query for “meat”. It also seems to fetch items where the beef is served with vegetables, suggesting that Jina is recognizing the relevance of “carrots” in our query.

Compare

Both Jina and Lunr come with pros and cons. Lunr is typically more lightweight, while Jina is more flexible, but a lot depends on how you customize the deployment for your dataset. To compare their current implementations in a more qualitative way, we’re going to look at some queries below.

Query 1: "meat"

```yaml
query: meat
lunr:
- classic meat loaf
- meat feast pizza
- meat dim sum
- spaghetti squash with meat ragu
- taco ground beef or any meat
jina:
- roast beef
- whole hog roast
- chopped liver
- turkey meatloaf
- appetizer meatballs
```

Looking at the “meat” query, it’s clear that the Lunr approach relies on string matching in its index. The Jina approach seems more flexible, as it’s able to retrieve items that indeed contain meat without containing the exact word.

Query 2: "i want to cook with meat"

```yaml
query: i want to cook with meat
lunr:
- double cooked meat hui guo rou
- mo tea tos
- basil mo tea tos
- how to cook porridge oats
- classic meat loaf
jina:
- stovetop beef ragu
- salisbury steak meatballs instant pot, stove top, slow cooker
- ground beef and vegetable skillet
- skillet beef and broccoli
- simple venison stew
```

Next, we have the “i want to cook with meat” query. In the case of Lunr, we see that it tries to match on “to”, “cook”, and “meat” literally, which is likely not what the user is interested in. The Jina approach does not suffer from this and mostly matches recipes related to the “meat” keyword.

Query 3: "particle accelerator"

```yaml
query: particle accelerator
lunr: <none>
jina:
- quark casserole
- a b c minestrone
- mexican a b c minestrone
- ahi poke bowl
- mexican rice cooker
```

Next, we try the “particle accelerator” query. This is a bit of a silly query since it’s totally out of context. That’s why the lunr implementation returns nothing. The Jina implementation, on the other hand, still tries to return documents. If we want to prevent documents from being returned in this situation we’d need to do some post-processing in our Jina Flow. This can certainly be done, but it highlights the need for customisation.
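One way to approach such post-processing is to drop matches whose similarity score falls below a cutoff before answering. The sketch below uses a plain list of hypothetical (text, score) results and an arbitrary threshold; the actual score field and scale depend on the encoder and indexer you configured in your Flow:

```python
def filter_matches(matches, threshold=0.5):
    """Keep only matches whose score clears the threshold."""
    return [m for m in matches if m["score"] >= threshold]

# Hypothetical scored results for an out-of-context query.
matches = [
    {"text": "quark casserole", "score": 0.31},
    {"text": "ahi poke bowl", "score": 0.12},
]

filter_matches(matches)  # nothing clears the default threshold
```

With a filter like this in place, an out-of-context query such as “particle accelerator” could result in an apology from the assistant rather than an arbitrary list of recipes.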

Query 4: "vegetarian lasagna"

```yaml
query: vegetarian lasagna
lunr:
- lasagna
- vegetarian shepherd's pie
- baked vegetarian rice
- lasagna soup
- rosa lasagna
jina:
- vegetable lasagna
- vegan spinach lasagna
- lasagna
- slow cooker vegetable lasagna
- herringbone vegetable lasagna
```

Finally, the “vegetarian lasagna” query also shows an interesting difference. Again we see that Lunr really tries to match against strings and that it cannot assume a similarity between “vegetable” and “vegetarian”. That said, “herringbone vegetable lasagna” is a curious match on the Jina side, a reminder that neural retrieval can surface surprising results too.

Final Comment

These queries paint a clear picture of what you can, and perhaps cannot, expect from different retrieval approaches. However, it should be said that your mileage may certainly differ. Our implementations are meant as “getting-started” projects and your results on your own datasets are going to be different. It’s always important to investigate which search approach is the most sensible for your use case as you’ll likely need to do a fair amount of customization.

Conclusion

In this blog post, we’ve demonstrated how you may write custom actions for text retrieval. We’ve explored lunr.py and jina.ai but there are plenty of valid alternatives. If you want to play around with the code, you can find the built Rasa projects on Github.

Because custom actions allow anything that Python can handle, you can also choose to integrate with elasticsearch, algolia, or haystack. If you’re interested, the haystack documentation demonstrates how you might use their question-answer API as a fallback mechanism for Rasa.

We’ve really only been scratching the surface in this blog post, but it’s good to know that you can integrate Rasa with anything that has a web API or a Python API. That includes a lot of retrieval services!