The goal is to allow a user to search through a dataset of recipe titles. We’re going to use this recipe dataset hosted on Hugging Face, which contains recipe titles together with a full description and an external link. A subset of the data is shown below.
We will implement our search feature with custom actions, but we should consider that there are multiple ways of going about it.
We could store the entire dataset in memory and use standard string-matching libraries to retrieve appropriate examples. While this could work for some users, we should admit that it’s not going to be very expressive. Preferably we’d like the experience to be more like a flexible search engine than a hard-coded string matching module.
That’s why we’re going to explore two search systems for retrieving the text. We’re going to explore a system that uses classic text retrieval techniques, via lunr.py, as well as neural search, via jina.ai.
Lunr.py is a Python port of Lunr.js, which is the search engine that comes with mkdocs-material. It’s a refreshingly minimal tool. It’s certainly not as elaborate as elasticsearch, but it’s great for rapid prototyping. The documentation for the project can be found here.
There are a couple of likeable features in Lunr. For starters, if you merely use the base settings then it doesn’t have any dependencies. It allows you to index JSON documents as opposed to plain text, and you’re also able to manually assign a weight to a document key. That means that we can choose to put more weight on the title of a recipe and less on the description. It also supports wildcards, query boosts, and fuzzy matching. We won’t go in-depth on these features here, but it’s worth reading the indexing documentation and the quick start if you’re interested in learning more.
We’re going to keep things simple in this demo, so we’ll index our data by the name of the recipe only.
We now have a precomputed index on disk in the `static/index.json` file. Because we’ve precomputed the index, searching through all of our documents should be faster than running a custom regex over a list of strings. Especially as our recipe dataset grows, the Lunr approach will be able to retrieve items much faster.
This pre-computed index can be re-used in a Rasa custom action. You can see an implementation of such an action below.
This custom action can now be used in a Rasa story. The setup for our story will be that we prompt the user to tell us what ingredients they have available which we will then pass to our custom action.
```yaml
- story: recipe_story
  steps:
    - intent: inquire_recipe
    - action: utter_what_ingredient
    - intent: ingredient
    - action: action_suggest_recipe
```
Giving it a spin
Let’s give our assistant a spin. If you’d like to follow along, you can find the full implementation on Github here. Below you can find some of the responses that our assistant is now able to generate.
🙂 i wanna cook something
🤖 What ingredients do you have?
🙂 apples
🤖 These recipes might be interesting.
- apple juice and apple leather
- easy baked apple pie apples
- baked apple crisp stuffed apples
- apple tansey
- apple crisp

🙂 i wanna cook something else
🤖 What ingredients do you have?
🙂 meat and carrots
🤖 These recipes might be interesting.
- classic meat loaf
- meat feast pizza
- meat dim sum
- carrot cake
- carrot cupcakes
There are a few things to note from our responses.
- Lunr does a little bit of stemming on our behalf. When we query for “apples” it is able to remove the “s” at the end, allowing us to retrieve elements that have the keyword “apple”.
- When we query for “meat and carrots” we’re getting back recipes that either have “meat” in the name or “carrot”. The results we get back aren’t bad, but they aren’t recipes that contain both ingredients.
Because Lunr uses token-based indexing techniques it isn’t able to recognize that “meat” could also be “beef” or “chicken”. The tokens are different, so there is no match. A solution for this aspect of search may be to consider a more contextualized search engine, but we could also certainly attempt to index the description of the recipe as well.
Lunr uses a classic method of indexing documents. There are, however, more recent techniques that allow you to utilize pre-trained language models to aid in text retrieval. This might help us in our “beef is also meat”-scenario. There are a couple of tools in this space, but we’re going to make a quick demo with Jina.
Before diving into the implementation, it helps to briefly discuss how contextualized search works on a high level. We are still going to be building an index, but the index will no longer be based on tokens. Instead, we’re going to rely on embeddings. That means that indexing our data would now need to happen in two steps.
In the first step, we’re going to embed our text into a numeric vector. This can be done with word-embeddings, or with a contextual language model like BERT. We’re going to do this for every document in our dataset until we have a collection of vectors. Jina allows you to configure whatever encoding model you prefer, but we’re going to be using a pre-trained model that’s hosted on JinaHub, their hosting service.
Once we have our collection of vectors we will need to index them. When we query our data we are going to compare the numeric representation of the query with the numeric representations of our document vectors. To keep this process fast, we could apply an approximate nearest neighbor implementation. There are many techniques for this, but if you’re using Python then you may consider using annoy or PyNNDescent. Jina has implementations for many of these indexing techniques as well but we’re going to use the SimpleIndexer that is hosted on their platform.
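To make the two-step idea concrete, here is a numpy-only sketch with made-up two-dimensional vectors standing in for real embeddings. It uses brute-force cosine similarity rather than an approximate nearest-neighbour index, which is fine for small collections:

```python
import numpy as np

# Toy "embeddings": in practice these come from a language model,
# one vector per recipe title. The values here are made up.
doc_vectors = np.array([
    [0.9, 0.1],   # "classic meat loaf"
    [0.8, 0.2],   # "skillet beef and broccoli"
    [0.1, 0.9],   # "apple crisp"
])


def top_k(query_vector, doc_vectors, k=2):
    # Cosine similarity is the dot product of L2-normalized vectors.
    # Libraries like annoy or PyNNDescent approximate this search
    # when the collection is too large to scan exhaustively.
    docs = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    q = query_vector / np.linalg.norm(query_vector)
    scores = docs @ q
    return np.argsort(-scores)[:k]


# A query vector that lands close to the "meat" region of our toy space
# retrieves the two meat recipes first.
nearest = top_k(np.array([0.85, 0.15]), doc_vectors)
```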
Conceptually, that means that our retrieval pipeline looks like the diagram below.
Because a Jina deployment has more moving parts, we’ve split the implementation differently. In the previous section, the custom action ran Lunr internally. For Jina, it’s easier to run the search engine as a separate service and have the Rasa custom action communicate with it over HTTP. This simplifies the code and gives us a proper separation of concerns.
To further keep the implementation simple, we’ve implemented a separate script called `prepare.py` that can build the Jina index on our behalf and can also start an HTTP server for our custom action to connect to.

When we run `python prepare.py index`, we create an index in the `workspace` folder. After the index is created, we can run `python prepare.py serve` to start a Jina server that our custom action can connect to. Our custom action can then be implemented by communicating with this server over HTTP.
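The exact request and response shapes depend on your Jina version and Flow configuration, so treat the following as a sketch: it assumes the Flow listens on `localhost:12345` and exchanges documents under a `"data"` key. Only the Python standard library is used.

```python
import json
from urllib import request

# Assumed host/port for the Jina Flow started by `python prepare.py serve`.
JINA_URL = "http://localhost:12345/search"


def build_payload(query):
    # Wrap the query text in the document structure the server expects.
    return json.dumps({"data": [{"text": query}]}).encode("utf-8")


def parse_matches(response_body, limit=5):
    # Pull the matched recipe titles out of the (assumed) response shape.
    data = json.loads(response_body)
    matches = data["data"][0].get("matches", [])
    return [m["text"] for m in matches[:limit]]


def search_recipes(query):
    # Send the query to the running Jina Flow and return matched titles.
    req = request.Request(
        JINA_URL,
        data=build_payload(query),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return parse_matches(resp.read())
```

Inside the custom action, `search_recipes` would replace the in-process Lunr lookup from the previous section.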
Giving it a spin
Let’s give this approach a spin. If you’d like to follow along, you can find the full implementation on Github here. Below you can find some of the responses that our assistant is now able to generate.
🙂 i want to cook with apples
🤖 These recipes might be interesting.
- easy baked apple pie apples
- easy roasted pork tenderloin and apples
- cinnamon spice baked apples
- baked apple crisp stuffed apples
- classic baked apple

🙂 i want to cook with meat and carrots
🤖 These recipes might be interesting.
- skillet beef and broccoli
- one pan roasted potatoes, sausage and peppers
- easy grilled vegetables
- ground beef and vegetable skillet
- chunky beef, cabbage and tomato soup instant pot or stove top
We can confirm that the search results are different. In particular, we see that Jina is able to retrieve items related to “beef” when we query for “meat”. It also seems to fetch items where the beef is served with vegetables, suggesting that Jina is recognizing the relevance of “carrots” in our query.
Both Jina and Lunr come with pros and cons. Lunr is typically more lightweight, while Jina is more flexible, but a lot depends on how you customize the deployment for your dataset. To highlight their current implementations in a more qualitative way, we’re going to compare some queries below.
Query 1: "meat"
Looking at the “meat” query it’s clear that the Lunr approach relies on string matching in its index. The Jina approach seems more flexible, as it’s able to retrieve items that do contain meat without containing the exact word.
Query 2: "i want to cook with meat"
Next, we have the “i want to cook with meat” query. In the case of Lunr we see that it tries to match on “to”, “cook”, and “meat” literally, which is likely not what the user is interested in. The Jina approach does not seem to suffer from this and mainly matches recipes related to the “meat” keyword.
Query 3: "particle accelerator"
Next, we try the “particle accelerator” query. This is a bit of a silly query since it’s totally out of context, which is why the Lunr implementation returns nothing. The Jina implementation, on the other hand, still tries to return documents. If we want to prevent documents from being returned in this situation we’d need to do some post-processing in our Jina Flow. This can certainly be done, but it highlights the need for customization.
Query 4: "vegetarian lasagna"
Finally, the “vegetarian lasagna” query also shows an interesting difference. Again we see that Lunr really tries to match against strings and cannot assume any similarity between “vegetable” and “vegetarian”.
These queries paint a clear picture of what you can, and perhaps cannot, expect from different retrieval approaches. However, it should be said that your mileage may certainly differ. Our implementations are meant as “getting-started” projects and your results on your own datasets are going to be different. It’s always important to investigate which search approach is the most sensible for your use case as you’ll likely need to do a fair amount of customization.
In this blog post, we’ve demonstrated how you may write custom actions for text retrieval. We’ve explored lunr.py and jina.ai but there are plenty of valid alternatives. If you want to play around with the code, you can find the built Rasa projects on Github.
Because custom actions allow anything that Python can handle, you can also choose to integrate with elasticsearch, algolia, or haystack. If you’re interested, the haystack documentation demonstrates how you might use their question-answering API as a fallback mechanism for Rasa.
We’ve really only been scratching the surface in this blog post, but it’s good to know that you can integrate Rasa with anything that has a web API or a Python API. That includes a lot of retrieval services!