We've collaborated with the folks over at Streamlit on this announcement. If you're interested in the lessons we learned while building Rasalit, you can read all about it in this blog post hosted on the Streamlit blog.
Rasa actively researches and shares practical algorithms that can handle natural language tasks, but exploring algorithms in this space brings a few unique challenges.
We can only benchmark on datasets that are openly available. If there is any private data in a conversation, it can't be shared. That excludes many meaningful datasets. We're also limited in the languages we can use in our benchmarking datasets. We've done our best to integrate many open-source tools for non-English deployments, but we still need to rely on our community for feedback.
To address this, we've been looking for a meaningful tool to give to our community that makes it easy to explore and investigate trained Rasa models interactively. If we can make it easy for users to inspect their pipelines, we also make it easier for people to give feedback on specific parts.
Rasalit is a command-line app that can be installed with pip directly from GitHub. When you run it, you'll be able to select different apps to run.
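For reference, a pip-from-GitHub install typically looks like the line below (the exact repository path is assumed here, so check the project's README for the canonical one):

```shell
# Install directly from GitHub (repository path assumed here)
python -m pip install git+https://github.com/RasaHQ/rasalit
```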
> python -m rasalit --help
Usage: rasalit [OPTIONS] COMMAND [ARGS]...

  Helper Views for Rasa NLU

Options:
  --help  Show this message and exit.

Commands:
  overview
  diet-explorer
  live-nlu
  nlu-cluster
  spelling
  version
Let's highlight some of the views that are at your disposal.
The first view we made for Rasalit visualizes grid-search results. You can run cross-validation from the command line in Rasa, and this view makes it easy to get an overview of the scores too.
> python -m rasalit overview --folder gridresults --port 8501
Rasa NLU Playground
The second app allows users to interact directly with a pre-trained Rasa model. You get an overview of the intent confidence and any detected entities.
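To give a sense of what the app displays, here is a trimmed sketch of the kind of payload Rasa NLU returns for a parsed message; the intent name, confidence, and entity values below are made up for illustration:

```python
# Illustrative Rasa NLU parse result (values invented for this example)
parse_result = {
    "text": "book me a flight to Berlin",
    "intent": {"name": "book_flight", "confidence": 0.93},
    "entities": [
        {"entity": "city", "value": "Berlin", "start": 20, "end": 26},
    ],
}

# The live view highlights the top intent and any detected entities.
top_intent = parse_result["intent"]["name"]
confidence = parse_result["intent"]["confidence"]
entities = parse_result["entities"]
```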
We've also added charts that visualize the classifier's internal attention mechanism. To keep the overview simple, we've hidden these details behind Streamlit's expander component, a neat beta feature. That means we can add detailed views for our research team while still keeping the app distraction-free for the general community.
Effects from Spelling
There's also a spelling robustness checker in Rasalit, which simulates spelling errors on text that you give it. It shows you how robust your trained models are against typos.
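The idea behind such a checker can be sketched with a small typo simulator. The helper below is a toy stand-in, not Rasalit's actual implementation: it perturbs a sentence by swapping adjacent characters, the kind of slip you'd feed back into a model to compare predictions before and after:

```python
import random

def add_typos(text, n_typos=1, seed=0):
    """Simulate keyboard slips by swapping adjacent characters.

    A toy stand-in for the kind of perturbation a spelling
    robustness check applies before re-scoring the model.
    """
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_typos):
        if len(chars) < 2:
            break
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)
```

You'd then run both the original and the perturbed text through the trained pipeline and check whether the predicted intent stays the same.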
Finally, we've also added a tool for folks who are just getting started with their virtual assistants. Some users might already have unlabelled training data and might just be curious to explore the clusters in it.
For this use case, we've built a text clustering demo. It uses a light version of the universal sentence encoder to cluster texts together. You can pass it a text file, for example one containing candidate intents, and then explore the clusters in the text interactively.
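The clustering idea can be sketched in a few lines. In the toy version below, a bag-of-words vector stands in for the sentence encoder (the demo itself uses a light universal sentence encoder), and a greedy pass groups lines whose cosine similarity to a cluster seed clears a threshold:

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words vector; a real sentence encoder would
    # produce dense embeddings instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(lines, threshold=0.25):
    """Greedy clustering: each line joins the first cluster whose
    seed vector is similar enough, otherwise it starts a new cluster."""
    clusters = []
    for line in lines:
        vec = embed(line)
        for seed_vec, members in clusters:
            if cosine(vec, seed_vec) >= threshold:
                members.append(line)
                break
        else:
            clusters.append((vec, [line]))
    return [members for _, members in clusters]
```

A greedy single pass is crude compared to proper clustering algorithms, but it's enough to show how similar utterances end up grouped for interactive exploration.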
Try it Out
Rasalit started out as an experiment but recently surpassed 100 stars on GitHub. That means Rasalit isn't just used by our research team anymore; it's also being picked up by the larger Rasa community.
Curious to try Rasalit? You can find the documentation on GitHub. We'd love to hear your feedback on the tool, especially if you feel there's a view missing.