Welcome to the first blog of many in the Rasa Learning Center series focused on deployment. To set the stage for the learning center, let’s first discuss how deployment has evolved over the years.
In the early days, software applications were designed and deployed as a single instance that performed all the business functions, known as monolithic applications. The drawbacks of monolithic architecture, namely slow development and low resource utilization, led to the development of microservices. Microservices solved most of the problems with monolithic on-premises applications, but introduced complexity as the number of services grew.
The key innovation that became integral to resolving this complexity was containerization: the practice of deploying applications by packaging them in containers. A container is a form of OS-level virtualization in which the kernel allows multiple isolated user-space instances to coexist. And hence Docker was born.
Now, if you have different containers, you also need to orchestrate them; that’s where Kubernetes comes to the rescue, keeping applications resilient to large fluctuations in workload without any downtime.
Let’s dive into the learning center.
The goal of this series of videos is to teach you how to deploy Rasa from the ground up, including topics relevant to deployment such as Docker and Kubernetes. After this course, you will understand how to deploy your Rasa AI assistant to the outside world.
- Building and deploying a Docker image for your Rasa model
This lecture covers building a Docker image for a Rasa assistant and writing a Dockerfile so the image can be shared with others. It walks through the commands to build the image and run the container, and shows how to create an entrypoint that launches the rasa shell. Now that we know how to build a Docker image, run it, and share the deployment instructions, there is also a prebuilt Rasa container, discussed in the next lecture.
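As a rough sketch of what this might look like (the base image tag, project layout, and image name below are assumptions for illustration, not the exact files from the lecture):

```dockerfile
# Sketch of a Dockerfile for a Rasa assistant; adjust the tag to your Rasa version.
FROM rasa/rasa:3.6.2-full

WORKDIR /app
# Copy the assistant's training data and configuration into the image
COPY . /app

# Train the model as part of the image build
USER root
RUN rasa train
USER 1001

# Launch the rasa shell via the entrypoint when the container starts
ENTRYPOINT ["rasa"]
CMD ["shell"]
```

You would then build and run it with something like `docker build -t my-rasa-assistant .` followed by `docker run -it my-rasa-assistant`.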
- Prebuilt Rasa container
You can build your own Docker containers, but you can also make use of containers that others have pre-built. In particular, Rasa hosts many pre-built Docker containers on Docker Hub that you can run locally as well.
The containers from Rasa are designed differently than the containers that we built in the previous video. In the previous video we trained a new model as part of the building process. The Rasa containers assume that you already have a trained model and you just want to run it inside of the container. That means that we need to make the container aware of the filesystem so that it can find the model we're interested in. Next, we will discuss why it’s important to orchestrate multiple containers.
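Concretely, making the container aware of the filesystem means mounting your project directory as a volume. The commands below are a sketch; the image tag is an assumption, so pick the version matching your trained model:

```shell
# Train a model using the prebuilt image, mounting the current project directory
docker run -v "$(pwd):/app" rasa/rasa:3.6.2-full train

# Serve the trained model from the mounted models/ directory on port 5005
docker run -p 5005:5005 -v "$(pwd):/app" rasa/rasa:3.6.2-full run --enable-api
```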
- Why Kubernetes
The lecture on orchestrating different servers is available at https://learning.rasa.com/deployment/why-kubernetes/
This learning video is about the importance of Kubernetes and how you can run and orchestrate multiple VMs. You might consider using a tool like docker compose if you want to run multiple containers, but this approach hits a few issues: if the VM ever goes down, the entire application goes down with it. Kubernetes deals with these issues elegantly. It also allows an application to be deployed on any host without first worrying about whether the host has all the dependencies installed.
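For context, a docker compose setup for multiple containers might look like the sketch below (the service names and image tags are assumptions). Everything runs on one host, which is exactly the single point of failure Kubernetes addresses:

```yaml
# docker-compose.yml — a hypothetical two-service assistant.
# If the single host running this goes down, both services go down with it.
services:
  rasa:
    image: rasa/rasa:3.6.2-full   # assumed tag; use your model's version
    ports:
      - "5005:5005"
    volumes:
      - ./:/app
    command: run --enable-api
  action-server:
    image: rasa/rasa-sdk:3.6.2    # assumed tag for the custom action server
    ports:
      - "5055:5055"
```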
- How Kubernetes works
The goal of Kubernetes is to run on a set of servers and provide a way to manage all of that compute. You won't need to know how the underlying infrastructure works; instead, you'll be able to think in terms of pods, services, and namespaces.
This lecture explains terms like pods, services, Ingress, and namespaces. In the next lecture, a step-by-step process is discussed using the terminology we just learned.
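To make the terminology concrete, a minimal manifest using these objects might look like this (the resource names and image tag are illustrative assumptions, not the lecture's exact files):

```yaml
# A namespace groups related resources
apiVersion: v1
kind: Namespace
metadata:
  name: assistant
---
# A pod is the smallest deployable unit: one or more containers
apiVersion: v1
kind: Pod
metadata:
  name: rasa-pod
  namespace: assistant
  labels:
    app: rasa
spec:
  containers:
    - name: rasa
      image: rasa/rasa:3.6.2-full   # assumed tag
      ports:
        - containerPort: 5005
---
# A service gives a set of pods a stable network address
apiVersion: v1
kind: Service
metadata:
  name: rasa-service
  namespace: assistant
spec:
  selector:
    app: rasa
  ports:
    - port: 80
      targetPort: 5005
```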
- Deploying on a local machine
This is a local demo showing the standard commands one might run when interacting with Kubernetes. It uses kind to run a Kubernetes cluster locally and covers basic commands you can run with kubectl. kubectl is used to define the number of replicas, set the resources, and declare the service that routes traffic to our containers.
All the commands can be found at the given URL.
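The kind of commands shown in the demo might look like the following sketch (the deployment and manifest file names are assumptions; these commands require a local kind installation):

```shell
# Create a local Kubernetes cluster with kind
kind create cluster

# Apply a deployment manifest and inspect the resulting pods
kubectl apply -f deployment.yml
kubectl get pods

# Scale the deployment to three replicas
kubectl scale deployment rasa-deployment --replicas=3

# List the services routing traffic to the pods
kubectl get services
```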
- Helm chart
To install larger applications on top of Kubernetes, Rasa also provides a Helm chart for Rasa X. After running a Kubernetes cluster locally, we realized it’s quite complex to manage everything with kubectl; that’s where the Helm chart comes in handy. Helm is a tool for installing Kubernetes deployments with all their dependencies ready.
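In practice, installing a chart with Helm looks roughly like this sketch (the release name, namespace, and values file are assumptions; the Rasa X chart repository URL is the one Rasa publishes):

```shell
# Add the Rasa X Helm chart repository and refresh the local index
helm repo add rasa-x https://rasahq.github.io/rasa-x-helm
helm repo update

# Install the chart into its own namespace
helm install my-release rasa-x/rasa-x --namespace rasa --create-namespace

# Later, upgrade the release with custom settings from a values file
helm upgrade my-release rasa-x/rasa-x --namespace rasa --values values.yml
```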
Now you are ready to deploy your Rasa-powered AI assistant into the real world!