OpenShift and K8S

This page contains detailed instructions for deploying Rasa X Enterprise using OpenShift or Kubernetes (K8S).

  1. Make sure Docker, Docker Compose, and Kompose are installed on your server. Detailed instructions can be found in the Docker documentation and in the Kompose documentation. You should be able to run:

    $ docker-compose --version
    $ kompose version
    
  2. Create a project directory and switch to it:

    $ mkdir ~/rasa
    $ cd ~/rasa
    
  3. Download the docker-compose file (docker-compose.yml) that contains the services and their setup (replacing stable in the URL with the version to install):

    $ wget -qO docker-compose.yml https://storage.googleapis.com/rasa-x-releases/stable/docker-compose.ee.yml
    
  4. Create the authentication file used to download containers from the docker registry. To authenticate with the registry that contains the Rasa X containers, create a file at ~/rasa/gcr-auth.json containing the JSON value of the docker_registry_license field from your license.

    Make sure to copy everything between the outer quotes of the docker_registry_license value to that file. The contents of ~/rasa/gcr-auth.json should look like this:

    {
      "type": "service_account",
      "project_id": "rasa-platform",
      "private_key_id": "sdferw234qadst423qafdgxhw",
      "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvfwrt423qwadsfghtzw0BAQEFAASCBKgwggSkAgEAAoIBAQCgt338FkWbW13dghtzew4easdf5wAi15jrA9t4uOk8dghrtze4weasfgdhtAFZNfrLgvr2\nPBTu1lAJDLo136ZGTdMKi+/TuRqrIMg/sr8q0Ungish8v6t5Jb4gsjBi9StytCT4\nhWXDL3qeadfsgeDOudl6c3iMzylBws+VffrFfaZWjDpGtxmlYwIUa2e\noNSe7BYLnY9tDrX3zrP/wu/6FPbbGkBjguDG1l3Kx7l1wmiPtK5lIhjt+k7Oyx/u\nd6+gvfs+7RX9wUxnZT/tLggybYdsr8BA1Pqr0hDmhdDl7tjXVTmGLG+1/+lXVGFc\nqKEg+uLXAgMBAAECggEAESzwRK0Cp62LgBjInk+jvTmMI4lYP/XTnfk0TNwyiLxd\nT7mkw/TzkSVRifZ37lBQ6BS6BiqBJherh1N4xI+DF9HUN/wHR93QTyu7p8umlcxC\nlPV0KE4b5ZMfWvRG4y236cRGly9urcBNGoFzFHl8pd2iS5DMqZOYpSXY+qvkXTKE\nUOm5mVSs4S4Qa9cHL+jWXCvY0789fG1GrT+L3Fn+StKacgQuBnN1krYFYBSjCAh8\nsnSdjkvGguw/6OApPHd8HqkHtjU0PD67uU5QIm5N1bmz9KT4s9Pm+WbCinEstIiN\nIfln5ikmHcMAiIS0gzSnZavsY21PsDHBkD8SUO7CTQKBgQDgMPhx0TsB/oVH/SnU\nt3oTME+tfAKI69tozX02jHj6DY/vDpI1hXNmb4oMOos5+3ulborHqnso9za1RgV7\nm2N04QQVfzYEuZzJzXL11SHvBYVjHkXYy6HR5GhnPmwA+CzrDNy2/oYxlaqH7TBA\nR+f7IHToIPKGCVrhCJztlAgzIwKBgQC3hQNclIQ5Iw0gm9Rr8zAP/YoRJdiUSYtv\nNBmav+dTTSkPh51Bomj/J4Rrg8OLvHG5U79pmzbQdIFGYGKlR0l4/QepKpbaGm7x\nM/gRp/GXu9sN8LgI+h+FskCYi4cuqDjQ9L2S0gwMre4witmeVSIiBxLWxS7mvkZX\nWRW58ml2vQKBgBozPuW2SQobn6HhIUFdy+NwMu+YXYd44ORnl2mHkx/N8/NBJa8h\nkHH5OQ3izaCSFkooGAnrj4cjFP6sVzmx2DaxkVOd0UdOFdezreqy5MtVPthtkkYa\nzieEZPsj3WVjm4RAtY6hQjeLQSmve4MXpDHCAkeaih1F/Jvt8MEHGso3AoGBAJez\nTioTYpFQliNkbN2nMw2kyaKPJE6/1JDiAmBXTcMgP1blBWsh86UnZ2DwlI5IAcHu\npoWHlnIOPGaOejyhhuyKTPDbkcNMonSkPuVpbF2/Hb6SQ664A6KizJ7Mh7xbtkuU\nY7igBPHePMzHmkg1m3eBXWNHsBNxKfg+XaVN6zwJAoGBAN6VhGMmyDcn0GqkkP6d\nrSsQ0Ig7L4PnU633oYWoGWa8q/XYiFbcACMFynMbrmHG+/0c3Iwt32bi3th60Cwb\nT66yqmv4MaT72+EfQHxiLxnUxhqSpBXM0eoXbyvDg97Zp/slsYvGGLjONmmretlE\nsjAsuAH4Iz1XdfdenzGnyBZH\n-----END PRIVATE KEY-----\n",
      "client_email": "company@rasa-platform.iam.gserviceaccount.com",
      "client_id": "114123456713428149",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://accounts.google.com/o/oauth2/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/company%40rasa-platform.iam.gserviceaccount.com"
    }
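If your license file is itself JSON (an assumption; adjust to your actual license format), the field can be extracted programmatically instead of being copied by hand. A minimal sketch using python3, with a fabricated stand-in license:

```shell
# Stand-in license file for illustration only - your real license contains
# the full service-account JSON shown above.
cat > license.json <<'EOF'
{"docker_registry_license": "{\"type\": \"service_account\", \"project_id\": \"rasa-platform\"}"}
EOF

# Extract the inner JSON string into gcr-auth.json.
python3 -c 'import json; print(json.load(open("license.json"))["docker_registry_license"])' > gcr-auth.json

# Sanity check: the extracted content must itself parse as JSON.
python3 -c 'import json; json.load(open("gcr-auth.json"))' && echo "gcr-auth.json is valid JSON"
```

In the real setup, write the result to ~/rasa/gcr-auth.json as described above.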
    
  5. Log in to the docker registry. This creates a new entry in the docker config file.

    $ sudo docker login  -u _json_key -p "$(cat ~/rasa/gcr-auth.json)" https://gcr.io
    
  6. Create a new image pull secret. If you already have a .dockercfg file for the registry, you can create a secret from that file by running (if you are using Kubernetes replace oc with kubectl):

    $ oc create secret generic <pull_secret_name> \
    --from-file=.dockercfg=<path/to/.dockercfg> \
    --type=kubernetes.io/dockercfg
    

    Add the secret to your service account. The service account name in this example must match the name of the service account the pod uses; the default is the default service account (if you are using Kubernetes, replace oc with kubectl):

    $ oc secrets link default <pull_secret_name> --for=pull
    

    For more information please visit the OpenShift documentation.

  7. Create the docker environment file .env with the following content:

    RASA_X_VERSION=stable
    RASA_TOKEN=<random_string>
    RASA_X_TOKEN=<random_string>
    PASSWORD_SALT=<random_string>
    DB_PASSWORD=<random_string>
    JWT_SECRET=<random_string>
    RABBITMQ_PASSWORD=<random_string>
    

    For each <random_string>, use a secure value, e.g. a randomly generated character sequence.

    The password salt is used to hash passwords. Note that if you change the password salt later, you will have to create new logins for everyone.

    The Rasa and Rasa X tokens are used to authorize the communication among the containers.

    Note

    Make sure to generate a different, unique <random_string> for each field, e.g. by running the following command once per field:

    $ openssl rand -base64 12
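As a convenience, the whole .env file can be generated in one go. A sketch assuming openssl is available:

```shell
# Generate one unique secret per field (openssl is assumed to be installed).
gen() { openssl rand -base64 24; }

cat > .env <<EOF
RASA_X_VERSION=stable
RASA_TOKEN=$(gen)
RASA_X_TOKEN=$(gen)
PASSWORD_SALT=$(gen)
DB_PASSWORD=$(gen)
JWT_SECRET=$(gen)
RABBITMQ_PASSWORD=$(gen)
EOF
```

Each call to gen() produces a fresh value, so every field gets its own secret.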
    
  8. Create a file called agreement.yml with the following content to accept the terms and conditions:

    apiVersion: v1
    data:
        agree: openshift
    kind: ConfigMap
    metadata:
        name: agreement
    
  9. Create a file called configuration-files.yml with the following content:

    apiVersion: v1
    data:
        rasa-credentials: |
            rasa:
                url: http://rasa-x:5002/api
        rasa-endpoints: |
            models:
                url: ${RASA_MODEL_SERVER}
                token: ${RASA_X_TOKEN}
                wait_time_between_pulls: ${RASA_MODEL_PULL_INTERVAL}
            tracker_store:
                type: sql
                dialect: "postgresql"
                url: ${DB_HOST}
                port: ${DB_PORT}
                username: ${DB_USER}
                password: ${DB_PASSWORD}
                db: ${DB_DATABASE}
                login_db: ${DB_LOGIN_DB}
            event_broker:
                type: "pika"
                url: ${RABBITMQ_HOST}
                username: ${RABBITMQ_USERNAME}
                password: ${RABBITMQ_PASSWORD}
                queue: ${RABBITMQ_QUEUE}
            action_endpoint:
                url: ${RASA_USER_APP}/webhook
                token:  ""
        environments: |
            rasa:
                production:
                  url: http://rasa-production:5005
                  token: ${RASA_TOKEN}
                worker:
                  url: http://rasa-worker:5005
                  token: ${RASA_TOKEN}
                development:
                  url: http://rasa-development:5005
                  token: ${RASA_TOKEN}
    kind: ConfigMap
    metadata:
        name: configuration-files
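The two ConfigMaps from steps 8 and 9 can then be loaded into the cluster. A sketch that writes the small agreement file and shows the apply commands (replace oc with kubectl on Kubernetes); the apply lines are commented out so the snippet also runs without cluster access:

```shell
# Write the agreement ConfigMap from step 8.
cat > agreement.yml <<'EOF'
apiVersion: v1
data:
    agree: openshift
kind: ConfigMap
metadata:
    name: agreement
EOF

# On the cluster (replace oc with kubectl on Kubernetes):
# oc apply -f agreement.yml
# oc apply -f configuration-files.yml

grep -q 'name: agreement' agreement.yml && echo "agreement.yml written"
```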
    
  10. Add a ports section to every service. Unfortunately, kompose does not generate service files for expose sections, so you have to add a ports section to every service, e.g.:

    rasa-x:
      restart: always
      image: "gcr.io/rasa-platform/rasa-x-ee:${RASA_X_VERSION:-stable}"
      expose:
      - "5002"
    

    becomes:

    rasa-x:
      restart: always
      image: "gcr.io/rasa-platform/rasa-x-ee:${RASA_X_VERSION:-stable}"
      expose:
      - "5002"
      ports:
      - "5002:5002"
  11. Delete the logger service from the docker-compose file. This log aggregator is not needed since OpenShift / Kubernetes provide different tools for that.

  12. Generate the deployment files by running:

    $ docker-compose config > docker-compose.config.yml
    
  13. Set the docker-compose file specification used in docker-compose.config.yml to version 3:

    version: '3'
    
    services:
        ...
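If you prefer to script this edit, a sketch using sed (GNU sed shown; on BSD/macOS use `sed -i ''`). The sample file below is a stand-in for the generated docker-compose.config.yml:

```shell
# Stand-in for the generated file, for illustration only.
printf "version: '3.4'\nservices:\n  rasa-x:\n    image: example\n" > docker-compose.config.yml

# Pin the compose file format to version 3 (GNU sed; use `sed -i ''` on BSD/macOS).
sed -i "s/^version: .*/version: '3'/" docker-compose.config.yml

head -n 1 docker-compose.config.yml   # prints: version: '3'
```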
    
  14. Update the NGINX service to use the short-hand syntax to expose ports. The NGINX service should look like this:

    nginx:
        restart: always
        image: "gcr.io/rasa-platform/nginx:${RASA_X_VERSION:-stable}"
        ports:
        - "80:8080"
        - "443:8443"
        volumes:
        - ./terms:/opt/bitnami/nginx/conf/bitnami/terms
        depends_on:
        - rasa-production
        - rasa-x
        - event-service
        - app
    
  15. Convert docker-compose.config.yml with (if you are using Kubernetes, replace --provider OpenShift with --provider Kubernetes):

    $ kompose convert -f docker-compose.config.yml --provider OpenShift
    

    This will create all files needed for the deployment.

  16. Change the volumes in the rasa-x-deploymentconfig.yaml file. The volumeMounts section should look like this:

    volumeMounts:
    - mountPath: /app/models
      name: rasa-x-claim0
    - mountPath: /logs
      name: rasa-x-claim1
    - mountPath: /app/environments.yml
      subPath: environments.yml
      name: environments
    

    and the volumes section should look like this:

    volumes:
    - name: rasa-x-claim0
      persistentVolumeClaim:
        claimName: rasa-x-claim0
    - name: rasa-x-claim1
      persistentVolumeClaim:
        claimName: rasa-x-claim1
    - name: environments
      configMap:
        name: configuration-files
        items:
          - key: environments
            path: environments.yml
    

    You can then remove the volume claim file which was used to mount environments.yml.

  17. Change the volumes in the files rasa-production-deploymentconfig.yaml, rasa-development-deploymentconfig.yaml and rasa-worker-deploymentconfig.yaml. The volumeMounts section should look like this:

    volumeMounts:
    - mountPath: /app/endpoints.yml
      subPath: endpoints.yml
      name: config
    - mountPath: /app/credentials.yml
      subPath: credentials.yml
      name: config
    

    and the volumes section should look like this:

    volumes:
    - name: config
      configMap:
        name: configuration-files
        items:
          - key: rasa-endpoints
            path: endpoints.yml
          - key: rasa-credentials
            path: credentials.yml
    

    Apply this change to each of the three deployments: rasa-production, rasa-development, and rasa-worker for their respective files. You can then remove the volume claim files which were used for the mounts of endpoints.yml and credentials.yml.

  18. Change the volumes in nginx-deploymentconfig.yaml. The volumeMounts section should look like this:

    volumeMounts:
    - mountPath: /opt/bitnami/nginx/conf/bitnami/terms
      name: agreement
    

    and the volumes section should look like this:

    volumes:
    - configMap:
        items:
          - key: agree
            path: agree.txt
        name: agreement
      name: agreement
    

    The previously generated NGINX volume claim files can now be removed since they were replaced by the configMap and secret.

  19. We recommend that you enable HTTPS on your server by adding SSL certificates (for example using Let's Encrypt). Mount the generated fullchain.pem file to /opt/bitnami/certs/fullchain.pem and the privkey.pem file to /opt/bitnami/certs/privkey.pem.

  20. Now you can use the generated files to deploy Rasa X, either using the OpenShift / Kubernetes CLI or the web console.

  21. To access the platform, expose the nginx service with (if you are using Kubernetes, replace oc with kubectl):

    $ oc expose service/nginx
    

    Alternatively create your own route from the web console.

  22. Using the terminal of the rasa-x pod, create the first user by running:

    $ cd scripts
    $ python manage_users.py create [username] [password] [role]
    

    Possible values for role are admin, annotator, and tester.