Version: 1.0.x

Installation

This page contains detailed instructions for bootstrapping a Google Kubernetes Engine (GKE) cluster via Terraform.

Installation

We use Terraform and the official Gruntwork Terraform GKE module. Please visit those pages for more details on the setup.

Set Up and Initialize your Terraform Workspace

Clone the Gruntwork GKE module repository:

git clone https://github.com/gruntwork-io/terraform-google-gke

Switch into the repository directory:

cd terraform-google-gke

Configure Terraform GKE Cluster

For educational purposes, we will use the Helm resource in this step. Please uncomment the following lines in main.tf:

#resource "helm_release" "nginx" {
#  depends_on = [google_container_node_pool.node_pool]
#
#  repository = "https://charts.bitnami.com/bitnami"
#  name       = "nginx"
#  chart      = "nginx"
#}
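After uncommenting, the Helm resource in main.tf should look like this (the values are exactly those from the commented block above):

```hcl
resource "helm_release" "nginx" {
  depends_on = [google_container_node_pool.node_pool]

  repository = "https://charts.bitnami.com/bitnami"
  name       = "nginx"
  chart      = "nginx"
}
```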

Fill in the required variables in variables.tf based on your needs. For example:

variable "project" {
  description = "The project ID where all resources will be launched."
  type        = string
  default     = "<YOUR-COMPANY-PROJECT-ID>"
}

variable "location" {
  description = "The location (region or zone) of the GKE cluster."
  type        = string
  default     = "europe-west1"
}

variable "region" {
  description = "The region for the network. If the cluster is regional, this must be the same region. Otherwise, it should be the region of the zone."
  type        = string
  default     = "europe-west1"
}

variable "cluster_name" {
  description = "The name of the Kubernetes cluster."
  type        = string
  default     = "rasa-x-test-cluster"
}

variable "cluster_service_account_name" {
  description = "The name of the custom service account used for the GKE cluster. This parameter is limited to a maximum of 28 characters."
  type        = string
  default     = "rasa-x-test-cluster-sa"
}

Authenticate to GCP

gcloud auth login
gcloud auth application-default login
note

gcloud auth application-default manages your active Application Default Credentials, which provide a way to obtain credentials for calling Google APIs. Visit the Google Cloud SDK documentation to learn more.
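Before running Terraform, it can help to confirm that gcloud is pointing at the right account and project. A quick check (replace the project ID with your own):

```shell
# List authenticated accounts and highlight the active one
gcloud auth list

# Set the project Terraform should deploy into
gcloud config set project <YOUR-COMPANY-PROJECT-ID>

# Verify the active configuration
gcloud config list
```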

Initialize Terraform

Initialize your Terraform workspace, which will download and configure the providers.

terraform init
Initializing modules...
- gke_cluster in modules/gke-cluster
- gke_service_account in modules/gke-service-account
Downloading github.com/gruntwork-io/terraform-google-network.git?ref=v0.8.2 for vpc_network...
- vpc_network in .terraform/modules/vpc_network/modules/vpc-network
- vpc_network.network_firewall in .terraform/modules/vpc_network/modules/network-firewall
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/google versions matching "~> 3.43.0"...
- Finding hashicorp/google-beta versions matching "~> 3.43.0"...
- Finding hashicorp/kubernetes versions matching "~> 1.7.0"...
- Finding latest version of hashicorp/random...
- Finding latest version of hashicorp/null...
- Finding latest version of hashicorp/template...
- Finding hashicorp/helm versions matching "~> 1.1.1"...
- Installing hashicorp/kubernetes v1.7.0...
- Installed hashicorp/kubernetes v1.7.0 (signed by HashiCorp)
- Installing hashicorp/random v3.1.0...
- Installed hashicorp/random v3.1.0 (signed by HashiCorp)
- Installing hashicorp/null v3.1.0...
- Installed hashicorp/null v3.1.0 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing hashicorp/helm v1.1.1...
- Installed hashicorp/helm v1.1.1 (signed by HashiCorp)
- Installing hashicorp/google v3.43.0...
- Installed hashicorp/google v3.43.0 (signed by HashiCorp)
- Installing hashicorp/google-beta v3.43.0...
- Installed hashicorp/google-beta v3.43.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Plan Terraform Run

Check your Terraform plan before applying it with

terraform plan

If everything is correct, you will see a rendered execution plan:

Plan: 19 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ client_certificate = (known after apply)
+ client_key = (sensitive value)
+ cluster_ca_certificate = (sensitive value)
+ cluster_endpoint = (sensitive value)
───────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to
take exactly these actions if you run "terraform apply" now.
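As the note in the output suggests, you can save the plan to a file and apply exactly that plan later (tfplan is an arbitrary file name used here for illustration):

```shell
# Save the execution plan to a file
terraform plan -out=tfplan

# Apply exactly the saved plan, without re-planning
terraform apply tfplan
```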

Run Terraform to Create a GKE Cluster

terraform apply
kubernetes_cluster_role_binding.user: Creation complete after 1s [id=admin-user]
google_container_node_pool.node_pool: Still creating... [10s elapsed]
google_container_node_pool.node_pool: Still creating... [20s elapsed]
google_container_node_pool.node_pool: Still creating... [30s elapsed]
google_container_node_pool.node_pool: Still creating... [40s elapsed]
google_container_node_pool.node_pool: Still creating... [50s elapsed]
google_container_node_pool.node_pool: Still creating... [1m0s elapsed]
google_container_node_pool.node_pool: Still creating... [1m10s elapsed]
google_container_node_pool.node_pool: Still creating... [1m20s elapsed]
google_container_node_pool.node_pool: Still creating... [1m30s elapsed]
google_container_node_pool.node_pool: Still creating... [1m40s elapsed]
google_container_node_pool.node_pool: Creation complete after 1m46s [id=projects/gcp-doc-test-326911/locations/europe-west1/clusters/rasa-x-test-cluster/nodePools/main-pool]
null_resource.configure_kubectl: Creating...
null_resource.configure_kubectl: Provisioning with 'local-exec'...
null_resource.configure_kubectl (local-exec): Executing: ["/bin/sh" "-c" "gcloud beta container clusters get-credentials rasa-x-test-cluster --region europe-west1 --project gcp-doc-test-326911"]
null_resource.configure_kubectl (local-exec): Fetching cluster endpoint and auth data.
null_resource.configure_kubectl (local-exec): kubeconfig entry generated for rasa-x-test-cluster.
null_resource.configure_kubectl: Creation complete after 2s [id=2543131231661204322]
helm_release.nginx: Creating...
helm_release.nginx: Still creating... [10s elapsed]
helm_release.nginx: Still creating... [20s elapsed]
helm_release.nginx: Still creating... [30s elapsed]
helm_release.nginx: Still creating... [40s elapsed]
helm_release.nginx: Still creating... [50s elapsed]
helm_release.nginx: Creation complete after 1m0s [id=nginx]

At the end of the terraform apply run, you should have a working GKE cluster and a configured kubectl context. Let's verify that in the next step!
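A quick way to verify both the cluster and the kubectl context that the Terraform run configured (the exact output depends on your cluster):

```shell
# Show the context configured by the null_resource.configure_kubectl step
kubectl config current-context

# List the nodes of the new GKE cluster
kubectl get nodes
```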

Deploying NGINX Ingress Controller

We use the NGINX Ingress Controller and deploy it via Helm.

Run the following commands:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace

After a few minutes, you should see the public LoadBalancer IP of the ingress via:

kubectl -n ingress-nginx get svc ingress-nginx-controller

The example output shows a service with an external IP address that can be used to access the cluster:

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.104.0.45 <EXTERNAL_IP> 80:30439/TCP,443:30425/TCP 30m
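Once the EXTERNAL-IP is assigned, you can check that the controller answers HTTP requests (replace <EXTERNAL_IP> with the address from the output above; a 404 from NGINX is expected at this point, since no ingress rules exist yet):

```shell
curl -i http://<EXTERNAL_IP>/
```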

As a next step, set up Rasa X via the Helm chart.

You can find more information on the NGINX Ingress website or in the GKE Ingress Community Guide.

Next Steps