Deploying Helm Charts into K8S with Terraform

Nicholas Lu
4 min read · Dec 7, 2020

“Look mom, no hands on the Console”: deploying Helm charts into Kubernetes has never been easier with Terraform.

I started my DevOps journey with Terraform not long ago, and there are many ways to deploy into a Kubernetes cluster. The abundance of deployment guides and patterns makes it easy for developers to get their code into the cluster without needing to know the ins and outs of Kubernetes.

There are plenty of deployment modes, such as:
1) Creating a surrogate container to run CI/CD tasks via platforms like GitLab, GitHub Actions, Jenkins, etc.
2) Injecting manifest/deployment files directly from CI tools
3) Helm, a tool that bundles deployment components into a single package (Deployment, Ingress, Service, ConfigMap, etc.)

In this post, I am going to show how to deploy any chart into your cluster via Terraform.

1) Let’s Start with the Cluster Creation

We will do everything from Terraform scripts with no console tinkering, following IaC best practice. We first need to create the VPC, then the EKS cluster, then the node group. I am starting with a basic AWS EKS cluster (public-facing, bare-minimum setup, dual AZ); the code is as follows:

#vpc.tf
resource "aws_vpc" "vpc-eks" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name = var.cluster_name
  }
}
data "aws_vpc" "selected" {
  id = aws_vpc.vpc-eks.id
}
resource "aws_subnet" "public-a" {
  vpc_id                  = aws_vpc.vpc-eks.id
  cidr_block              = "10.0.0.0/20"
  availability_zone       = "ap-southeast-1a"
  map_public_ip_on_launch = true
  tags = {
    Name = "public-a"
  }
}
data "aws_subnet" "public-a" {
  id = aws_subnet.public-a.id
}
resource "aws_subnet" "public-b" {
  vpc_id                  = aws_vpc.vpc-eks.id
  cidr_block              = "10.0.16.0/20"
  availability_zone       = "ap-southeast-1b"
  map_public_ip_on_launch = true
  tags = {
    Name = "public-b"
  }
}
data "aws_subnet" "public-b" {
  id = aws_subnet.public-b.id
}
#assignment of route table
resource "aws_default_route_table" "route-public" {
  default_route_table_id = aws_vpc.vpc-eks.default_route_table_id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
  tags = {
    Name = "rtb-main"
  }
  depends_on = [aws_internet_gateway.igw]
}
resource "aws_internet_gateway" "igw" {
  vpc_id     = aws_vpc.vpc-eks.id
  depends_on = [aws_vpc.vpc-eks]
  tags = {
    Name = "igw-example"
  }
}
resource "aws_route_table_association" "public_subnet-a" {
  subnet_id      = aws_subnet.public-a.id
  route_table_id = aws_default_route_table.route-public.id
}
resource "aws_route_table_association" "public_subnet-b" {
  subnet_id      = aws_subnet.public-b.id
  route_table_id = aws_default_route_table.route-public.id
}
variable "cluster_name" {
  type    = string
  default = "example-eks"
}

Now the second file: the EKS cluster and node group infrastructure.

#main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.6.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example"
    workspaces {
      name = "myworkplace"
    }
  }
}
provider "aws" {
  region = "ap-southeast-1"
}
data "aws_eks_cluster" "cluster" {
  name = module.my-cluster.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
  name = module.my-cluster.cluster_id
}
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}
module "my-cluster" {
  source                    = "terraform-aws-modules/eks/aws"
  cluster_name              = var.cluster_name
  cluster_version           = "1.18"
  subnets                   = [aws_subnet.public-a.id, aws_subnet.public-b.id]
  vpc_id                    = aws_vpc.vpc-eks.id
  cluster_enabled_log_types = ["api", "audit"]
  node_groups = [
    {
      name                 = "nodegroup-a"
      instance_type        = "t3.medium"
      platform             = "linux"
      asg_max_size         = 5
      asg_desired_capacity = 1
      public_ip            = false
      subnets              = [aws_subnet.public-a.id, aws_subnet.public-b.id]
      tags = {
        Name = "nodegroup-a"
      }
    },
  ]
}

Spoiler: the Kubernetes provider block is important, as it supplies the endpoint, token, and certificate used to authenticate with the cluster later in the deployment.

Now run terraform apply and the cluster should be ready within 15 minutes or so. I prefer to run it via Terraform Cloud: it is the best place to preserve your state and to run your plan and apply in one place, where a failed terraform apply can easily be retried.
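Since the Helm deployment will live in a separate workspace (see the next section), it can also help to expose the cluster details as outputs of this infrastructure workspace. A minimal sketch; the output names here are my own choice, not part of the EKS module:

```hcl
#outputs.tf (hypothetical): expose cluster details so a downstream
#workspace can look the cluster up instead of hardcoding its name
output "cluster_name" {
  value = var.cluster_name
}

output "cluster_endpoint" {
  value = data.aws_eks_cluster.cluster.endpoint
}
```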

2) Let’s Get the Chart Deployed

With the cluster ready, we can now get the Helm charts deployed into it. The script is as follows.
Notice: this helm.tf script is supposed to be kept separate from the VPC and EKS Terraform scripts, because we want to decouple deployment from infrastructure; they belong to different stages.
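One way to wire the two decoupled workspaces together, assuming the infrastructure workspace exports a cluster_name output (an assumption, not shown in the original script), is Terraform’s terraform_remote_state data source:

```hcl
#Hypothetical: read outputs from the infrastructure workspace's state;
#requires that workspace to define an output named "cluster_name"
data "terraform_remote_state" "infra" {
  backend = "remote"
  config = {
    organization = "example"
    workspaces = {
      name = "myworkplace"
    }
  }
}

#The EKS cluster data sources could then reference:
#  name = data.terraform_remote_state.infra.outputs.cluster_name
```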

#helm.tf
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example"
    workspaces {
      name = "myworkplace-helm"
    }
  }
}
provider "aws" {
  region = "ap-southeast-1"
}
#these data sources look up the cluster created earlier;
#the name must match the cluster_name used in the infrastructure workspace
data "aws_eks_cluster" "example" {
  name = "example-eks"
}
data "aws_eks_cluster_auth" "example" {
  name = "example-eks"
}
provider "kubernetes" {
  load_config_file       = false
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.example.token
}
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.example.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.example.token
    load_config_file       = false
  }
}
resource "helm_release" "metric_server" {
  name      = "metrics-server"
  chart     = "https://charts.bitnami.com/bitnami/metrics-server-5.2.0.tgz"
  namespace = "kube-system"
  values = [<<EOF
apiService:
  create: true
EOF
  ]
}
resource "helm_release" "example" {
  name  = "redis"
  chart = "https://charts.bitnami.com/bitnami/redis-10.7.16.tgz"
}
resource "helm_release" "spinnaker" {
  name       = "spin"
  repository = "https://helmcharts.opsmx.com"
  chart      = "spinnaker"
  namespace  = "spinnaker"
}

The script passes the cluster credentials from the Kubernetes data sources to the Helm provider (you can opt for a kubeconfig file, though I prefer using the CA certificate and token to authenticate against the EKS cluster).

You should be able to run the script and see the deployments ready shortly; depending on the instance size and the charts you are installing, you might have to wait.

Note: in my testing, Helm charts are best referenced as .tgz files, as they are easier to point at directly. You can still use the chart name and repository path, but that may involve some trial and error.
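For completeness, the repository-based form looks like this (the Spinnaker release above already uses it). This sketch pins the Bitnami Redis chart to the same version as the .tgz URL above:

```hcl
#equivalent to the .tgz reference above, using a repository URL
#plus an explicit chart version instead
resource "helm_release" "redis_from_repo" {
  name       = "redis"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
  version    = "10.7.16"
}
```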

For more information on Helm charts, check out Helm Hub.

Thanks and happy helming with Terraform.

I would love to hear your input on how to make this tutorial series better. Please connect with me on LinkedIn.
