Categories
General Software & DevOps Tools & HowTo

Continuing the Journey: Advanced Kubernetes Deployment with Terraform and AWS EKS

Welcome Back!

Welcome back, tech enthusiasts! In our previous posts, we explored the foundational steps and advanced techniques for deploying Kubernetes clusters on AWS EKS using Terraform. If you missed those, check out our introductory guide here and our advanced techniques here. Today, we’re diving deeper into ensuring our EKS deployment is as robust and scalable as possible. Buckle up—this is going to be informative and fun!

Prerequisites

Before we dive into the meat of the topic, make sure you have the following in place (a quick way to verify them is shown right after the list):

  • Terraform installed on your machine
  • AWS CLI configured with the appropriate permissions
  • kubectl set up to interact with your EKS cluster
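
A quick sanity check, assuming your cluster already exists (the region and cluster name below are placeholders for your own), is to run:

terraform version
aws sts get-caller-identity
aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
kubectl get nodes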

Advanced Cluster Configuration with Terraform

Enhancing Cluster Autoscaling

If your workloads demand dynamic scaling, the Kubernetes Cluster Autoscaler is your best friend. Let's set it up using Terraform, leveraging the latest configuration available directly from the Kubernetes autoscaler GitHub repository.

Edit your Terraform script to include the autoscaler configuration:

resource "kubernetes_deployment" "cluster_autoscaler" {
  metadata {
    name = "cluster-autoscaler"
    namespace = "kube-system"
    labels = {
      "k8s-addon" = "cluster-autoscaler.addons.k8s.io"
      "k8s-app"   = "cluster-autoscaler"
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        "app" = "cluster-autoscaler"
      }
    }
    template {
      metadata {
        labels = {
          "app" = "cluster-autoscaler"
        }
      }
      spec {
        container {
          name  = "cluster-autoscaler"
          image = "k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0"
          command = [
            "./cluster-autoscaler",
            "--v=4",
            "--stderrthreshold=info",
            "--cloud-provider=aws",
            "--skip-nodes-with-local-storage=false",
            "--expander=least-waste",
            "--nodes=1:10:${aws_eks_node_group.example.name}"
          ]

          env {
            name  = "AWS_REGION"
            value = "us-west-2"
          }

          volume_mount {
            name       = "ssl-certs"
            mount_path = "/etc/ssl/certs/ca-certificates.crt"
            read_only  = true
          }
        }

        volume {
          name = "ssl-certs"
          host_path {
            path = "/etc/ssl/certs/ca-certificates.crt"
          }
        }

        service_account_name = kubernetes_service_account.cluster_autoscaler.metadata[0].name
      }
    }
  }
}
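
Note that the deployment references a kubernetes_service_account.cluster_autoscaler resource, which is not created for you. A minimal sketch is shown below; the aws_iam_role.cluster_autoscaler reference is an assumption standing in for whichever IAM role (exposed to the pod through IRSA) you have granted the Auto Scaling permissions the autoscaler needs:

resource "kubernetes_service_account" "cluster_autoscaler" {
  metadata {
    name      = "cluster-autoscaler"
    namespace = "kube-system"
    labels = {
      "k8s-addon" = "cluster-autoscaler.addons.k8s.io"
      "k8s-app"   = "cluster-autoscaler"
    }
    annotations = {
      # Assumed IAM role created elsewhere in your configuration (IRSA).
      "eks.amazonaws.com/role-arn" = aws_iam_role.cluster_autoscaler.arn
    }
  }
}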

Make sure to review and apply the changes:

terraform init
terraform plan
terraform apply

Setting Up AWS Load Balancer Controller

The AWS Load Balancer Controller simplifies the process of provisioning and managing Elastic Load Balancers for Kubernetes applications. To automate this with Terraform, tweak your configurations as shown below:

resource "kubernetes_deployment" "aws_load_balancer_controller" {
  metadata {
    name = "aws-load-balancer-controller"
    namespace = "kube-system"
    labels = {
      "app.kubernetes.io/name" = "aws-load-balancer-controller"
    }
  }

  spec {
    replicas = 1
    selector {
      match_labels = {
        "app.kubernetes.io/name" = "aws-load-balancer-controller"
      }
    }

    template {
      metadata {
        labels = {
          "app.kubernetes.io/name" = "aws-load-balancer-controller"
        }
      }
      spec {
        service_account_name = kubernetes_service_account.aws_load_balancer_controller.metadata[0].name
        container {
          name  = "aws-load-balancer-controller"
          image = "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.1.3"
          args  = [
            "--cluster-name=${aws_eks_cluster.eks.cluster_name}",
            "--region=${var.region}",
            "--v=2"
          ]
        }
      }
    }
  }
}
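
As with the autoscaler, the kubernetes_service_account.aws_load_balancer_controller reference assumes a service account defined analogously and annotated with an IAM role carrying the controller's IAM policy. Once the controller is running, you can exercise it with an Ingress that requests an internet-facing ALB. The sketch below is illustrative only; the namespace, service name, and port are placeholders for your own application:

resource "kubernetes_ingress_v1" "example" {
  metadata {
    name      = "example-ingress"
    namespace = "default"
    annotations = {
      "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
      "alb.ingress.kubernetes.io/target-type" = "ip"
    }
  }

  spec {
    ingress_class_name = "alb"
    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              # Placeholder: an existing Service fronting your application.
              name = "example-service"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}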

Verify the Deployment

Finally, verify that your new configurations are up and running:

kubectl get deployment -n kube-system cluster-autoscaler
kubectl get deployment -n kube-system aws-load-balancer-controller
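
If either deployment reports zero ready replicas, the pod logs are usually the fastest way to find out why (a missing IAM permission, a wrong cluster name, an unresolvable Auto Scaling group, and so on):

kubectl logs -n kube-system deployment/cluster-autoscaler
kubectl logs -n kube-system deployment/aws-load-balancer-controller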

If everything shows up correctly, congrats! You’ve successfully enhanced your Kubernetes deployment on AWS EKS. For further learning, check out the official AWS EKS documentation.

What’s Next?

Stay tuned for more guides and tips! Our next post will explore monitoring and logging solutions to better manage your EKS cluster. Until then, keep experimenting, keep learning, and most importantly, have fun!
