EKS Cluster

furyctl eks cluster provisioner


This provisioner deploys an EKS Kubernetes cluster. It enables the creation and management of all the Kubernetes infrastructural components from a simple YAML file. It provides some nice features:

  • Private EKS Kubernetes Control Plane.
    • Requires connectivity from the machine running furyctl to the VPC/subnets where the cluster will be placed.
  • Sets an operator SSH key to enable troubleshooting of issues on the cluster nodes.
  • Configures multiple node pools
    • Taints
    • Labels
    • Tags
    • Lift and Shift updates
    • Many more…
  • Seamless cluster (control plane) updates.
  • Many more…

Configuration

kind: Cluster
metadata:
  name: nameOfTheProject # Required. Used to identify your resources
spec:
  version: '1.18' # Required. EKS Control plane version
  network: vpc-id1 # Required. Identifier of the VPC
  subnetworks:
  - subnet-id1 # Required. Identifiers of the subnets
  dmzCIDRRange:
  - 10.0.0.0/8 # Required. Network CIDR ranges from where the cluster control plane will be accessible
  sshPublicKey: ssh-rsa 190jd0132w # Required. Cluster administrator public SSH key. Used to access cluster nodes
  nodePools:
  - name: my-node-pool # Required. Name of the node pool
    version: '1.18' # Required. Set to null to use the Control Plane version
    minSize: 0
    maxSize: 1
    instanceType: t3.micro # Required. AWS EC2 instance type
    maxPods: 110
    volumeSize: 50
    labels:
      environment: dev # Node labels. Use them to tag nodes and select them from Kubernetes
    taints:
    - key1=value1:NoSchedule # Node taints, as an example
    subnetworks:
    - subnet-1 # Any subnet id where you want your nodes running
    tags:
      myTag: MyValue # Optional. Use these tags to annotate node pool resources
    additionalFirewallRules:
    - name: The name of the rule # Identify the rule with a description
      direction: ingress|egress # Choose one
      cidrBlock: 0.0.0.0/0 # CIDR block
      protocol: TCP|UDP # Any supported AWS security group protocol
      ports: 80-80 # Port range. This one means from port 80 to port 80
      tags:
        myTag: MyValue # Optional. Use these tags to annotate the rule resources
  tags:
    myTag: MyValue # Optional. Use these tags to annotate all resources
  auth:
    additionalAccounts: # Additional AWS account numbers to add to the aws-auth configmap
    - "777777777777"
    - "88888888888"
    users:
    - username: user1 # Any username
      groups: # Any Kubernetes groups
      - system:masters
      userarn: arn:user:7777777... # The user ARN
    roles:
    - username: user1 # Any username
      groups: # Any Kubernetes groups
      - system:masters
      rolearn: arn:role:7777777... # The role ARN
provisioner: eks

Important notes

To properly deploy the EKS cluster using this provisioner, you first have to solve the connectivity problem.

This provisioner needs to interact with the EKS cluster control plane. Since the control plane exposes only a private endpoint, furyctl must be able to reach it via VPN or by running the CLI from a bastion host inside the cluster VPC.

If you haven't solved this yet, take a look at the aws bootstrap provisioner. It fixes the connectivity issue before deploying the EKS cluster.
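A quick sanity check before running furyctl: once the VPN is up (or when working from a bastion host), verify that private addresses in the cluster VPC are reachable from the machine where you run the CLI. This is only a sketch: the IP below is a placeholder for any known host in the VPC, and the endpoint is the example one used later in this page, which only exists after the first apply.

$ # Placeholder private IP inside the cluster VPC (e.g. a bastion or any known host)
$ ping -c 3 10.0.100.10
$ # After the first apply, the private API endpoint should also answer on port 443
$ nc -vz -w 5 the-control-plane-link.gr7.eu-central-1.eks.amazonaws.com 443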

Resources

This provisioner creates and configures:

  • EKS: EKS Control plane in the private subnets.
    • Worker Nodes: Based on the configuration of the node pools.
      • Key Pair: EC2 key pair associated with the worker nodes.
      • Security Group: Additional security group enabling SSH connections from the DMZ subnet.

Diagram source: https://aws.amazon.com/quickstart/architecture/amazon-eks/

Requirements

This provisioner requires you to have previously configured:

Example execution

In the following lines, you will find an example execution to deploy the cluster infrastructure.

First, ensure you meet the requirements, then create a configuration file with the structure described above.

Start by creating a new directory:

$ mkdir demo
$ cd demo

Take this cluster.yml file as an example. We recommend using a non-default backend configuration:

kind: Cluster
metadata:
  name: demo
provisioner: eks
spec:
  version: "1.18"
  network: "vpc-ciao"
  subnetworks: 
    - subnet-1
    - subnet-2
    - subnet-3
  dmzCIDRRange: "10.0.100.0/22"
  sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCefFo9ASM8grncpLpJr+DAeGzTtoIaxnqSqrPeSWlCyManFz5M/DDkbnql8PdrENFU28blZyIxu93d5U0RhXZumXk1utpe0L/9UtImnOGG6/dKv9fV9vcJH45XdD3rCV21ZMG1nuhxlN0DftcuUubt/VcHXflBGaLrs18DrMuHVIbyb5WO4wQ9Od/SoJZyR6CZmIEqag6ADx4aFcdsUwK1Cpc51LhPbkdXGGjipiwP45q0I6/Brjxv/Kia1e+RmIRHiltsVBdKKTL9hqu9esbAod9I5BkBtbB5bmhQUVFZehi+d/opPvsIszE/coW5r/g/EVf9zZswebFPcsNr85+x"
  tags: {}
  nodePools:
    - name: "t3instances"
      version: "1.18"
      minSize: 1
      maxSize: 1
      instanceType: "t3.large"
      maxPods: 100
      volumeSize: 50
      labels:
        kind: general
        env: dev
      taints: []
      tags: {}

$ ls
cluster.yml

Init the cluster project

$ furyctl cluster init
ERRO[0000] Directory does not exists. stat ./cluster: no such file or directory 
INFO[0003] [INFO] running Terraform command: cluster/bin/terraform init -no-color -force-copy -input=false -lock-timeout=0s -backend=true -get=true -get-plugins=true -lock=true -upgrade=false -verify-plugins=true -backend-config=configuration/backend.conf 
[EKS] Fury

This provisioner creates a battle-tested AWS EKS Kubernetes Cluster
with a private and production-grade setup.

Requires to connect to a VPN server to deploy the cluster from this computer.
Use a bastion host (inside the EKS VPC) as an alternative method to deploy the cluster.

The provisioner requires the following software installed:
- /bin/sh
- wget or curl
- aws-cli
- kubectl

[FURYCTL]

Init phase completed.

Project directory: ./cluster
Terraform logs: ./cluster/logs/terraform.logs

Everything ready to create the infrastructure; execute:

$ furyctl cluster apply

Once completed, a new cluster directory is available inside the current demo directory.

$ ls
cluster   cluster.yml

It contains the Terraform project and all the configuration needed to properly manage the cluster infrastructure.

$ tree cluster
cluster
├── backend.tf
├── bin
│   └── terraform
├── configuration
│   └── backend.conf
├── credentials
├── logs
│   └── terraform.logs
├── main.tf
├── output
├── output.tf
└── variables.tf

5 directories, 7 files

The init phase did not create any infrastructure component; continue the example to deploy it.
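The init phase passes -backend-config=configuration/backend.conf to Terraform, so that is the file to fill in if you followed the recommendation to use a non-default backend. A minimal sketch, assuming an S3 backend is declared in backend.tf; the bucket, key and region are placeholder values to adapt to your environment:

# configuration/backend.conf (example values)
bucket = "my-terraform-states"
key    = "demo/eks-cluster.tfstate"
region = "eu-central-1"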

Deploy the cluster project

NOTE: It can take up to 20 minutes.

$ furyctl cluster apply
ERRO[0000] Directory already exists                     
WARN[0000] error while initializing project subdirectories: Directory already exists 
INFO[0004] [INFO] running Terraform command: cluster/bin/terraform init -no-color -force-copy -input=false -lock-timeout=0s -backend=true -get=true -get-plugins=true -lock=true -upgrade=false -verify-plugins=true -backend-config=configuration/backend.conf 
INFO[0008] Updating EKS project                         
INFO[0008] [INFO] running Terraform command: cluster/bin/terraform fmt -no-color -write=true -list=false -diff=false ./cluster/eks.tfvars 
⢿ Applying terraform project INFO[0008] [INFO] running Terraform command: cluster/bin/terraform apply -no-color -auto-approve -input=false -var-file=./cluster/eks.tfvars -lock=true -parallelism=10 -refresh=true 
⣟ Applying terraform project INFO[1055] EKS Updated                                  
INFO[1055] Gathering output file as json                
INFO[1055] [INFO] running Terraform command: cluster/bin/terraform output -no-color -json 
INFO[1057] Gathering output file as json                
INFO[1057] [INFO] running Terraform command: cluster/bin/terraform output -no-color -json 
INFO[1060] [INFO] running Terraform command: cluster/bin/terraform output -no-color -json 
[EKS] Fury

All the cluster components are up to date.
EKS Kubernetes cluster ready.

EKS Cluster Endpoint: https://the-control-plane-link.gr7.eu-central-1.eks.amazonaws.com
SSH Operator Name: ec2-user

Use the ssh ec2-user username to access the EKS instances with the configured SSH key.
Discover the instances by running

$ kubectl get nodes

Then access by running:

$ ssh ec2-user@node-name-reported-by-kubectl-get-nodes


[FURYCTL]
Update phase completed. The Kubernetes Cluster is up to date.

Project directory: ./cluster
Terraform logs: ./cluster/logs/terraform.logs
Output file: ./cluster/output/output.json
Kubernetes configuration file: ./cluster/credentials/kubeconfig

Use it by running:
$ export KUBECONFIG=./cluster/credentials/kubeconfig
$ kubectl get nodes

Everything is up to date.
Ready to apply or destroy the infrastructure; execute:

$ furyctl cluster apply
or
$ furyctl cluster destroy

Once completed, everything is ready to start using the EKS cluster along with the other cluster components. The output message contains enough information to start using the new infrastructure:

EKS Cluster Endpoint: https://the-control-plane-link.gr7.eu-central-1.eks.amazonaws.com
SSH Operator Name: ec2-user
$ export KUBECONFIG=./cluster/credentials/kubeconfig
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-10-0-2-116.eu-central-1.compute.internal   Ready    <none>   3m26s   v1.18.9-eks-d1db3c
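As an alternative to the kubeconfig generated by furyctl, you can build one with the AWS CLI. This is only a sketch: check the exact cluster name with aws eks list-clusters first, since it may differ from the metadata.name used in cluster.yml, and adjust the region to your environment.

$ # List clusters to confirm the real EKS cluster name (assumed here to be "demo")
$ aws eks list-clusters --region eu-central-1
$ aws eks update-kubeconfig --name demo --region eu-central-1
$ kubectl get nodes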

Modify the configuration

As an example modification of the stack, increase the minimum and maximum number of nodes in the t3instances node pool from 1 to 2. Modify the cluster.yml file with the new number of nodes.

kind: Cluster
metadata:
  name: demo
provisioner: eks
spec:
  version: "1.18"
  network: "vpc-ciao"
  subnetworks: 
    - subnet-1
    - subnet-2
    - subnet-3
  dmzCIDRRange: "10.0.100.0/22"
  sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCefFo9ASM8grncpLpJr+DAeGzTtoIaxnqSqrPeSWlCyManFz5M/DDkbnql8PdrENFU28blZyIxu93d5U0RhXZumXk1utpe0L/9UtImnOGG6/dKv9fV9vcJH45XdD3rCV21ZMG1nuhxlN0DftcuUubt/VcHXflBGaLrs18DrMuHVIbyb5WO4wQ9Od/SoJZyR6CZmIEqag6ADx4aFcdsUwK1Cpc51LhPbkdXGGjipiwP45q0I6/Brjxv/Kia1e+RmIRHiltsVBdKKTL9hqu9esbAod9I5BkBtbB5bmhQUVFZehi+d/opPvsIszE/coW5r/g/EVf9zZswebFPcsNr85+x"
  tags: {}
  nodePools:
    - name: "t3instances"
      version: "1.18"
      minSize: 2 # Changed
      maxSize: 2 # Changed
      instanceType: "t3.large"
      maxPods: 100
      volumeSize: 50
      labels:
        kind: general
        env: dev
      taints: []
      tags: {}

Then run the furyctl cluster apply command again.

$ furyctl cluster apply

The new node can take up to five minutes to appear in the cluster.
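To follow the new node joining the cluster, you can watch the node list; this is plain kubectl usage, not specific to furyctl:

$ kubectl get nodes --watch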

Destroy the cluster project

If you need to destroy the cluster project, first ensure there is nothing blocking the destruction of this infrastructure.
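A common blocker, based on general EKS/Terraform experience rather than anything specific to this provisioner, is AWS resources created from inside the cluster: load balancers provisioned by Kubernetes Services or EBS volumes backing PersistentVolumes. A quick check before destroying:

$ # Services of type LoadBalancer keep AWS load balancers alive outside Terraform's control
$ kubectl get services --all-namespaces | grep LoadBalancer
$ # PersistentVolumes may keep EBS volumes around
$ kubectl get pv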

Then run:

$ furyctl cluster destroy
ERRO[0000] Directory already exists                     
WARN[0000] error while initializing project subdirectories: Directory already exists 
INFO[0003] [INFO] running Terraform command: cluster/bin/terraform init -no-color -force-copy -input=false -lock-timeout=0s -backend=true -get=true -get-plugins=true -lock=true -upgrade=false -verify-plugins=true -backend-config=configuration/backend.conf 
INFO[0007] Destroying EKS project                       
INFO[0007] [INFO] running Terraform command: cluster/bin/terraform fmt -no-color -write=true -list=false -diff=false ./cluster/eks.tfvars 
INFO[0007] [INFO] running Terraform command: cluster/bin/terraform destroy -no-color -auto-approve -input=false -lock-timeout=0s -var-file=./cluster/eks.tfvars -lock=true -parallelism=10 -refresh=true 
⢿ Destroying terraform project INFO[0269] EKS destroyed                                
[EKS] Fury
All cluster components were destroyed.
EKS control plane and workers went away.

Had problems, contact us at sales@sighup.io.

[FURYCTL]
Destroy phase completed.

Project directory: ./cluster
Terraform logs: ./cluster/logs/terraform.logs