AWS Kubernetes Cluster

Create a cluster in AWS and install the distribution on top of it

This Quick Start walks you through the process of creating a Kubernetes cluster on AWS.

Prerequisites

To follow this Quick Start guide, you must have an AWS account and:

  • Access key and Secret key: exposed as the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
  • Your region: exposed as the environment variable AWS_DEFAULT_REGION. Only EU regions are currently supported.
  • A VPC with two subnets: a public and a private one. Optionally, you can use two public subnets. Examples: subnet-123123123 and subnet-321321321.
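Since only EU regions are supported, a small guard like the following (a hypothetical helper, not part of the official guide) can catch a misconfigured region before any resources are created:

```shell
# Hypothetical guard: return 0 only if the region looks like an EU region
# (all EU region names start with the "eu-" prefix).
is_eu_region() {
  case "$1" in
    eu-*) return 0 ;;
    *)    return 1 ;;
  esac
}

# Example usage:
# is_eu_region "${AWS_DEFAULT_REGION:-}" || echo "unsupported region" >&2
```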

And the following software must be installed on your computer:

  • terraform: required to create the AWS instances. Version > 0.12

Check your requirements before starting:

$ env | grep AWS
AWS_DEFAULT_REGION=eu-west-1
AWS_SECRET_ACCESS_KEY=Tn6+EXAMPLE
AWS_ACCESS_KEY_ID=AKIAEXAMPLE
$ terraform version
Terraform v0.12.23
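If you want to script this check, a small helper can parse the first line of the `terraform version` output and verify it is at least 0.12. The version string format ("Terraform vX.Y.Z") is an assumption based on the output shown above:

```shell
# Sketch of a version guard, assuming output like "Terraform v0.12.23".
check_tf_version() {
  ver="${1#Terraform v}"   # strip the prefix -> 0.12.23
  major="${ver%%.*}"       # -> 0
  rest="${ver#*.}"
  minor="${rest%%.*}"      # -> 12
  # Accept any 1.x+ release, or a 0.x release with minor >= 12.
  [ "$major" -gt 0 ] || [ "$minor" -ge 12 ]
}

# Example usage (requires terraform on your PATH):
# check_tf_version "$(terraform version | head -n1)" || echo "terraform >= 0.12 required" >&2
```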

If you are not able to meet these system requirements, you can try the local alternative.

Hands on

Let's define some useful variables:

$ export CLUSTER_DIR="/tmp/sighup.io/cluster"
$ export CLUSTER_NAME="kfd-quick-start"
$ export CLUSTER_VERSION="1.16.9"
$ export PUBLIC_SUBNET_ID="subnet-123123123"
$ export PRIVATE_SUBNET_ID="subnet-321321321"
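All five variables are expanded into the Terraform file in the next step, so it is worth failing fast if one is missing. The following helper is a hypothetical addition, not part of the official guide:

```shell
# Hypothetical helper: report any variable in the list that is unset or
# empty, and return non-zero if at least one is missing.
require_vars() {
  missing=0
  for var in "$@"; do
    eval "value=\${$var:-}"
    if [ -z "$value" ]; then
      echo "missing required variable: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example usage:
# require_vars CLUSTER_DIR CLUSTER_NAME CLUSTER_VERSION PUBLIC_SUBNET_ID PRIVATE_SUBNET_ID || exit 1
```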

Create a local directory where to start:

$ rm -rf ${CLUSTER_DIR}
$ mkdir -p ${CLUSTER_DIR}
$ cd ${CLUSTER_DIR}

Then create the main.tf file by running the following command:

$ cat <<EOF >${CLUSTER_DIR}/main.tf
data "aws_region" "current" {}

module "fury" {
  source = "git::git@github.com:sighupio/k8s-conformance-environment.git//modules/aws-k8s-conformance?ref=v1.0.0"

  region = data.aws_region.current.name

  cluster_name    = "${CLUSTER_NAME}"
  cluster_version = "${CLUSTER_VERSION}"

  public_subnet_id  = "${PUBLIC_SUBNET_ID}"
  private_subnet_id = "${PRIVATE_SUBNET_ID}"
  pod_network_cidr  = "172.16.0.0/16" # Fury's CNI (calico) is preconfigured to use this CIDR

}

output "tls_private_key" {
  sensitive   = true
  description = "Private RSA Key to log into the control plane node"
  value       = module.fury.tls_private_key
}

output "master_public_ip" {
  description = "Public IP where control plane is exposed"
  value       = module.fury.master_public_ip
}
EOF
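Note that the here-document delimiter (EOF) is unquoted, so the shell expands ${CLUSTER_NAME}, ${CLUSTER_VERSION} and the subnet variables before writing the file: main.tf contains the literal values, not the variable references. A quick sanity check (the helper name is a hypothetical addition) confirms the rendering:

```shell
# Sketch of a sanity check: assert that a rendered file contains the
# literal value we expect the shell to have expanded into it.
rendered_contains() {
  # $1 = file, $2 = expected literal string
  grep -q -- "$2" "$1"
}

# Example usage:
# rendered_contains "${CLUSTER_DIR}/main.tf" "cluster_name    = \"${CLUSTER_NAME}\"" \
#   && echo "main.tf rendered correctly"
```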

Now it's time to initialize this Terraform project:

$ terraform init
Initializing modules...
Downloading git::git@github.com:sighupio/k8s-conformance-environment.git?ref=v1.0.0 for fury...
- fury in .terraform/modules/fury/modules/aws-k8s-conformance

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.53.0...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
- Downloading plugin for provider "random" (hashicorp/random) 2.2.1...
- Downloading plugin for provider "tls" (hashicorp/tls) 2.1.1...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Then let's plan the AWS cluster creation:

$ terraform plan
.....
....
... # Truncated Output
..
.
Plan: 15 to add, 0 to change, 0 to destroy.
.
..
... # Truncated Output
....
.....

After that, we are ready to deploy the cluster:

$ terraform apply --auto-approve
module.fury.data.aws_subnet.private: Refreshing state...
data.aws_region.current: Refreshing state...
module.fury.data.aws_subnet.public: Refreshing state...
module.fury.data.aws_vpc.vpc: Refreshing state...
module.fury.random_string.second_part: Creating...
module.fury.random_string.firts_part: Creating...
module.fury.random_string.second_part: Creation complete after 0s [id=ecq450y3he720p9o]
module.fury.random_string.firts_part: Creation complete after 0s [id=b3xr7q]
module.fury.tls_private_key.master: Creating...
module.fury.tls_private_key.master: Creation complete after 1s [id=c1b94be78fd8cac0915e333113e85340ca0ee2ec]
module.fury.aws_security_group.worker: Creating...
module.fury.aws_eip.master: Creating...
module.fury.aws_security_group.master: Creating...
module.fury.aws_eip.master: Creation complete after 1s [id=eipalloc-0a3fdd7c87e47879e]
module.fury.data.template_file.init_master: Refreshing state...
module.fury.aws_security_group.master: Creation complete after 3s [id=sg-0b17b72ec09000071]
module.fury.aws_security_group.worker: Creation complete after 3s [id=sg-0051ea78393b42527]
module.fury.aws_security_group_rule.worker_ingress: Creating...
module.fury.aws_security_group_rule.worker_ingress_self: Creating...
module.fury.aws_security_group_rule.master_ingress: Creating...
module.fury.aws_security_group_rule.master_egress: Creating...
module.fury.aws_security_group_rule.worker_egress: Creating...
module.fury.aws_spot_instance_request.master: Creating...
module.fury.aws_security_group_rule.master_egress: Creation complete after 2s [id=sgrule-3335059295]
module.fury.aws_security_group_rule.master_ingress: Creation complete after 3s [id=sgrule-2031134662]
module.fury.aws_security_group_rule.worker_ingress: Creation complete after 4s [id=sgrule-689560994]
module.fury.aws_security_group_rule.worker_egress: Creation complete after 5s [id=sgrule-2319715890]
module.fury.aws_security_group_rule.worker_ingress_self: Creation complete after 6s [id=sgrule-670809489]
module.fury.aws_spot_instance_request.master: Still creating... [10s elapsed]
module.fury.aws_spot_instance_request.master: Creation complete after 16s [id=sir-djwr6skq]
module.fury.data.template_file.init_worker: Refreshing state...
module.fury.aws_eip_association.eip_assoc: Creating...
module.fury.aws_spot_instance_request.worker[0]: Creating...
module.fury.aws_spot_instance_request.worker[1]: Creating...
module.fury.aws_eip_association.eip_assoc: Creation complete after 1s [id=eipassoc-0075f7ae83ae06f8d]
module.fury.aws_spot_instance_request.worker[0]: Still creating... [10s elapsed]
module.fury.aws_spot_instance_request.worker[1]: Still creating... [10s elapsed]
module.fury.aws_spot_instance_request.worker[0]: Creation complete after 14s [id=sir-jj9i5mjm]
module.fury.aws_spot_instance_request.worker[1]: Creation complete after 14s [id=sir-dmwg44qm]

Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

Outputs:

master_public_ip = 34.251.225.130
tls_private_key = <sensitive>

Note: you will get a different master_public_ip

Now, save the private key and SSH into the master node:

$ terraform output tls_private_key > cluster.key && chmod 400 cluster.key && ssh -i cluster.key ${CLUSTER_NAME}@34.251.225.130
Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-1058-aws x86_64)
kfd-quick-start@ip-10-0-1-44:~$

Then, cloud-init takes care of bootstrapping the nodes and installing Kubernetes; this can take 3 to 5 minutes. After the nodes have been bootstrapped, execute the following on the master node:

kfd-quick-start@ip-10-0-1-44:~$ kubectl get nodes
NAME            STATUS     ROLES    AGE   VERSION
ip-10-0-1-124   NotReady   <none>   58s   v1.16.4
ip-10-0-1-158   NotReady   <none>   58s   v1.16.4
ip-10-0-1-44    NotReady   master   79s   v1.16.4

Example output: IPs and/or region may differ

The cluster should be composed of three nodes in NotReady status. Don't worry: this is the expected output. The nodes will become Ready once the distribution is deployed on top of the cluster.
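If the worker nodes have not appeared yet, cloud-init may still be running. A small hypothetical helper (parsing `kubectl get nodes` output like the one shown above) can be used in a wait loop on the master node:

```shell
# Hypothetical helper: count the nodes listed by `kubectl get nodes`,
# skipping the header line. Reads the command output on stdin.
count_nodes() {
  tail -n +2 | wc -l | tr -d ' '
}

# Example wait loop (run on the master node) until all 3 nodes have joined:
# until [ "$(kubectl get nodes | count_nodes)" -eq 3 ]; do sleep 10; done
```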

Now that you have a Kubernetes cluster, you can deploy the Kubernetes Fury Distribution.

Tear down

After finishing this Quick Start guide, don't forget to destroy the infrastructure:

$ terraform destroy --auto-approve
module.fury.random_string.second_part: Refreshing state... [id=ecq450y3he720p9o]
module.fury.random_string.firts_part: Refreshing state... [id=b3xr7q]
module.fury.tls_private_key.master: Refreshing state... [id=c1b94be78fd8cac0915e333113e85340ca0ee2ec]
module.fury.aws_eip.master: Refreshing state... [id=eipalloc-0a3fdd7c87e47879e]
module.fury.data.aws_subnet.private: Refreshing state...
module.fury.data.aws_subnet.public: Refreshing state...
data.aws_region.current: Refreshing state...
module.fury.data.aws_vpc.vpc: Refreshing state...
module.fury.data.template_file.init_master: Refreshing state...
module.fury.aws_security_group.master: Refreshing state... [id=sg-0b17b72ec09000071]
module.fury.aws_security_group.worker: Refreshing state... [id=sg-0051ea78393b42527]
module.fury.aws_security_group_rule.master_ingress: Refreshing state... [id=sgrule-2031134662]
module.fury.aws_security_group_rule.master_egress: Refreshing state... [id=sgrule-3335059295]
module.fury.aws_spot_instance_request.master: Refreshing state... [id=sir-djwr6skq]
module.fury.aws_security_group_rule.worker_egress: Refreshing state... [id=sgrule-2319715890]
module.fury.aws_security_group_rule.worker_ingress_self: Refreshing state... [id=sgrule-670809489]
module.fury.aws_security_group_rule.worker_ingress: Refreshing state... [id=sgrule-689560994]
module.fury.aws_eip_association.eip_assoc: Refreshing state... [id=eipassoc-0075f7ae83ae06f8d]
module.fury.data.template_file.init_worker: Refreshing state...
module.fury.aws_spot_instance_request.worker[1]: Refreshing state... [id=sir-dmwg44qm]
module.fury.aws_spot_instance_request.worker[0]: Refreshing state... [id=sir-jj9i5mjm]
module.fury.aws_security_group_rule.master_ingress: Destroying... [id=sgrule-2031134662]
module.fury.aws_spot_instance_request.worker[0]: Destroying... [id=sir-jj9i5mjm]
module.fury.aws_security_group_rule.worker_egress: Destroying... [id=sgrule-2319715890]
module.fury.aws_security_group_rule.worker_ingress: Destroying... [id=sgrule-689560994]
module.fury.aws_eip_association.eip_assoc: Destroying... [id=eipassoc-0075f7ae83ae06f8d]
module.fury.aws_security_group_rule.master_egress: Destroying... [id=sgrule-3335059295]
module.fury.aws_security_group_rule.worker_ingress_self: Destroying... [id=sgrule-670809489]
module.fury.aws_spot_instance_request.worker[1]: Destroying... [id=sir-dmwg44qm]
module.fury.aws_eip_association.eip_assoc: Destruction complete after 2s
module.fury.aws_security_group_rule.master_ingress: Destruction complete after 2s
module.fury.aws_security_group_rule.worker_ingress: Destruction complete after 2s
module.fury.aws_security_group_rule.master_egress: Destruction complete after 4s
module.fury.aws_security_group_rule.worker_egress: Destruction complete after 4s
module.fury.aws_security_group_rule.worker_ingress_self: Destruction complete after 5s
module.fury.aws_spot_instance_request.worker[0]: Still destroying... [id=sir-jj9i5mjm, 10s elapsed]
module.fury.aws_spot_instance_request.worker[1]: Still destroying... [id=sir-dmwg44qm, 10s elapsed]
module.fury.aws_spot_instance_request.worker[0]: Still destroying... [id=sir-jj9i5mjm, 20s elapsed]
module.fury.aws_spot_instance_request.worker[1]: Still destroying... [id=sir-dmwg44qm, 20s elapsed]
module.fury.aws_spot_instance_request.worker[1]: Still destroying... [id=sir-dmwg44qm, 30s elapsed]
module.fury.aws_spot_instance_request.worker[0]: Still destroying... [id=sir-jj9i5mjm, 30s elapsed]
module.fury.aws_spot_instance_request.worker[1]: Still destroying... [id=sir-dmwg44qm, 40s elapsed]
module.fury.aws_spot_instance_request.worker[0]: Still destroying... [id=sir-jj9i5mjm, 40s elapsed]
module.fury.aws_spot_instance_request.worker[1]: Destruction complete after 45s
module.fury.aws_spot_instance_request.worker[0]: Still destroying... [id=sir-jj9i5mjm, 50s elapsed]
module.fury.aws_spot_instance_request.worker[0]: Destruction complete after 57s
module.fury.aws_security_group.worker: Destroying... [id=sg-0051ea78393b42527]
module.fury.aws_spot_instance_request.master: Destroying... [id=sir-djwr6skq]
module.fury.aws_security_group.worker: Destruction complete after 1s
module.fury.aws_spot_instance_request.master: Still destroying... [id=sir-djwr6skq, 10s elapsed]
module.fury.aws_spot_instance_request.master: Still destroying... [id=sir-djwr6skq, 20s elapsed]
module.fury.aws_spot_instance_request.master: Still destroying... [id=sir-djwr6skq, 30s elapsed]
module.fury.aws_spot_instance_request.master: Still destroying... [id=sir-djwr6skq, 40s elapsed]
module.fury.aws_spot_instance_request.master: Still destroying... [id=sir-djwr6skq, 50s elapsed]
module.fury.aws_spot_instance_request.master: Still destroying... [id=sir-djwr6skq, 1m0s elapsed]
module.fury.aws_spot_instance_request.master: Destruction complete after 1m3s
module.fury.aws_security_group.master: Destroying... [id=sg-0b17b72ec09000071]
module.fury.aws_eip.master: Destroying... [id=eipalloc-0a3fdd7c87e47879e]
module.fury.random_string.second_part: Destroying... [id=ecq450y3he720p9o]
module.fury.random_string.firts_part: Destroying... [id=b3xr7q]
module.fury.tls_private_key.master: Destroying... [id=c1b94be78fd8cac0915e333113e85340ca0ee2ec]
module.fury.tls_private_key.master: Destruction complete after 0s
module.fury.random_string.second_part: Destruction complete after 0s
module.fury.random_string.firts_part: Destruction complete after 0s
module.fury.aws_security_group.master: Destruction complete after 2s
module.fury.aws_eip.master: Destruction complete after 2s

Destroy complete! Resources: 15 destroyed.

Last modified 15.05.2020: Update kfd 1.0 references (cec8321)