Kubernetes Fury Installers - Public Cloud

Public Cloud Installers

SIGHUP has designed a simple common interface to easily deploy managed clusters on Google Cloud Platform, Amazon Web Services, and Microsoft Azure, offering a consistent feature set:

  • Private control plane: the cluster control plane is not exposed publicly and is accessible only from a user-defined network.
  • Seamless cluster updates: the control plane can be updated as soon as a new release is available.
  • Seamless node pool updates: once the cluster's control plane receives an update, node pools should be updated too. These installers make this process straightforward.
  • Support for multiple node pools: having multiple node pools makes it possible to run different workload types. Each node pool can be configured with different machine types, labels, Kubernetes version…

Architecture

Managed architecture diagram

Common requirements

As these installers create clusters with a private control plane only, the operator responsible for creating the cluster must have connectivity from their machine (a bastion host, a laptop with a configured VPN…) to the network where the cluster will be placed.

The machine used to create the cluster must have the following installed:

  • OS tooling such as git and ssh.
  • terraform version > 0.12.
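
A minimal sketch of a version constraint you can add to your Terraform code to enforce the requirement above (the block itself is illustrative and not part of the installers' interface):

terraform {
  # Matches the "terraform version > 0.12" requirement listed above
  required_version = "> 0.12"
}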

Cloud requirements

The requirements are specific to each cloud provider. Please refer to the documentation section corresponding to your cloud provider to see the details.

Common interface

To start deploying a new cluster, make sure you know the value of all the terraform input parameters:

Inputs

Name | Description | Type | Default | Required
cluster_name | Unique cluster name. Used in multiple resources to identify your cluster resources | string | n/a | yes
cluster_version | Kubernetes cluster version. Look at the cloud provider's documentation to discover available versions. EKS example: 1.16, GKE example: 1.16.8-gke.9 | string | n/a | yes
dmz_cidr_range | Network CIDR range from which the cluster control plane will be accessible | string | n/a | yes
network | Network where the Kubernetes cluster will be hosted | string | n/a | yes
node_pools | A list of objects defining the node pools configuration | list(object) | [] | no
ssh_public_key | Cluster administrator public SSH key. Used to access cluster nodes with the operator_ssh_user | string | n/a | yes
subnetworks | List of subnets where the cluster will be hosted | list | n/a | yes
resource_group_name | Resource group name where every resource will be placed. Required only by the AKS installer (*) | string | n/a | yes*
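
For example, a terraform.tfvars file for the common interface could look like the following sketch. All values are placeholders: the network and subnetwork names depend on your environment, the cluster_version shown is a GKE example, and resource_group_name applies to the AKS installer only.

cluster_name    = "fury-demo"
cluster_version = "1.16.8-gke.9"                       # GKE example; use an EKS/AKS version with the other installers
dmz_cidr_range  = "10.0.0.0/16"                        # CIDR allowed to reach the control plane
network         = "my-network"                         # placeholder network name
subnetworks     = ["my-subnet-a", "my-subnet-b"]       # placeholder subnet names
ssh_public_key  = "ssh-rsa AAAAB3... operator@example" # cluster administrator public key
node_pools      = []                                   # see the node_pools section below
# resource_group_name = "my-resource-group"            # required only by the AKS installer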

node_pools

The node_pools input parameter requires a list of node_pool objects. Each node_pool object accepts the following attributes:

Name | Description | Type | Default | Required
name | Unique node pool name. Used in multiple resources to identify your node pool resources | string | n/a | yes
version | Kubernetes node version. A null value is allowed and means that the cluster_version value is used | string | n/a | yes
min_size | Minimum number of nodes in the node pool | number | n/a | yes
max_size | Maximum number of nodes in the node pool | number | n/a | yes
instance_type | Name of the instance type to use in the node pool. The value fully depends on the cloud provider | string | n/a | yes
volume_size | Disk size of each instance in the node pool, in gigabytes (GB) | number | n/a | yes
labels | Map of custom labels that every node in the node pool will expose in the cluster. Useful to dedicate nodes to specific workloads | map(string) | n/a | yes
taints | List of taints that mark the nodes so that the scheduler avoids or prevents using them for certain Pods | list(string) | n/a | yes
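
As a reference, a single node_pool object covering all the attributes above could look like the following sketch (the pool name, instance type, labels and taint are illustrative values; instance types are provider-specific):

{
  name : "workers"
  version : null                  # use the cluster_version value
  min_size : 1
  max_size : 3
  instance_type : "n1-standard-2" # provider-specific instance type
  volume_size : 100               # gigabytes
  labels : {
    "sighup.io/role" : "app"
  }
  taints : [
    "sighup.io/role=app:NoSchedule"
  ]
}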

Outputs

Name | Description
cluster_certificate_authority | The base64-encoded certificate data required to communicate with the cluster
cluster_endpoint | The endpoint of your Kubernetes API server
operator_ssh_user | SSH user to access the cluster nodes with the ssh_public_key

These output values are the minimum required to set up a proper kubeconfig file to interact with the cluster. Check the documentation of each installer to better understand how to create the kubeconfig file.
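
As an illustration only (the exact wiring is installer-specific), a kubeconfig could be rendered from these outputs with a sketch like the following, where module.fury stands for a hypothetical invocation of one of the installers:

locals {
  # Hypothetical sketch: render a kubeconfig document from the installer outputs
  kubeconfig = yamlencode({
    apiVersion        = "v1"
    kind              = "Config"
    "current-context" = "fury"
    clusters = [{
      name = "fury"
      cluster = {
        server                       = module.fury.cluster_endpoint
        "certificate-authority-data" = module.fury.cluster_certificate_authority
      }
    }]
    contexts = [{
      name    = "fury"
      context = { cluster = "fury", user = "fury" }
    }]
    users = [{
      name = "fury"
      user = {} # Authentication is provider-specific (e.g. aws eks get-token, gcloud, az aks get-credentials)
    }]
  })
}

output "kubeconfig" {
  value = local.kubeconfig
}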

Changelog

Version v1.0.0

Release date: 13th May 2020

First release with a shared interface across three different managed services: EKS, GKE and AKS.

Version v1.1.0

Release date: 8th July 2020

The input interface of the node_pools object was modified to add the taints attribute. Ready to use on all supported managed services.

Migration from v1.0.0

Add the taints attribute to each object in your node_pools variable:

node_pools = [
  {
    name : "node-pool-1"
    version : null # To use the cluster_version
    min_size : 1
    max_size : 1
    instance_type : "n1-standard-1"
    volume_size : 100
    labels : {
      "sighup.io/role" : "app"
      "sighup.io/fury-release" : "v1.3.0"
    }
    taints : [] # If you want to preserve v1.0.0 behavior
  },
  {
    name : "node-pool-2"
    version : null # Required by the node_pools interface; null uses the cluster_version
    min_size : 1   # Required by the node_pools interface (example value)
    max_size : 1   # Required by the node_pools interface (example value)
    instance_type : "n1-standard-2"
    volume_size : 50
    labels : {}
    # If you want to add a taint to a node_pool
    taints : [
      "sighup.io/role=app:NoSchedule"
    ]
  }
]

EKS Installer

Fury Kubernetes Installer - Managed Services - EKS - oss project.

GKE Installer

Fury Kubernetes Installer - Managed Services - GKE - oss project.

AKS Installer

Fury Kubernetes Installer - Managed Services - AKS - oss project.