Set Up the Kubernetes Fury Distribution Installation

This installation procedure is the one recommended by SIGHUP to deploy this distribution in an extensible and effective manner.

System requirements

Cluster requirements

Before installing the distribution, you need an existing Kubernetes cluster with the following configuration (a quick check is shown right after the list):

  • Kubernetes version 1.16, 1.17 or 1.18.
    • Managed Kubernetes services (AKS, GKE) are also supported.
  • A dedicated node-pool to deploy infrastructural components. At least three nodes, each with the following characteristics:
    • CPU: >= 8 cores
    • Memory: >= 16 GB RAM
    • Storage: >= 200 GB
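
A quick, optional way to verify these requirements is to print the cluster version and the capacity of each node with kubectl (the column names below are just for readability; available storage usually depends on the disks and storage classes attached to the nodes):

$ kubectl version --short
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory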

Configuring infrastructural nodes

In this example, the Kubernetes cluster has 9 worker nodes and 3 master nodes; we can pick three of the workers and mark them as infrastructural nodes.

$ kubectl get nodes
NAME        STATUS   ROLES     AGE   VERSION
worker-1    Ready    <none>    14s   v1.16.2
worker-2    Ready    <none>    14s   v1.16.2
worker-3    Ready    <none>    14s   v1.16.2
worker-4    Ready    <none>    14s   v1.16.2
worker-5    Ready    <none>    14s   v1.16.2
worker-6    Ready    <none>    14s   v1.16.2
worker-7    Ready    <none>    14s   v1.16.2
worker-8    Ready    <none>    14s   v1.16.2
worker-9    Ready    <none>    14s   v1.16.2
master-1    Ready    master    14s   v1.16.2
master-2    Ready    master    14s   v1.16.2
master-3    Ready    master    14s   v1.16.2

Ideally, you should select three nodes located in three different physical locations, following the same high-availability best practices used to deploy the Kubernetes master nodes.
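
If your cloud provider (or your node bootstrap process) sets the standard zone labels, you can quickly see where each node is placed. On Kubernetes 1.16 the label is usually failure-domain.beta.kubernetes.io/zone (topology.kubernetes.io/zone on newer versions):

$ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone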

worker-1, worker-2 and worker-3 are selected to be marked as infrastructural nodes. Marking them requires adding a specific label and a taint.

$ kubectl label node worker-1 node-role.kubernetes.io/infra= --overwrite
$ kubectl label node worker-2 node-role.kubernetes.io/infra= --overwrite
$ kubectl label node worker-3 node-role.kubernetes.io/infra= --overwrite
$ kubectl taint nodes -l node-role.kubernetes.io/infra= node-role.kubernetes.io/infra=:NoSchedule

Check the result:

$ kubectl get nodes
NAME        STATUS   ROLES     AGE   VERSION
worker-1    Ready    infra     43s   v1.16.2
worker-2    Ready    infra     43s   v1.16.2
worker-3    Ready    infra     43s   v1.16.2
worker-4    Ready    <none>    43s   v1.16.2
worker-5    Ready    <none>    43s   v1.16.2
worker-6    Ready    <none>    43s   v1.16.2
worker-7    Ready    <none>    43s   v1.16.2
worker-8    Ready    <none>    43s   v1.16.2
worker-9    Ready    <none>    43s   v1.16.2
master-1    Ready    master    43s   v1.16.2
master-2    Ready    master    43s   v1.16.2
master-3    Ready    master    43s   v1.16.2
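
Note that kubectl get nodes shows the new infra role label but not the taint. To double-check that the NoSchedule taint was applied as well, inspect one of the infrastructural nodes:

$ kubectl get nodes -l node-role.kubernetes.io/infra=
# Should list only worker-1, worker-2 and worker-3
$ kubectl describe node worker-1 | grep Taints
Taints:             node-role.kubernetes.io/infra:NoSchedule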

Operator requirements

To follow this installation guide you'll need the following tools (a quick availability check is shown after the list):

  • SIGHUP tooling: furyctl.
  • Common Kubernetes tooling such as kustomize and kubectl.
  • A Linux or macOS operating system with standard tools such as sed, awk, jq and watch.
  • A git repository to store the configuration. We will use the git.example.com/my-fury-cluster.git repository as an example.
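
Before starting, you can confirm that all of these tools are available in your PATH (the exact version output varies by tool and version):

$ command -v furyctl kustomize kubectl jq git
$ kubectl version --client --short
$ kustomize version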

Repository checks

Every command in this Installation section (including customizations) runs from the root of your git repository:

$ pwd
my-fury-cluster
$ git remote -v
origin git@git.example.com:my-fury-cluster.git (fetch)
origin git@git.example.com:my-fury-cluster.git (push)
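
It is also a good idea to start from a clean working tree, so that the changes produced while following this guide are easy to review and commit:

$ git status
# Expect: nothing to commit, working tree clean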

Hands-on

First, create the recommended project structure inside your local git repository:

$ mkdir -p manifests/distribution manifests/patches

Then, pull the distribution files and download every package into the vendor directory.

$ furyctl init --version v1.3.0
2020/03/27 13:26:57 downloading: http::https://github.com/sighupio/fury-distribution/releases/download/v1.3.0/Furyfile.yml -> Furyfile.yml
2020/03/27 13:26:58 downloading: http::https://github.com/sighupio/fury-distribution/releases/download/v1.3.0/kustomization.yaml -> kustomization.yaml
$ furyctl vendor -H
# Omitted output
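
Optionally, inspect the vendor directory to see which packages were downloaded; there should be one directory per downloaded module:

$ ls vendor/katalog
# Omitted output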

Once the vendor directory is downloaded, we can continue configuring the kustomize project.

Move the recommended kustomization.yaml file downloaded from the Fury distribution repository (by furyctl init) into the distribution directory created earlier with mkdir -p manifests/distribution, applying some modifications:

# Change vendor paths
$ sed 's@./vendor@../../vendor@g' kustomization.yaml  > manifests/distribution/kustomization.yaml
# Delete original kustomization file
$ rm kustomization.yaml
$ cd manifests/
# Create a kustomization.yaml file
$ kustomize create
# Add the distribution bases to this kustomize project
$ echo -e "\nbases:\n  - distribution" >> kustomization.yaml
$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - distribution

Check structure:

$ pwd
my-fury-cluster
$ tree
.
├── Furyfile.yml
├── manifests
│   ├── distribution
│   │   └── kustomization.yaml
│   ├── kustomization.yaml
│   └── patches
└── vendor
    └── katalog
# Omitted output

This repository structure makes it possible to extend and configure the distribution.

Assign infrastructure components to infrastructural nodes

Networking, monitoring, logging, ingress and disaster-recovery components should be deployed to infrastructural nodes. This way we can guarantee the correct operation of the cluster.

Run the following command to add the needed lines to your manifests/kustomization.yaml file:

cat <<EOT >> manifests/kustomization.yaml

patches:
  - patches/calico.yaml
  - patches/prometheus-operator.yaml
  - patches/prometheus-operated.yaml
  - patches/grafana.yaml
  - patches/kube-state-metrics.yaml
  - patches/elasticsearch.yaml
  - patches/cerebro.yaml
  - patches/curator.yaml
  - patches/kibana.yaml
  - patches/cert-manager-ca-injector.yaml
  - patches/cert-manager-controller.yaml
  - patches/cert-manager-webhook.yaml
  - patches/forecastle.yaml
  - patches/nginx.yaml
  - patches/velero.yaml
  - patches/minio.yaml
  - patches/minio-setup.yaml
EOT

Your manifests/kustomization.yaml file should look like:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - distribution

patches:
  - patches/calico.yaml
  - patches/prometheus-operator.yaml
  - patches/prometheus-operated.yaml
  - patches/grafana.yaml
  - patches/kube-state-metrics.yaml
  - patches/elasticsearch.yaml
  - patches/cerebro.yaml
  - patches/curator.yaml
  - patches/kibana.yaml
  - patches/cert-manager-ca-injector.yaml
  - patches/cert-manager-controller.yaml
  - patches/cert-manager-webhook.yaml
  - patches/forecastle.yaml
  - patches/nginx.yaml
  - patches/velero.yaml
  - patches/minio.yaml
  - patches/minio-setup.yaml

ATTENTION Don't forget to create the following patch files inside the manifests/patches directory (a quick count check is shown after the list):

  • manifests/patches/calico.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/prometheus-operator.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-operator
  namespace: monitoring
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/prometheus-operated.yaml:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
  tolerations:
  - key: node-role.kubernetes.io/infra
    operator: Exists
    effect: NoSchedule
  • manifests/patches/grafana.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/kube-state-metrics.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitoring
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/elasticsearch.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: logging
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/cerebro.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cerebro
  namespace: logging
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/curator.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: curator
  namespace: logging
spec:
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            operator: Exists
            effect: NoSchedule
  • manifests/patches/kibana.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/cert-manager-ca-injector.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager-cainjector
  namespace: cert-manager
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/cert-manager-controller.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/cert-manager-webhook.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager-webhook
  namespace: cert-manager
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/nginx.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/forecastle.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forecastle
  namespace: ingress-nginx
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/velero.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: velero
  namespace: kube-system
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/minio.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
  namespace: kube-system
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
  • manifests/patches/minio-setup.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: minio-setup
  namespace: kube-system
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
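
Once all the patch files are created, a quick way to make sure none is missing is to count them; with the list above there should be 17 files:

$ ls manifests/patches/*.yaml | wc -l
17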

Everything is ready to be deployed to the cluster. In case you need additional configuration, take a look at the customization section.

Otherwise, continue to the next section: applying the cluster configuration

Test

If you have followed every step, you can verify everything is working with the following command:

$ kustomize build manifests
# Omitted output

You should see a bunch of resource definitions printed to the console.
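
You can also verify that the node placement patches were applied by looking for the infra node selector and toleration key in the rendered output; the exact count depends on the distribution version, but it should be well above zero:

$ kustomize build manifests | grep -c "node-role.kubernetes.io/infra"
# Should print a non-zero count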

Commit the changes to the repository

As we have made some progress with the setup of the Kubernetes Fury Distribution, it's really important to track changes in the repository. So, push everything to the repository before continuing.

$ git add .
$ git commit -m "Add basic Kubernetes Fury Distribution project structure"
$ git push origin master