Kubernetes Fury Distribution

KFD deep dive

Kubernetes Fury Distribution was designed to be installed on top of any Upstream Kubernetes Cluster. Due to its decoupled design, you can install it in your own Kubernetes Cluster or use one of our existing installers to create one. The only software that needs to be installed on top of the cluster is the set of core modules.

Fury Installer

As described above, you can pick any of our supported Fury Kubernetes Installers to set up an Upstream Kubernetes Cluster, or use an existing Upstream Cluster.

New Fury Installers

We are redesigning the way we craft clusters, starting from the cloud installers released with Kubernetes Fury Distribution v1.2.0 up to the upcoming self-managed installers. If you want to know more about these new-generation installers, go to the installers page.

Fury Core Modules

Core modules are meant to work together so that all the bootstrapping required to run a distributed application is ready to go: logs are collected and stored in an EFK stack and metrics in Prometheus, so any workload can be observed anywhere; a dual ingress controller accounts for whatever network topology the business logic needs; and incremental backups let you make changes with confidence.

On top of a Kubernetes Cluster this distribution will install:

Networking

The networking core module provides a battle-tested Calico deployment as a CNI plugin.

You can find detailed information in its own documentation section.

Requirements

Network

Configuration                                       Host(s)   Connection type   Port/protocol
Calico networking (BGP)                             All       Bidirectional     TCP 179
Calico networking with IP-in-IP enabled (default)   All       Bidirectional     IP-in-IP (protocol number 4)

Host

Kernel dependencies:

  • nf_conntrack_netlink subsystem
  • ip_tables (for IPv4)
  • ip6_tables (for IPv6)
  • ip_set
  • xt_set
  • ipt_set
  • ipt_rpfilter
  • ipt_REJECT
  • ipip
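The kernel dependencies above can be checked against the modules a node reports as loaded. The sketch below is an illustration only, not part of the distribution; note that modules compiled directly into the kernel do not appear in /proc/modules, so a miss here is a hint to investigate, not proof the feature is absent.

```python
# Sketch: compare Calico's required kernel modules with the set of loaded
# modules (e.g. parsed from /proc/modules on a node).

REQUIRED_MODULES = [
    "nf_conntrack_netlink",
    "ip_tables",       # IPv4
    "ip6_tables",      # IPv6
    "ip_set",
    "xt_set",
    "ipt_set",
    "ipt_rpfilter",
    "ipt_REJECT",
    "ipip",
]


def loaded_from_proc(text):
    """Parse the first column (module name) of /proc/modules content."""
    return {line.split()[0] for line in text.splitlines() if line.strip()}


def missing_modules(loaded, required=REQUIRED_MODULES):
    """Return the required modules not present in the `loaded` set."""
    return [m for m in required if m not in loaded]
```

On a node you would feed it real data, e.g. `missing_modules(loaded_from_proc(open("/proc/modules").read()))`.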

Source: https://docs.projectcalico.org/

Logging

The logging core module provides a battle-tested EFK (Elasticsearch, Fluentd, and Kibana) stack along with some awesome utilities like Cerebro and Curator to provide a production-ready logging platform.

You can find detailed information in its own documentation section.

ElasticSearch HA

SIGHUP highly recommends deploying Elasticsearch in HA. The default installation deploys a single-node Elasticsearch server.

Take a look at the elasticsearch-triple feature inside the logging core module for more information.
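The arithmetic behind the three-node recommendation is the usual quorum rule: a cluster needs a majority of master-eligible nodes, floor(n/2) + 1, to elect a master and avoid split-brain. The snippet below is illustrative only:

```python
# Quorum math behind the elasticsearch-triple recommendation (illustrative).

def quorum(master_eligible):
    """Minimum nodes needed to form a majority: floor(n/2) + 1."""
    return master_eligible // 2 + 1


def tolerated_failures(master_eligible):
    """Nodes that can be lost while a quorum is still reachable."""
    return master_eligible - quorum(master_eligible)
```

A single-node deployment tolerates zero failures (`tolerated_failures(1) == 0`), while three nodes tolerate the loss of one (`tolerated_failures(3) == 1`), which is why the triple deployment is the recommended HA baseline.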

Monitoring

The monitoring core module provides a battle-tested monitoring stack composed of Prometheus (and its exporters) and Grafana (with dashboards), plus some nice monitoring helpers like goldpinger.

You can find detailed information in its own documentation section.
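Prometheus and its exporters communicate via the text exposition format served on /metrics endpoints. As a rough illustration of what Prometheus scrapes, here is a minimal parser sketch; it only handles simple `name{labels} value` lines and skips comments, and the metric names in the test data are hypothetical examples:

```python
# Minimal sketch of parsing the Prometheus text exposition format that
# exporters serve on /metrics. Handles only simple `name{labels} value`
# lines; comment lines (#) are skipped, and timestamps/histogram types
# are not handled.

def parse_metrics(text):
    """Map each metric line's name-and-labels string to its float value."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name_labels, _, value = line.rpartition(" ")
        metrics[name_labels] = float(value)
    return metrics
```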

Ingress

The NGINX ingress controller has been chosen to provide Kubernetes Fury Distribution's ingress capabilities. It has been tested against many clusters in multiple cloud environments and on premises.

You can find detailed information in its own documentation section.

Dual Ingress

An interesting feature of the ingress core module is that it provides an easy way to deploy two ingress controllers with two different ingress-class definitions, allowing ingresses to be exposed both to the internet and internally.

Requirements

Ports

Ingress Feature   Node Ports                        ingress-class
nginx             insecure:31080 and secure:31443   nginx
dual-nginx        insecure:32080 and secure:32443   internal
                  insecure:31080 and secure:31443   external
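The node-port layout described above can be captured as a small lookup. This is a sketch, not shipped tooling, and the internal/external port pairing is taken as listed here; verify it against your own deployment:

```python
# Node-port layout of the nginx and dual-nginx ingress features (sketch).
# The internal/external pairing reflects the table in this document and
# may differ in a given deployment.

NODE_PORTS = {
    "nginx":    {"insecure": 31080, "secure": 31443},  # single controller
    "internal": {"insecure": 32080, "secure": 32443},  # dual-nginx
    "external": {"insecure": 31080, "secure": 31443},  # dual-nginx
}


def node_port(ingress_class, scheme="secure"):
    """Return the node port for an ingress class and scheme."""
    return NODE_PORTS[ingress_class][scheme]
```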

DR

The disaster recovery core module (aka dr) was built on top of Velero (formerly Ark). It can be deployed on premises and on top of any cloud provider supported by this distribution.

There are two different backup types: Volumes and Manifests.

You can find detailed information in its own documentation section.

Volumes

Volume backups can be triggered if the underlying storage provider supports it. This core module provides support for AWS, GCP and Azure.

Manifests

Manifest backups can be achieved in any Kubernetes cluster. You can choose either an on-premises or a cloud object storage backup location.

Backup Location

On premises

A MinIO instance is deployed to support manifest backups on premises.

On cloud

GCS, AWS S3 or Azure Storage is supported as object storage to store manifest backups.

OPA

The opa core module provides a basic starting point for writing policies for your workloads. Every company has its own legal requirements, and this piece makes it straightforward to create checks that ensure workloads are aligned with them.

You can find detailed information in its own documentation section.
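OPA policies are normally written in Rego; purely to illustrate the kind of check such a policy expresses, here is a Python sketch that inspects a simplified Pod spec. The "no privileged containers" rule is a hypothetical example, not a policy shipped with the module:

```python
# Illustrative sketch of an admission-style policy check, written in
# Python rather than Rego. Rule (hypothetical example): no container in
# the Pod may run privileged.

def check_no_privileged(pod):
    """Return a list of violation messages for a simplified Pod dict."""
    violations = []
    for c in pod.get("spec", {}).get("containers", []):
        if c.get("securityContext", {}).get("privileged", False):
            violations.append(f"container {c.get('name', '?')} is privileged")
    return violations
```

An admission controller enforcing such a rule would reject the Pod whenever the returned list is non-empty.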


Kubernetes Fury Distribution is certified by the CNCF as a Kubernetes Distribution.


Last modified 15.05.2020: Update kfd 1.0 references (cec8321)