
KFDDistribution Provider

The KFDDistribution provider installs the KFD modules within an existing Kubernetes cluster. As such, it will not create or provision any infrastructure. This is ideal for installing KFD on clusters where you don't control any aspect of the infrastructure, such as clusters managed by most Public Cloud providers.

This document provides thorough guidance about every aspect of the configuration file. You can find the reference schema for this provider here.

Prerequisites​

The only requirements are a working Kubernetes cluster and a kubeconfig file with access to it.

Configuration file creation​

Create a configuration file with:

furyctl create config --kind KFDDistribution --version v1.31.0 --name <cluster-name>

This will create a new furyctl.yaml file with some default values. Read more to learn how to customize this file.

Note that the metadata.name parameter is the name of your cluster and it is used by furyctl to maintain the current status locally. It's not used by Kubernetes.
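
For reference, the top of the generated file will look roughly like the following sketch. The apiVersion and the spec.distributionVersion field shown here are indicative assumptions; check the generated furyctl.yaml and the reference schema for the authoritative values.

apiVersion: kfd.sigs.fury-distribution/v1alpha2   # indicative, may differ between furyctl releases
kind: KFDDistribution
metadata:
  name: my-cluster          # placeholder, used by furyctl to track the cluster state locally
spec:
  distributionVersion: v1.31.0
  distribution:
    # ... configured in the sections below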

spec.distribution section​

This section is dedicated to the configuration of the KFD core modules to be installed inside the cluster.

As usual, in the default configuration file you will find some comments to guide you in editing this section.

kubeconfig​

Path to a kubeconfig file with permissions to apply manifests inside the cluster. furyctl will use this file to connect to the cluster and apply the KFD manifests.

common.nodeSelector and common.tolerations​

These parameters will be applied to every module. If you have labeled and/or tainted some nodes specifically to host the KFD modules, you can set your values here to avoid scheduling KFD modules on all nodes.
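
As an illustration, a minimal spec.distribution snippet combining the kubeconfig and common fields could look like the following. The kubeconfig path, label key, and taint values are placeholders to adapt to your cluster.

spec:
  distribution:
    kubeconfig: ./kubeconfig              # path to the kubeconfig furyctl will use
    common:
      nodeSelector:
        node-role.kubernetes.io/infra: ""  # example label on the nodes reserved for KFD
      tolerations:
        - key: node-role.kubernetes.io/infra
          operator: Exists
          effect: NoSchedule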

spec.distribution.modules section​

Here you will configure the KFD core modules that will be installed in your cluster.

Networking module​

Kubernetes has adopted the Container Network Interface (CNI) specification for managing network resources on a cluster.

Kubernetes Fury Networking makes use of the CNCF-recommended Project Calico, an open-source networking and network security solution for containers, virtual machines, and bare-metal workloads, to bring networking features to the Kubernetes Fury Distribution.

The Calico deployment consists of a DaemonSet running on every node (including control-plane nodes) and a controller.

info

You can find more info about this module here.

You can customize the module using the spec.distribution.modules.networking object in the following way:

  • type parameter to choose between calico and cilium. If your cluster comes with a pre-installed CNI, you can also set the type parameter to none (default) to avoid installing this module.
  • cilium object: if you specified cilium in type, this object lets you customize some aspects of it.
    • maskSize: The mask size to use for the Pods network on each node.
    • podCidr: Allows specifying a CIDR for the Pods network different from .spec.kubernetes.podCidr. If not set, the default is to use .spec.kubernetes.podCidr.
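
For example, a cluster without a pre-installed CNI that should use Cilium could be configured with a snippet like the following. The CIDR and mask size are illustrative values; check the reference schema for the exact value formats.

spec:
  distribution:
    modules:
      networking:
        type: cilium
        cilium:
          maskSize: "24"                # example mask size for the Pods network on each node
          podCidr: "172.16.0.0/16"      # example CIDR for the Pods network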

Ingress module​

Kubernetes Fury Ingress uses CNCF-recommended, Cloud Native projects such as Ingress NGINX, an ingress controller that uses the well-known NGINX server as a URL path-based routing reverse proxy and load balancer, and cert-manager, which automates the issuing and renewal of TLS certificates from various issuing sources.

The module also includes additional tools like Forecastle, a web-based global directory of all the services offered by your cluster.

info

You can find more info about this module here.

You can customize the module using the spec.distribution.modules.ingress object in the following way:

  • baseDomain: the base domain used for all the KFD ingresses. If using the nginx dual configuration, it should be the same as the .spec.distribution.modules.ingress.dns.private.name zone.
  • nginx.type: defines whether nginx should be configured as single, dual (internal + external, default), or none. With none, no ingress controller will be deployed and no Ingress resources will be created.
    • dual: KFD will install two classes of the NGINX Ingress Controller, one intended for internal use (for example by internal OPS teams) and the other for external exposure (for example towards final users). All KFD Ingresses will use the internal class.
    • single: KFD will install a single NGINX Ingress Controller instance, with class nginx.
    • none: KFD will not install NGINX Ingress Controller. This also means that all web interfaces for other modules will not have an Ingress resource.
  • nginx.tls: the tls section defines how TLS for the ingresses should be managed. You can choose to provide the TLS certificate for the Ingresses either via certManager or with a Secret resource. If you set the provider to secret, you must also specify the secret object with the cert, key and ca contents.
  • certManager.clusterIssuer: this section lets you customize the CertManager instance that will be installed by KFD to provide the certificates used by the cluster. Note that CertManager will always be installed, regardless of your choice about the Ingress, because it is used for all KFD certificates, not just the Ingress ones.

The Forecastle instance will contain all Ingress resources created by KFD.
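
For example, a dual NGINX setup with TLS handled by cert-manager could be sketched like this. The domain is a placeholder, and the certManager.clusterIssuer options are omitted here; refer to the reference schema for its full set of fields.

spec:
  distribution:
    modules:
      ingress:
        baseDomain: internal.example.dev
        nginx:
          type: dual
          tls:
            provider: certManager   # or "secret", providing cert, key and ca under tls.secret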

Logging module​

Kubernetes Fury Logging uses a collection of open source tools to provide the most resilient and robust logging stack for the cluster.

The central piece of the stack is the open source search engine opensearch, combined with its analytics and visualization platform opensearch-dashboards. Logs are collected by fluentbit, a node-level data collection and enrichment agent, and pushed to OpenSearch via fluentd. The fluentbit and fluentd stack is managed by the Banzai Logging Operator. An alternative to OpenSearch is also provided: loki.

All the components are deployed in the logging namespace in the cluster.

info

You can find more info about this module here.

You can customize the module using the spec.distribution.modules.logging object in the following way:

  • type: selects the logging stack. Choosing none will disable centralized logging. Choosing opensearch will deploy and configure the Logging Operator and an OpenSearch cluster (single or triple for HA) where the logs will be stored. Choosing loki will use a distributed Grafana Loki instead of OpenSearch for storage. Choosing customOutputs will deploy and install the Logging Operator with no local storage; you will have to create the needed Outputs and ClusterOutputs to ship the logs to your desired storage.

  • minio.storageSize: if the backend type for the module is minio, KFD will install a minio instance inside the cluster. Here you specify the size of the bucket dedicated to the logging storage.

    info

    The minio instance will have 3 replicas with 2 PVCs each, so the storage size you specify will be multiplied by a factor of 6.

  • minio.rootUser: here you can provide custom credentials for the Minio root user.

  • loki object: if you specified loki in type, this object lets you customize some aspects of it.

    • backend: the storage backend type for Loki. minio (default) will use an in-cluster MinIO deployment for object storage, externalEndpoint can be used to point to an external S3-compatible object storage instead of deploying an in-cluster MinIO.
    • externalEndpoint: if you specified externalEndpoint as backend, you need to specify this object to provide the needed credentials to your S3-compatible bucket.
    • resources: with this object, you can customize Loki's Pods resources.
  • opensearch object: if you specified opensearch in type, this object lets you customize some aspects of it.

    • type: you can choose between single and triple, which will install a single-instance or an HA triple-instance Opensearch respectively.
    • storageSize: here you can specify the size of the PVCs used by Opensearch's Pods.
    • resources: with this object, you can customize Opensearch's Pods resources.
  • customOutputs: when using the customOutputs logging type, you need to manually specify the spec of the several Outputs and ClusterOutputs that the Logging Operator expects in order to forward the logs collected by the pre-defined Flows. Here you should follow the Output's spec object schema. You can find more info about the Flows provided by KFD here.
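
As an example, a Loki-based logging stack backed by the in-cluster MinIO could be sketched as follows. The storage size is an illustrative value.

spec:
  distribution:
    modules:
      logging:
        type: loki
        loki:
          backend: minio          # default: use the in-cluster MinIO for object storage
        minio:
          storageSize: "20Gi"     # per-PVC size; total usage is 6x this value (3 replicas, 2 PVCs each)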

Monitoring module​

Kubernetes Fury Monitoring provides a fully-fledged monitoring stack for the Kubernetes Fury Distribution (KFD). This module extends and improves upon the Kube-Prometheus project.

This module is designed to give you full control and visibility over your cluster operations. Metrics from the cluster and the applications are collected and clean analytics are offered via a visualization platform, Grafana.

info

You can find more info about this module here.

You can customize the module using the spec.distribution.modules.monitoring object in the following way:

  • type: you can select between prometheus (default), prometheusAgent and mimir, or none to avoid installing the module.

    • prometheus will install Prometheus Operator and a preconfigured Prometheus instance, Alertmanager, a set of alert rules, exporters needed to monitor all the components of the cluster, Grafana and a series of dashboards to view the collected metrics, and more.
    • prometheusAgent will install Prometheus Operator, an instance of Prometheus in Agent mode (no alerting, no queries, no storage), and all the exporters needed to get metrics for the status of the cluster and the workloads. Useful when you have a centralized (remote) Prometheus to ship the metrics to, instead of storing them locally in the cluster.
    • mimir will install the same as the prometheus option, plus Grafana Mimir, which allows for longer retention of metrics and the usage of Object Storage.
  • prometheus object: if you specified prometheus or mimir in type, this object lets you customize some aspects of the Prometheus instance, which is governed by Prometheus Operator.

    • remoteWrite: set this option to ship the collected metrics to a remote Prometheus receiver. remoteWrite is an array of objects that allows configuring the remoteWrite options for Prometheus. The objects in the array follow the same schema as in Prometheus Operator.
    • resources: with this object, you can customize Prometheus' Pods resources.
    • retentionSize: the retention size for the k8s Prometheus instance.
    • retentionTime: the retention time for the K8s Prometheus instance.
    • storageSize: the storage size for the k8s Prometheus instance.
  • alertmanager: if you specified prometheus or mimir in type, KFD will install Alertmanager inside the cluster. With this object, you can customize some aspects of it:

    • deadManSwitchWebhookUrl: the webhook URL to send dead man's switch monitoring notifications to, for example to use with healthchecks.io.
    • installDefaultRules: KFD comes with predefined Prometheus rules to generate alerts. If true, the default rules will be installed.
    • slackWebhookUrl: the Slack webhook URL to send alerts to.
  • grafana: if you specified prometheus or mimir in type, KFD will install Grafana inside the cluster. With this object, you can customize some aspects of it:

    • basicAuthIngress: setting this to true will deploy an additional grafana-basic-auth ingress protected with Grafana's basic auth instead of SSO. Its intended use is as a temporary ingress for when there are problems with the SSO login flow.

    • usersRoleAttributePath: JMESPath expression to retrieve the user's role. Example:

      usersRoleAttributePath: "contains(groups[*], 'beta') && 'Admin' || contains(groups[*], 'gamma') && 'Editor' || contains(groups[*], 'delta') && 'Viewer'"

      More details in Grafana's documentation.

  • prometheusAgent object: if you specified prometheusAgent in type, this object lets you specify:

    • remoteWrite: you can specify a list of remote Prometheus receivers to ship metrics to.
    • resources: with this object, you can customize Prometheus' Pods resources.
  • mimir object: if you specified mimir in type, this object lets you customize some aspects of the installation:

    • backend: the storage backend type for Mimir. minio (default) will use an in-cluster MinIO deployment for object storage, externalEndpoint can be used to point to an external S3-compatible object storage instead of deploying an in-cluster MinIO.
    • externalEndpoint: if you specified externalEndpoint as backend, you need to specify this object to provide the needed credentials to your S3-compatible bucket.
    • retentionTime: The retention time for the mimir pods.
  • minio.storageSize: if you selected minio as the storage backend for the module, KFD will install a minio instance inside the cluster. Here you specify the size of the bucket dedicated to the monitoring storage.

    info

    The minio instance will have 3 replicas with 2 PVCs each, so the storage size you specify will be multiplied by a factor of 6.

  • minio.rootUser: here you can provide custom credentials for the Minio root user.
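
For instance, a standard Prometheus-based setup with Alertmanager notifications to Slack could be sketched like this. The retention and storage values and the webhook URL are placeholders.

spec:
  distribution:
    modules:
      monitoring:
        type: prometheus
        prometheus:
          retentionTime: 30d        # illustrative retention time
          retentionSize: 120GB      # illustrative retention size
          storageSize: 150Gi        # illustrative PVC size
        alertmanager:
          installDefaultRules: true
          slackWebhookUrl: https://hooks.slack.com/services/REPLACE_ME   # placeholder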

Tracing module​

Kubernetes Fury Tracing uses a collection of open source tools to provide the most resilient and robust tracing stack for the cluster.

The module contains the tempo tool from grafana.

info

You can find more info about this module here.

You can customize the module using the spec.distribution.modules.tracing object in the following way:

  • type: The type of tracing to use, either none or tempo.

  • tempo object: if you specified tempo in type, this object lets you customize some aspects of it.

    • backend: The storage backend type for Tempo. minio (default) will use an in-cluster MinIO deployment for object storage, externalEndpoint can be used to point to an external S3-compatible object storage instead of deploying an in-cluster MinIO.
    • retentionTime: the retention time for the tempo pods.
    • externalEndpoint: if you specified externalEndpoint as backend, you need to specify this object to provide the needed credentials to your S3-compatible bucket.
  • minio.storageSize: if you selected minio as the storage backend for the module, KFD will install a minio instance inside the cluster. Here you specify the size of the bucket dedicated to the tracing storage.

    info

    The minio instance will have 3 replicas with 2 PVCs each, so the storage size you specify will be multiplied by a factor of 6.
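
A minimal Tempo configuration backed by the in-cluster MinIO could look like the following. The retention and storage values are illustrative.

spec:
  distribution:
    modules:
      tracing:
        type: tempo
        tempo:
          backend: minio          # default: use the in-cluster MinIO for object storage
          retentionTime: 336h     # illustrative retention time
        minio:
          storageSize: "10Gi"     # per-PVC size; total usage is 6x this value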

OPA module​

The Kubernetes API server provides a mechanism to review every request that is made (object creation, modification, or deletion). To use this mechanism, the API server allows us to create a Validating Admission Webhook that, as the name says, will validate every request and let the API server know if the request is allowed or not based on some logic (policy).

The Kubernetes Fury OPA module is based on OPA Gatekeeper and Kyverno, two popular open-source Kubernetes-native policy engines that run as Validating Admission Webhooks. They allow writing custom constraints (policies) and enforcing them at runtime.

info

You can find more info about this module here.

You can customize the module using the spec.distribution.modules.policy object in the following way:

  • type: the type of policy engine to use, either none, gatekeeper (default) or kyverno.
  • gatekeeper object: if you specified gatekeeper in type, this object lets you customize some aspects of it.
    • additionalExcludedNamespaces: this parameter adds namespaces to Gatekeeper's exemption list, so it will not enforce the constraints on them.
    • installDefaultPolicies: if true, the default policies will be installed. You can find more info about the default policies here
    • enforcementAction: The enforcement action to use for the gatekeeper module. Allowed values: deny, dryrun, warn.
  • kyverno object: if you specified kyverno in type, this object lets you customize some aspects of it.
    • additionalExcludedNamespaces: this parameter adds namespaces to Kyverno's exemption list, so it will not enforce the constraints on them.
    • installDefaultPolicies: if true, the default policies will be installed. You can find more info about the default policies here
    • validationFailureAction: The validation failure action to use for the kyverno module. Allowed values: Audit, Enforce.
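
For example, enabling Gatekeeper with the default policies in warn mode, while exempting a custom namespace, could be sketched as follows. The namespace name is a placeholder.

spec:
  distribution:
    modules:
      policy:
        type: gatekeeper
        gatekeeper:
          installDefaultPolicies: true
          enforcementAction: warn          # allowed values: deny, dryrun, warn
          additionalExcludedNamespaces:
            - my-legacy-apps               # example namespace exempted from the constraints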

DR module​

Kubernetes Fury DR module is based on Velero and Velero Node Agent.

Velero allows you to:

  • backup your cluster
  • restore your cluster in case of problems
  • migrate cluster resources to other clusters
  • replicate your production environment to development and testing environments.

Together with Velero, Velero Node Agent allows you to:

  • backup Kubernetes volumes
  • restore Kubernetes volumes

The module also contains Velero plugins to natively integrate Velero with different cloud providers and use the cloud provider's volumes as the storage backend.

info

You can find more info about this module here.

You can customize the module using the spec.distribution.modules.dr object in the following way:

  • type: the type of DR to use, either none or on-premises.
  • velero object: if you specified on-premises in type, this object lets you customize some aspects of it.
    • backend: The storage backend type for Velero. minio (default) will use an in-cluster MinIO deployment for object storage, externalEndpoint can be used to point to an external S3-compatible object storage instead of deploying an in-cluster MinIO.
    • externalEndpoint: if you specified externalEndpoint as backend, you need to specify this object to provide the needed credentials to your S3-compatible bucket.
    • schedules object: Configuration for Velero's backup schedules.
      • ttl: the Time To Live (TTL) of the backups created by the backup schedules (default 720h0m0s, 30 days). Notice that changing this value will only affect newly created backups; prior backups will keep the old TTL.
      • install: whether or not to install the default manifests and full backup schedules. Default is true.
      • cron.full and cron.manifests: The cron expression for the full and manifests backup schedules (default 0 1 * * * for full, */15 * * * * for manifests).
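
For example, enabling Velero with the in-cluster MinIO backend and the default schedules, but with a shorter backup TTL, could look like this sketch. The TTL value is illustrative.

spec:
  distribution:
    modules:
      dr:
        type: on-premises
        velero:
          backend: minio              # default: use the in-cluster MinIO for object storage
          schedules:
            install: true
            ttl: 168h0m0s             # illustrative: keep backups for 7 days instead of 30
            cron:
              full: "0 1 * * *"
              manifests: "*/15 * * * *"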

Auth module​

Kubernetes Fury Auth uses CNCF-recommended, Cloud Native projects such as the Dex identity provider and Pomerium, an identity-aware proxy, to enable secure access to internal applications.

info

You can find more info about this module here.

You can customize the module using the spec.distribution.modules.auth object in the following way:

  • baseDomain: the base domain used for all the auth ingresses. If using the nginx dual configuration, it should be the same as the .spec.distribution.modules.ingress.dns.public.name zone.
  • provider.type: the authentication type used for the infrastructure ingresses (all the ingresses for the distribution). It can be none (default), basicAuth, or sso.
  • provider.basicAuth.username and provider.basicAuth.password: if you specified basicAuth in provider.type, these two parameters let you customize the credentials to be used.
  • dex object: if you specified sso in provider.type, this object lets you customize some aspects of the Dex installation.
    • expiry.idTokens and expiry.signingKeys: Dex ID tokens and signing key expiration time duration (default 24h and 6h respectively).
    • additionalStaticClients: the additional static clients for Dex. Here you can find some info about the available options.
    • connectors: the connectors for Dex. See here for more info about the available options.
  • pomerium object: if you specified sso in provider.type, this object lets you customize some aspects of the Pomerium installation.
    • secrets: this object contains four parameters that need to be filled when using Pomerium:

      • COOKIE_SECRET: Cookie Secret is the secret used to encrypt and sign session cookies. To generate a random key, run the following command: head -c32 /dev/urandom | base64

      • IDP_CLIENT_SECRET: Identity Provider Client Secret is the OAuth 2.0 Secret Identifier. When auth type is SSO, this value will be the secret used to authenticate Pomerium with Dex, use a strong random value.

      • SHARED_SECRET: Shared Secret is the base64-encoded, 256-bit key used to mutually authenticate requests between Pomerium services. It's critical that secret keys are random, and stored safely. To generate a key, run the following command: head -c32 /dev/urandom | base64

      • SIGNING_KEY: Signing Key is the base64 representation of one or more PEM-encoded private keys used to sign a user's attestation JWT, which can be consumed by upstream applications to pass along identifying user information like username, id, and groups.

        To generate a P-256 (ES256) signing key:

        openssl ecparam -genkey -name prime256v1 -noout -out ec_private.pem      
        # careful! this will output your private key in terminal
        cat ec_private.pem | base64
    • routes: additional routes configuration for Pomerium. Follows Pomerium's route format.
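
As a simple illustration, protecting the infrastructure ingresses with basic authentication could be configured like this. The credentials are placeholders; for SSO you would set provider.type to sso and fill in the dex and pomerium objects described above.

spec:
  distribution:
    modules:
      auth:
        baseDomain: example.dev
        provider:
          type: basicAuth
          basicAuth:
            username: admin
            password: changeme   # placeholder, use a strong secret value in real configurations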

spec.distribution.customPatches section​

This section lets you customize the manifests deployed by KFD. This is intended to apply advanced customizations where the provided configuration fields do not suffice.
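
As a purely illustrative sketch, assuming the Kustomize-style patchesStrategicMerge field exposed by the reference schema (check the schema for the exact fields available), a patch that scales a hypothetical Deployment could look like this. The target name and namespace are hypothetical and must be adapted to the manifest you want to patch.

spec:
  distribution:
    customPatches:
      patchesStrategicMerge:
        - |
          # hypothetical target: adjust name/namespace to the KFD manifest you want to patch
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: forecastle
            namespace: ingress-nginx
          spec:
            replicas: 2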