
Kubernetes Fury Networking

Overview

Kubernetes has adopted the Container Network Interface (CNI) specification for managing network resources on a cluster.

Kubernetes Fury Networking makes use of the CNCF-recommended Project Calico, an open-source networking and network security solution for containers, virtual machines, and bare-metal workloads, to bring networking features to the Kubernetes Fury Distribution.

The Calico deployment consists of a DaemonSet running on every node (including control-plane nodes) and a controller.

You can also choose to use Cilium instead, to leverage dynamic eBPF rules.

Module's repository: https://github.com/sighupio/fury-kubernetes-networking

Packages

Kubernetes Fury Networking provides the following packages:

Package | Description
cilium | Cilium CNI plugin. For clusters with < 200 nodes.
calico | Tigera Operator, a Kubernetes operator for Calico; provides pre-configured installations for on-premises clusters and for EKS in policy-only mode.
Info: all the components are deployed in the kube-system namespace of the cluster, except for the Tigera Operator, which is deployed in the tigera-operator namespace.

Compatibility

Kubernetes Version | Compatibility | Notes
1.28.x | Compatible | No known issues
1.29.x | Compatible | No known issues
1.30.x | Compatible | No known issues
1.31.x | Compatible | No known issues

Check the compatibility matrix for additional information on previous releases of the module.

You can find more info on the GitHub project's README.

Introduction: Networking in Kubernetes

In a Kubernetes cluster there are at least two kinds of networking:

  1. The node network, which enables clustering (at the operating-system level);
  2. The container network, which enables applications to communicate with each other (an overlay network, provided by a CNI plugin). The KFD Networking module manages this layer of networking.

Kubernetes' networking model specifies that:

  • All Pods must have a dedicated IP address that is unique across the entire cluster.
  • Containers in a single Pod must share the IP address of the Pod and can communicate with each other using localhost (loopback); see the example after this list.
  • Pods can communicate with other Pods inside the same cluster using their respective IP addresses, without NAT.
  • Agents on a node (for example system daemons, the kubelet, etc.) can communicate with all Pods on that node.
  • Network isolation must be defined using network policies. CNIs may choose not to implement this functionality.
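To illustrate the shared Pod network namespace mentioned above, here is a minimal sketch of a Pod (names and images are illustrative, not part of KFD) where a sidecar container reaches the main container over localhost:

```yaml
# Illustrative example: both containers share the Pod's network namespace,
# so the sidecar can reach nginx on localhost:80 without any Service.
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo   # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.27
      ports:
        - containerPort: 80
    - name: sidecar
      image: curlimages/curl:8.8.0
      # Polls the web container over the loopback interface.
      command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 10; done"]
```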

The actual implementation of that model inside Kubernetes clusters makes sure that:

  1. Containers inside a Pod can use the loopback device to communicate with each other.
  2. Inter-Pod connectivity is provided by the cluster's CNI plugin.
  3. Kubernetes' Service resources expose applications (Pods) both inside the cluster (type: ClusterIP) and outside the cluster (types: NodePort and LoadBalancer); see the example after this list.
  4. Kubernetes' Ingress resources enable additional, HTTP-related functionality to expose websites and APIs.
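As an example of point 3, a minimal sketch of a ClusterIP Service (name, labels and ports are illustrative assumptions) that exposes the Pods of a hypothetical application inside the cluster could look like this:

```yaml
# Illustrative example: exposes Pods labelled app=web inside the cluster only.
# Changing type to NodePort or LoadBalancer would expose it outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: web             # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: web            # matches the Pods to load-balance across
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # port the application listens on
```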

Moreover, some other key characteristics of Kubernetes networking are:

  • Service Discovery: Kubernetes objects can automatically discover available services inside the cluster.
  • Load Balancing: the network infrastructure can distribute traffic in a balanced way between multiple Pods of the same application.
  • Network Policies: rules that can be used to protect Pods and control network traffic (see the example after this list).
  • DNS Resolution: Kubernetes objects can automatically resolve Services using DNS names.
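As a concrete illustration of a network policy, here is a hedged sketch of a NetworkPolicy (namespace, labels and port are assumptions made for the example) that only allows a frontend to reach a backend on port 8080, denying all other ingress to the backend Pods:

```yaml
# Illustrative example: restricts ingress to backend Pods so that only
# Pods labelled app=frontend in the same namespace can reach them on TCP/8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
  namespace: demo                   # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Both Calico and Cilium enforce this standard Kubernetes API, so the same manifest works regardless of which option the Networking module deploys.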

KFD: Networking module

The Networking module provided by KFD comes with two options for clusters managed with furyctl (see the configuration sketch after this list):

  1. Calico with Tigera Operator (default)
  2. Cilium
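The choice is made in the furyctl configuration file. As a rough sketch (the exact schema may differ between furyctl versions, so treat the field names below as assumptions and check the furyctl reference), the networking module type is selected along these lines:

```yaml
# Hypothetical excerpt of a furyctl.yaml: selects the CNI for the cluster.
# Field names are assumptions for illustration; check the furyctl schema reference.
spec:
  distribution:
    modules:
      networking:
        type: cilium   # "calico" (default) or "cilium"
```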

Calico

Calico is a networking and security solution for containers, virtual machines, and physical hosts. It supports a wide variety of platforms, including Kubernetes, OpenShift, Docker EE, OpenStack, and bare metal. Calico also supports Windows.

On Linux, Calico can use either iptables or eBPF as its dataplane; on Windows, it uses the standard Windows networking stack.

Calico is one of the most widely adopted networking solutions for Kubernetes and it's the default option in KFD, which installs the open-source variant of Calico with iptables + kube-proxy + VXLAN (Calico's default stack) using the Tigera Operator. Calico can also enforce Network Policies.
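For reference, the Tigera Operator drives the installation from an Installation custom resource. A minimal sketch of such a resource with VXLAN encapsulation could look like the following (the Pod CIDR and other values are illustrative assumptions, not the exact manifest shipped by KFD):

```yaml
# Illustrative Installation resource for the Tigera Operator.
# The Pod CIDR and other values are assumptions, not KFD defaults.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 172.16.0.0/16    # hypothetical Pod network CIDR
        encapsulation: VXLAN   # the default stack described above
        natOutgoing: Enabled
```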

The open-source version of Calico does not provide a UI to visualize network-related information, but KFD integrates this module with the Monitoring one, providing Grafana dashboards to visualize the collected network metrics and Prometheus Rules to send alerts when some values go out of their normal range.

Cilium

Cilium is open-source software that protects network connectivity between distributed application workloads using Linux container platforms such as Docker and Kubernetes.

Cilium's foundation is a new technology named eBPF, which makes it possible to plug custom, dynamic logic for control and visibility over security into the Linux kernel. Because it operates at the kernel level, eBPF enables Cilium to apply and update network security policies without modifying applications' source code or container configurations.

Cilium can be extended with Hubble, a distributed observability platform focused on network and security. Built on top of Cilium and eBPF, it enables full visibility into the communications and behaviour of services and network infrastructure in a completely transparent fashion. It's possible to interact with Hubble's data using both a CLI and a web UI. The OSS version does not provide long-term history for traffic data.

Hubble UI is a web interface that graphically shows service dependencies at layers 3/4/7 of the OSI model, enabling filtering and visualization of data traffic.

Cilium also provides a service mesh for distributed systems and microservices.

KFD lets you choose Cilium as the networking solution to be installed inside the cluster instead of the default Calico. If you do, the Networking module also includes Hubble and its web UI. As with the Calico option, KFD integrates this module with the Monitoring one to provide Grafana dashboards to visualize network metrics and Prometheus Rules to enable alerting.

Monitoring

The Networking module includes out-of-the-box metrics monitoring and alerting features for its components.

You can monitor the status of the networking stack from the provided Grafana dashboards:

[Screenshots: Calico Felix dashboard and Calico Typha dashboard]

The following set of alerts is included with the networking module:

Alert Name | Summary | Description
CalicoDataplaneFailuresHigh | A high number of dataplane failures within Felix are happening | Calico node pod {{ $labels.pod }} ({{ $labels.instance }}) has seen {{ $value }} dataplane failures within the last hour
CalicoIpsetErrorsHigh | A high number of ipset errors within Felix are happening | Calico node pod {{ $labels.pod }} ({{ $labels.instance }}) has seen {{ $value }} ipset errors within the last hour
CalicoIptableSaveErrorsHigh | A high number of iptable save errors within Felix are happening | Calico node pod {{ $labels.pod }} ({{ $labels.instance }}) has seen {{ $value }} iptable save errors within the last hour
CalicoIptableRestoreErrorsHigh | A high number of iptable restore errors within Felix are happening | Calico node pod {{ $labels.pod }} ({{ $labels.instance }}) has seen {{ $value }} iptable restore errors within the last hour
CalicoErrorsWhileLoggingHigh | A high number of errors within Felix while logging are happening | Calico node pod {{ $labels.pod }} ({{ $labels.instance }}) has seen {{ $value }} errors while logging within the last ten minutes
TyphaPingLatency | Typha round-trip ping latency to client (cluster {{ $labels.cluster }}) | Typha latency is growing (ping operations > 100ms). VALUE = {{ $value }}. LABELS = {{ $labels }}
TyphaClientWriteLatency | Typha unusual write latency (instance {{ $labels.cluster }}) | Typha client latency is growing (write operations > 100ms). VALUE = {{ $value }}. LABELS = {{ $labels }}
TyphaErrorsWhileLoggingHigh | A high number of errors within Typha while logging are happening | Typha pod {{ $labels.pod }} ({{ $labels.instance }}) has seen {{ $value }} errors while logging within the last ten minutes
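These alerts are shipped as standard Prometheus alerting rules. As an illustration of the shape of such a rule (the expression, threshold, metric name and metadata below are assumptions, not the exact rule shipped by the module), an alert like CalicoDataplaneFailuresHigh could be defined along these lines:

```yaml
# Hypothetical sketch of a PrometheusRule for a Felix dataplane-failures alert.
# Metric name, threshold, duration and metadata are illustrative assumptions.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: calico-felix-rules   # hypothetical name
  namespace: monitoring      # hypothetical namespace
spec:
  groups:
    - name: calico-felix
      rules:
        - alert: CalicoDataplaneFailuresHigh
          expr: increase(felix_int_dataplane_failures[1h]) > 5
          labels:
            severity: warning
          annotations:
            summary: A high number of dataplane failures within Felix are happening
            description: >-
              Calico node pod {{ $labels.pod }} ({{ $labels.instance }}) has seen
              {{ $value }} dataplane failures within the last hour
```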

Read more

You can find more info on networking in Kubernetes at the following links: