Kubernetes Fury OPA
Overview
The Kubernetes API server provides a mechanism to review every request that is made (object creation, modification, or deletion). To use this mechanism, the API server lets us register a Validating Admission Webhook that, as the name says, validates every request and tells the API server whether the request is allowed or not based on some logic (policy).
Kubernetes Fury OPA module is based on OPA Gatekeeper and Kyverno, two popular open-source Kubernetes-native policy engines that run as Validating Admission Webhooks. It allows writing custom constraints (policies) and enforcing them at runtime.
SIGHUP provides a set of base constraints that can be used both as a starting point to apply constraints to your current workloads and to give you an idea of how to implement new rules matching your requirements.
Module's repository: https://github.com/sighupio/fury-kubernetes-opa
Packages
Fury Kubernetes OPA provides the following packages:
Package | Description |
---|---|
Gatekeeper | Gatekeeper deployment, together with a set of custom rules to get started with policy enforcement. |
Kyverno | Kyverno is a policy engine designed for Kubernetes. It can validate, mutate, and generate configurations using admission controls and background scans. |
Gatekeeper components will be deployed in the `gatekeeper-system` namespace.
Kyverno components will be deployed in the `kyverno` namespace.
Introduction: Validation Webhooks in Kubernetes
Validating Admission Webhooks are an advanced Kubernetes feature that allows for custom validation of API requests before they are processed by the Kubernetes API server. When a request to create, update, or delete a resource is made, Kubernetes can invoke a webhook to determine whether the request meets predefined policies. If the webhook denies the request, the operation is rejected.
This feature is highly flexible and enables organizations to enforce specific rules and constraints, ensuring that Kubernetes resources comply with organizational policies or security standards. Examples include enforcing naming conventions, validating resource quotas, or ensuring required labels are present on resources.
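To make the mechanism concrete, the sketch below shows roughly what a ValidatingWebhookConfiguration looks like. It is a generic, hypothetical example (the names, namespace, and rules are placeholders, not the configuration installed by this module): the API server sends matching requests to the referenced service and allows or denies them based on the response, with `failurePolicy` deciding what happens when the webhook itself is unreachable.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy-webhook          # hypothetical name
webhooks:
  - name: validate.example.com          # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    # Send every Pod CREATE/UPDATE request to the webhook for validation.
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: example-policy-service    # hypothetical Service backing the webhook
        namespace: example-system
        path: /validate
    # Ignore = fail open (requests are allowed if the webhook is unreachable),
    # Fail   = fail closed (requests are rejected if the webhook is unreachable).
    failurePolicy: Ignore
```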
Why Use Validating Admission Webhooks?
In Kubernetes environments, the flexibility to deploy diverse workloads can lead to inconsistencies, misconfigurations, and security vulnerabilities. Validating Admission Webhooks address these challenges by enabling:
- Policy Enforcement: Automatically enforce policies, such as restricting specific container images or ensuring resource limits are set.
- Security and Compliance: Prevent insecure configurations, like running containers as root, ensuring workloads meet compliance standards.
- Automation and Consistency: Eliminate the need for manual validation by automating checks during the resource creation process.
Using webhooks provides a dynamic and extensible way to validate requests compared to static configurations or post-deployment audits.
KFD: OPA module
With KFD, you can choose to implement policies for your workloads using OPA Gatekeeper or Kyverno. You can specify which one to include in your cluster by setting the `.spec.distribution.modules.policy.type` parameter to one of the following values (a configuration sketch follows the list):

- `none`: the Policy module will not be included in your cluster.
- `gatekeeper`: KFD will install OPA Gatekeeper. You can customize it using the `.spec.distribution.modules.policy.gatekeeper` object.
- `kyverno`: KFD will install Kyverno. You can customize it using the `.spec.distribution.modules.policy.kyverno` object.
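As an illustration, a furyctl configuration file could select the policy engine like in the following minimal sketch. It assumes the field paths referenced above and omits everything unrelated to this module; check the reference schemas mentioned below for the exact structure.

```yaml
# Minimal, illustrative excerpt of a furyctl configuration file.
spec:
  distribution:
    modules:
      policy:
        # One of: none, gatekeeper, kyverno
        type: gatekeeper
        # Engine-specific customizations go in the matching object,
        # e.g. .spec.distribution.modules.policy.gatekeeper
        gatekeeper: {}
```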
You can find all the available parameters to configure this module in the provider's reference schemas.
This module also integrates seamlessly with the Monitoring module. See more details in the Monitoring section of this page.
OPA Gatekeeper
OPA Gatekeeper is a popular open-source tool that leverages Validating Admission Webhooks to enforce policies in Kubernetes. Built on the Open Policy Agent (OPA), Gatekeeper allows administrators to define fine-grained policies using Rego, OPA's declarative policy language. Key features of OPA Gatekeeper include:
- Policy as Code: Define policies as Kubernetes custom resources, making them versionable and auditable.
- Dynamic Validation: Evaluate incoming requests against policies in real-time before they are applied to the cluster.
- Template-Based Policies: Create reusable policy templates that simplify enforcing common constraints across clusters.
- Audit Mode: Evaluate how existing resources measure up against defined policies without actively blocking changes.
Use cases for OPA Gatekeeper include:
- Security: Prevent the use of privileged containers or untrusted images.
- Resource Management: Enforce namespace-specific resource quotas or ensure limits/requests are set.
- Labeling Standards: Require specific labels for observability or governance purposes.
OPA Gatekeeper integrates seamlessly with Kubernetes, enabling teams to enforce policies consistently across clusters. Combined with its audit capabilities, Gatekeeper helps keep a Kubernetes environment secure, compliant, and well governed. Because OPA itself is not tied to Kubernetes, the same policy language and much of its ecosystem can also be reused for policy enforcement on other platforms.
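As an example of the template-based approach, the sketch below (adapted from the well-known "required labels" example in the Gatekeeper documentation) defines a reusable ConstraintTemplate written in Rego and a Constraint that applies it to Namespaces. The names and parameters are illustrative and are not part of the constraints shipped with this module.

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
# A Constraint instantiates the template with concrete parameters and match rules.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-team          # hypothetical constraint
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]
```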
Kyverno
Kyverno is a Kubernetes-native policy engine that leverages Validating Admission Webhooks to enforce, validate, and manage resource configurations. Unlike OPA Gatekeeper, Kyverno adopts a user-friendly approach, allowing policies to be defined as standard Kubernetes YAML manifests. This makes it more accessible to Kubernetes users who are already familiar with native resource definitions.
Key features of Kyverno include:
- Policy as Code: Policies in Kyverno are written in plain YAML, making it easy to define, maintain, and version them alongside other Kubernetes manifests.
- Validation: Kyverno validates incoming resource requests against defined rules, ensuring compliance with organizational standards. For example, it can enforce that all pods specify resource requests and limits or disallow the use of privileged containers.
- Mutation: One of Kyverno's standout features is its ability to modify incoming resources dynamically. This can be used to add default labels, inject sidecar containers, or set default values for resource specifications.
- Generation: Kyverno can automatically create or update resources, such as generating ConfigMaps, Secrets, or RBAC roles required by an application. This reduces the need for manual resource provisioning.
- Audit Mode: Kyverno can operate in an audit mode to evaluate how existing resources conform to defined policies without blocking operations. This is especially useful for understanding policy impact before enforcing them.
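To give a feel for the YAML-native syntax, here is a small, illustrative validation policy (not one of the policies shipped with this module) that requires Pods to carry a `team` label; the policy name and label are hypothetical.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label        # hypothetical policy name
spec:
  # Enforce = block non-compliant requests; Audit = only report them.
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'team' is required on all Pods."
        pattern:
          metadata:
            labels:
              team: "?*"          # any non-empty value
```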
Kyverno in KFD
Kyverno is deployed in HA mode and, by default, whitelists the KFD infra namespaces in its webhooks.
Pre-configured policies
This package comes with a set of predefined policies from the main Kyverno repository. These policies are our own KFD baseline, and are similar to what is provided with the Gatekeeper package.
Policy | Description |
---|---|
disallow-capabilities-strict | Adding capabilities other than `NET_BIND_SERVICE` is disallowed. In addition, all containers must explicitly drop `ALL` capabilities. |
disallow-capabilities | Adding capabilities beyond those listed in the policy must be disallowed. |
disallow-host-namespaces | Host namespaces (Process ID namespace, Inter-Process Communication namespace, and network namespace) allow access to shared information and can be used to elevate privileges. Pods should not be allowed access to host namespaces. This policy ensures fields which make use of these host namespaces are unset or set to `false`. |
disallow-host-path | HostPath volumes let Pods use host directories and volumes in containers. Using host resources can be used to access shared data or escalate privileges and should not be allowed. This policy ensures no `hostPath` volumes are in use. |
disallow-host-ports | Access to host ports allows potential snooping of network traffic and should not be allowed, or at minimum restricted to a known list. This policy ensures the `hostPort` field is unset or set to `0`. |
disallow-latest-tag | The `:latest` tag is mutable and can lead to unexpected errors if the image changes. A best practice is to use an immutable tag that maps to a specific version of an application Pod. This policy validates that the image specifies a tag and that it is not called `latest`. |
disallow-privilege-escalation | Privilege escalation, such as via set-user-ID or set-group-ID file mode, should not be allowed. This policy ensures the `allowPrivilegeEscalation` field is set to `false`. |
disallow-privileged-containers | Privileged mode disables most security mechanisms and must not be allowed. This policy ensures Pods do not call for privileged mode. |
disallow-proc-mount | The default /proc masks are set up to reduce attack surface and should be required. This policy ensures nothing but the default `procMount` can be specified. |
require-pod-probes | Liveness and readiness probes need to be configured to correctly manage a Pod's lifecycle during deployments, restarts, and upgrades. |
require-run-as-nonroot | Containers must be required to run as non-root users. This policy ensures `runAsNonRoot` is set to `true`. |
restrict-sysctls | Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed "safe" subset. |
unique-ingress-host-and-path | This policy ensures that no Ingress can be created or updated unless it is globally unique with respect to host plus path combination. |
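For reference, the `disallow-latest-tag` policy from the upstream Kyverno policy library is roughly shaped like the sketch below (simplified here for illustration; the manifest actually shipped with the package may differ in details such as the failure action and annotations).

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Audit   # report violations without blocking requests
  background: true                 # also scan existing resources
  rules:
    # Every container image must specify a tag...
    - name: require-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "An image tag is required."
        pattern:
          spec:
            containers:
              - image: "*:*"
    # ...and that tag must not be the mutable 'latest'.
    - name: validate-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Using a mutable image tag e.g. 'latest' is not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```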
Differences between OPA Gatekeeper and Kyverno
The following table gives a short summary of the differences between the two engines:
Feature | OPA Gatekeeper | Kyverno |
---|---|---|
Policy Language | Rego | Kubernetes-native YAML |
Core Capabilities | Validation, Audit | Validation, Mutation, Generation |
Learning Curve | Steeper | Easier (Kubernetes-native syntax) |
Focus | General-purpose policies, Cross-platform | Kubernetes-specific policies |
Performance | Suitable for complex policies | Lightweight, optimized for Kubernetes |
Extensibility | Cross-platform | Kubernetes-focused |
Monitoring
Gatekeeper is configured by default in this module to expose Prometheus metrics about its health, performance, and operation.
You can monitor and review these metrics by checking out the provided Grafana dashboard. (This requires the KFD Monitoring Module to be installed).
Go to your cluster's Grafana and search for the "Gatekeeper" dashboard:

You can also use Gatekeeper Policy Manager to view the Constraints Templates, Constraints, and Violations in a simple-to-use UI.

Two alerts are also provided by default with the module. They are triggered when the number of errors seen by the Kubernetes API server while trying to contact Gatekeeper's webhook is too high, both for fail-open (`Ignore`) mode and `Fail` mode:
Alert | Description |
---|---|
GatekeeperWebhookFailOpenHigh | Gatekeeper is not enforcing {{$labels.type}} requests to the API server. |
GatekeeperWebhookCallError | Kubernetes API server is rejecting all requests because Gatekeeper's webhook '{{ $labels.name }}' is failing for '{{ $labels.operation }}'. |
Notice that the alert for when the Gatekeeper webhook is in `Ignore` mode (the default) depends on an API server metric that was added in Kubernetes version 1.24. Previous versions of Kubernetes won't trigger alerts when the webhook is failing and in `Ignore` mode.
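For reference, alerts of this kind are typically built on API server admission metrics such as `apiserver_admission_webhook_fail_open_count` (the metric added in Kubernetes 1.24 mentioned above) and `apiserver_admission_webhook_rejection_count`. The sketch below is a hypothetical PrometheusRule that illustrates the idea; it is not the exact rule shipped by the module, and the thresholds, durations, and label matchers are placeholders.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: gatekeeper-webhook-example-alerts   # hypothetical name
spec:
  groups:
    - name: gatekeeper-webhook.example
      rules:
        # Fail-open (Ignore): requests are being admitted without being validated.
        - alert: GatekeeperWebhookFailOpenHigh
          expr: |
            sum by (name) (
              rate(apiserver_admission_webhook_fail_open_count{name=~".*gatekeeper.*"}[5m])
            ) > 0.1
          for: 10m
          labels:
            severity: warning
        # Fail-closed (Fail): requests are being rejected because the webhook call errors out.
        - alert: GatekeeperWebhookCallError
          expr: |
            sum by (name, operation) (
              rate(apiserver_admission_webhook_rejection_count{name=~".*gatekeeper.*", error_type="calling_webhook_error"}[5m])
            ) > 0.1
          for: 10m
          labels:
            severity: warning
```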