
SIGHUP Distribution Release v1.30.2

Welcome to SD release v1.30.2.

The distribution is maintained with ❤️ by the SIGHUP by ReeVo team.

New Features since v1.30.1

Installer Updates

  • on-premises 📦 installer: v1.32.3
    • Add support for installing Kubernetes 1.31.7
    • [#116] Add support for etcd cluster on dedicated nodes
    • [#124] Add support for kubeadm and kubelet reconfiguration

Module updates

  • networking 📦 core module: v2.1.0
    • Updated Tigera operator to v1.36.5 (which includes Calico v3.29.2)
    • Updated Cilium to v1.17.2
  • monitoring 📦 core module: v3.4.0
    • Updated alert-manager to v0.27.0
    • Updated x509-exporter to v3.18.1
    • Updated mimir to v2.15.0
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • logging 📦 core module: v5.0.0
    • Updated opensearch and opensearch-dashboards to v2.19.1
    • Updated logging-operator to v5.2.0
    • Updated loki to v3.4.2
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • ingress 📦 core module: v4.0.0
    • Updated cert-manager to v1.17.1
    • Updated external-dns to v0.16.1
    • Updated forecastle to v1.0.156
    • Updated nginx to v1.12.0
  • auth 📦 core module: v0.5.0
    • Updated dex to v2.42.0
    • Updated pomerium to v0.28.0
  • dr 📦 core module: v3.1.0
    • Updated velero to v1.15.2
    • Updated all velero plugins to v1.11.1
    • Added snapshot-controller v8.2.0
  • tracing 📦 core module: v1.2.0
    • Updated tempo to v2.1.1
    • Updated minio to version RELEASE.2025-02-28T09-55-16Z
  • policy 📦 core module: v1.14.0
    • Updated gatekeeper to v3.18.2
    • Updated kyverno to v1.13.4
  • aws 📦 module: v5.0.0
    • Updated cluster-autoscaler to v1.32.0
    • Removed snapshot-controller
    • Updated aws-load-balancer-controller to v2.12.0
    • Updated node-termination-handler to v1.25.0

Breaking changes 💔

  • Feature removal in Ingress NGINX Controller: upstream Ingress NGINX Controller has introduced some breaking changes in version 1.12.0, which is included in this version of the ingress module. We recommend reading the module's release notes for further information.

  • kustomize upgrade to version 5.6.0: plugins that use old, deprecated constructs in their kustomization.yaml files may no longer work. Please refer to the release notes of kustomize version 4.0.0 and version 5.0.0 for breaking changes that might affect your plugins.
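
    As a minimal sketch (the plugin and file names below are illustrative, not taken from this distribution), migrating a kustomization.yaml away from the deprecated constructs could look like this:

    # Before (constructs deprecated since kustomize v4/v5):
    bases:
      - ../base
    patchesStrategicMerge:
      - patch.yaml
    commonLabels:
      app: my-plugin

    # After (kustomize 5.x equivalents):
    resources:
      - ../base
    patches:
      - path: patch.yaml
    labels:
      - pairs:
          app: my-plugin
        includeSelectors: true # commonLabels also applied the labels to selectors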

  • Loki update: starting with v5.0.0 of the Logging core module, Loki has been bumped to version 3.4.2. Please refer to the Loki documentation for the complete release notes.

  • Policy update: potential breaking changes in Kyverno that depend on the target environment.

    • Removal of wildcard permissions: prior versions contained wildcard view permissions, which allowed the Kyverno controllers to view all resources. In v1.13, these were replaced with more granular permissions. This change does not impact policies during admission control, but it may impact reports and users with mutate and generate policies on custom resources, as the controllers may no longer be able to view them.
    • Default exception settings: in Kyverno v1.12 and earlier, policy exceptions were enabled by default for all namespaces. The new default in Kyverno v1.13 no longer automatically enables exceptions for all namespaces (to address CVE-2024-48921); instead, the namespaces to which exceptions apply must be explicitly configured.
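
      As a minimal sketch of what re-enabling exceptions might involve (the deployment name, namespace, and container index below are assumptions based on upstream Kyverno defaults, not taken from this distribution's manifests), a kustomize patch on the admission controller could look like this:

      # Hypothetical kustomization.yaml patch: allow PolicyExceptions only from the
      # "policy-exceptions" namespace. The flags come from upstream Kyverno; verify
      # them against the Kyverno v1.13 documentation and your actual installation.
      patches:
        - target:
            kind: Deployment
            name: kyverno-admission-controller # assumed upstream default name
            namespace: kyverno                 # assumed upstream default namespace
          patch: |-
            - op: add
              path: /spec/template/spec/containers/0/args/-
              value: --enablePolicyException=true
            - op: add
              path: /spec/template/spec/containers/0/args/-
              value: --exceptionNamespace=policy-exceptions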

New features 🌟

  • [#355] Support for etcd cluster on dedicated nodes: adds support to the OnPremises provider for deploying etcd on dedicated nodes instead of on the control plane nodes. For new clusters, users can define specific hosts for etcd, each with a name and an IP. If the etcd key is omitted, etcd will be provisioned on the control plane nodes. Migrating from etcd on the control plane nodes to dedicated nodes (and vice versa) is not currently supported.

    To make use of this new feature, you need to define the hosts where etcd will be deployed in your configuration file using the .spec.kubernetes.etcd key, for example:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
            - name: master2
              ip: 192.168.66.30
            - name: master3
              ip: 192.168.66.31
        etcd:
          hosts:
            - name: etcd1
              ip: 192.168.66.39
            - name: etcd2
              ip: 192.168.66.40
            - name: etcd3
              ip: 192.168.66.41
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
    ...
  • [#359] Add etcd backup to S3 and PVC: we added two new solutions for snapshotting your etcd cluster, allowing users to automatically and periodically save etcd snapshots to a PersistentVolumeClaim or to a remote S3 bucket.

    To make use of this new feature, you need to define how etcdBackup will be deployed in your configuration file, using the .spec.distribution.dr.etcdBackup key, for example:

    ...
    spec:
      distribution:
        dr:
          etcdBackup:
            type: "all" # can be: pvc, s3, all (pvc and s3), none
            backupPrefix: "" # prefix for the filename of the snapshot
            pvc:
              schedule: "0 2 * * *"
              # name: test-pvc # optional: if set, an existing PVC with this name is used; if unset, one is created for you
              size: 1G # size of the created PVC, ignored if name is set
              # accessModes: [] # access modes for the created PVC, ignored if name is set
              # storageClass: storageclass # storage class to use for the created PVC, ignored if name is set
              retentionTime: 10m # how long to keep the snapshots
            s3:
              schedule: "0 0 * * *"
              endpoint: play.min.io:9000 # S3 endpoint to upload the snapshots to
              accessKeyId: test
              secretAccessKey: test
              retentionTime: 10m
              bucketName: bucketname
    ...
  • [#368] Add support for kubeadm and kubelet reconfiguration in the OnPremises provider: this feature allows reconfiguring kubeadm and kubelet components after initial provisioning.

    The kubeletConfiguration key allows users to specify any parameter supported by the KubeletConfiguration object, at three different levels:

    • Global level (spec.kubernetes.advanced.kubeletConfiguration).
    • Master nodes level (spec.kubernetes.masters.kubeletConfiguration).
    • Worker node groups level (spec.kubernetes.nodes.kubeletConfiguration).

    Examples of use include controlling the maximum number of pods per core (podsPerCore), managing container logging (containerLogMaxSize), and setting Topology Manager options (topologyManagerPolicyOptions). All values must follow the official Kubelet specification. Usage examples:

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
          kubeletConfiguration:
            podsPerCore: 150
            systemReserved:
              memory: 1Gi
    ...

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.66.29
        nodes:
          - name: worker
            hosts:
              - name: worker1
                ip: 192.168.66.49
            kubeletConfiguration:
              podsPerCore: 200
              systemReserved:
                memory: 2Gi
    ...

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
        advanced:
          kubeletConfiguration:
            podsPerCore: 100
            enforceNodeAllocatable:
              - pods
              - system-reserved
            systemReserved:
              memory: 500Mi
    ...

    This feature also adds the kubeadm reconfiguration logic and exposes the missing apiServerCertSANs field, under the spec.kubernetes.advanced key.

    ...
    spec:
      kubernetes:
        masters:
          hosts:
            - name: master1
              ip: 192.168.56.20
        advanced:
          apiServerCertSANs:
            - my.domain.com
            - other.domain.net
    ...
Note

If you have previously made manual changes to the kubeadm ConfigMap or to the kubelet configuration on your cluster, these will be overwritten by this new feature. If you have made such manual changes, we recommend taking a backup of any custom configuration before upgrading. In single-master configurations, do not modify the reserved kubelet resources during the upgrade, but only after its completion, to avoid potential deadlocks.

Fixes 🐞

  • [#334] Fix to policy module templates: setting the policy module type to gatekeeper and the additionalExcludedNamespaces option for Kyverno at the same time resulted in an error due to a bug in the templates' logic; this has been fixed.
  • [#336] Fix race condition when deleting Kyverno: changing the policy module type from kyverno to none could sometimes result in a race condition where the API for the ClusterPolicy CRD was unregistered before the ClusterPolicy objects were deleted, causing the deletion command to fail. The deletion command has been tweaked to avoid this condition.
  • [#344] Fix CIDR block additional firewall rule in EKS clusters: removed the limitation to a single CIDR block additional firewall rule, as the EKS installer supports a list of them.
  • [#348] Fix Get previous cluster configuration failure on first apply: fixed an issue with furyctl apply for on-premises clusters that made it fail with an ansible-playbook create-playbook.yaml: command failed - exit status 2 error the very first time it was executed.
  • [#362] Fix template config files: fixed wrong documentation links inside configuration files created with furyctl create config.
  • [#364] Fix node placement for Monitoring's Minio: added the missing placement patch (nodeSelector and tolerations) to Minio's setup job.

Security fixes

  • [module-ingress/#146] Fix Ingress-nginx CVEs (CVE-2025-24513, CVE-2025-24514, CVE-2025-1097, CVE-2025-1098 and CVE-2025-1974): upstream Ingress NGINX Controller has corrected a few critical CVEs in version 1.12.1, included in this version of the distribution.

Upgrade procedure

Check the upgrade docs for the detailed procedure.