
v1.23.3 to 1.23.4 Upgrade Guide

This guide describes the steps to follow to upgrade the Kubernetes Fury Distribution (KFD) from v1.23.3 to v1.23.4.

If you are running a custom set of modules, or different versions than the ones included with each release of KFD, please refer to each module's release notes.

Notice that this guide does not cover changes related to the cloud provider, ingresses, or pod placement; it covers only changes related to KFD and its modules.

⛔ī¸ IMPORTANT we strongly recommend reading the whole guide before starting the upgrade process to identify possible blockers.

⚠ī¸ WARNING the upgrade process involves downtime of some components.

Upgrade procedure

As a high-level overview, the upgrade procedure consists of:

  1. Upgrading KFD (all the core modules).
  2. Upgrading the Kubernetes cluster itself.

1. Upgrade KFD

The suggested approach to upgrade the distribution is to do it one module at a time, to reduce the risk of errors and to make the process more manageable.

Networking module upgrade

To upgrade the Networking module to the new version, update its version in the Furyfile.yml file:

versions:
  networking: v1.12.2
  ...
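For context, a minimal sketch of how the relevant part of a Furyfile.yml might look; the bases entry below is purely illustrative (keep your existing bases list as it is and only bump the version):

versions:
  networking: v1.12.2

bases:
  # illustrative entry: keep the bases you already have
  - name: networking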

Then, download the new modules with furyctl by running:

furyctl vendor -H

Apply your Kustomize project that uses Networking module packages as bases with:

⛔ī¸ IMPORTANT you may want to limit the scope of the command to only the networking module, otherwise, the first time you apply with --server-side other pods may also be restarted.

The same applies to the rest of the modules, we will not include this warning in every step for simplicity.

kustomize build <your-project-path/networking> | kubectl apply -f - --server-side

Wait until all Calico pods are restarted and running. You can check Calico's Grafana dashboard "General / Felix Dashboard (Calico)" and the "Networking / *" dashboards to make sure everything is working as expected.
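If you prefer checking from the command line, a minimal sketch assuming Calico runs as the calico-node DaemonSet in the kube-system namespace (adjust names and namespace to your setup):

# wait for the Calico node DaemonSet to finish rolling out
kubectl rollout status daemonset calico-node -n kube-system
# confirm all Calico pods are Running
kubectl get pods -n kube-system -l k8s-app=calico-node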

Monitoring module upgrade

⚠ī¸ WARNING downtime for the Monitoring stack is expected during this process.

To upgrade the Monitoring module to the new version, update its version in the Furyfile.yml file:

versions:
  ...
  monitoring: v2.1.0
  ...

Then, download the new modules with furyctl by running:

furyctl vendor -H

Then apply your Kustomize project that uses Monitoring module packages as bases with:

kustomize build <your-project-path/monitoring> | kubectl apply -f - --server-side

Wait a minute and check that you can see both old and new metrics in Grafana, that all Prometheus targets are up, and that Alertmanager is working as expected.
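If you prefer the command line, a sketch assuming the kube-prometheus naming typically used by the stack (a prometheus-k8s and an alertmanager-main service in the monitoring namespace; adjust to your setup):

# expose the Prometheus UI locally, then open http://localhost:9090/targets
kubectl port-forward svc/prometheus-k8s 9090 -n monitoring
# in another terminal, expose Alertmanager, then open http://localhost:9093
kubectl port-forward svc/alertmanager-main 9093 -n monitoring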

If using Thanos

⚠ī¸ WARNING this will cause downtime in Thanos components, some metrics and alerts could be missed while performing the upgrade.

  • Since this release ships changes to the architecture of Thanos, the upgrade process requires the deletion of all resources from the old version of the module.

To upgrade the Thanos core module from v0.24.0 to v0.30.2, execute the following steps:

  1. Delete the old deployment. For example, if you were using thanos-with-store:
kustomize build katalog/thanos/thanos-with-store | kubectl delete -n monitoring -f -
  2. Add the right base to your Kustomize project. For example, Thanos using MinIO as storage and pointing to a single Prometheus:
resources:
  ...
  - "../../vendor/katalog/monitoring/thanos/thanos-minio/single"

Note: prometheus-operator and prometheus-operated are already included inside the thanos package as bases.

  3. Finally, apply the new deployment, as shown below.
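To apply the new deployment, re-apply the Monitoring project the same way as in the previous step:

kustomize build <your-project-path/monitoring> | kubectl apply -f - --server-side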

Logging module upgrade

⚠ī¸ WARNING downtime of the Logging stack is expected during this process.

To upgrade the Logging module to the new version, update its version in the Furyfile.yml file:

versions:
  ...
  logging: v3.1.3
  ...

Then, download the new modules with furyctl by running:

furyctl vendor -H

Since this upgrade changes the major version, there are some manual steps involving breaking changes that you need to do before applying the project:

Remove the old minio stack:

kubectl delete sts minio -n logging

Apply your Kustomize project that uses Logging module packages as bases with:

kustomize build <your-project-path/logging> | kubectl apply -f - --server-side
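After applying, you can check that the pods in the logging namespace are recreated and eventually reach the Running state, for example:

# watch the Logging stack pods until they are all Running
kubectl get pods -n logging -w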

Ingress module upgrade

⚠ī¸ WARNING some downtime of the NGINX Ingress Controller is expected during the upgrade process.

To upgrade the Ingress module to the new version, update its version in the Furyfile.yml file:

versions:
  ...
  ingress: v1.14.1
  ...

Then, download the new modules with furyctl by running:

furyctl vendor -H

Apply your Kustomize project that uses Ingress module packages as bases with:

kustomize build <your-project-path/ingress> | kubectl apply -f - --server-side

ℹī¸ INFO you may need to apply twice or thrice because a new Validating webhook is added with this release and it needs some time to come up.

Disaster Recovery module upgrade

To upgrade the Disaster Recovery module to the new version, update its version in the Furyfile.yml file:

versions:
  ...
  dr: v1.11.0
  ...

Then, download the new modules with furyctl by running:

furyctl vendor -H

If they are in use, delete the old velero-restic DaemonSet, which has been replaced by the new node-agent DaemonSet, along with the old minio-setup Job:

kubectl delete ds velero-restic -n kube-system
kubectl delete job minio-setup -n kube-system

Apply your Kustomize project that uses Disaster Recovery module packages as bases with:

kustomize build <your-project-path/dr> | kubectl apply -f - --server-side

Check that all of Velero's pods are up and running. You may want to manually trigger a backup to test that everything is working as expected. For example:

# create a backup
velero backup create test-upgrade --from-schedule manifests -n kube-system
# ... wait a moment
# check that the Phase is Completed
velero backup get test-upgrade -n kube-system
# you may want to see some details
velero backup describe test-upgrade -n kube-system

💡 TIP you can port-forward MinIO's UI and log in to check that the backups are there.
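A sketch of the port-forward, assuming the MinIO service used by Velero is named minio and lives in the kube-system namespace together with the rest of the DR stack (adjust name, namespace, and port to your setup):

# expose the MinIO UI locally, then open http://localhost:9000 and log in
kubectl port-forward svc/minio 9000 -n kube-system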

OPA module upgrade

To upgrade the OPA module to the new version, update its version in the Furyfile.yml file:

versions:
  ...
  opa: v1.8.0
  ...

Then, download the new modules with furyctl by running:

furyctl vendor -H

Apply your Kustomize project that uses OPA module packages as bases with:

kustomize build <your-project-path/opa> | kubectl apply -f - --server-side

You can try to deploy a pod that is not compliant with the rules deployed in the cluster, and also check Gatekeeper Policy Manager for new violations of the constraints.
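A minimal sketch of such a test; which pod actually gets rejected depends on the constraints enforced in your cluster, so adapt the spec to violate one of them (for example, a missing required label or probe):

# try to create a pod that should violate one of the enforced constraints
kubectl run opa-test --image=nginx -n default
# if a constraint matches, the request is denied by the Gatekeeper admission webhook;
# clean up in case the pod was admitted anyway
kubectl delete pod opa-test -n default --ignore-not-found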

ℹī¸ INFO seeing errors like http: TLS handshake error from 172.16.0.3:42672: EOF in Gatekeeper Controller logs is normal. The error is considered harmless. See Gatekeeper's issue #2142 for reference.

Auth module upgrade

To upgrade the Auth module to the new version, update its version in the Furyfile.yml file:

versions:
  ...
  auth: v0.0.3
  ...

If you were already using these components (e.g. Pomerium and Dex), adjust your Kustomize project to use the new Auth module packages as bases.

Then, download the new modules with furyctl by running:

furyctl vendor -H

To upgrade this module from v0.0.2 to v0.0.3, you need to download the new version and make the following changes before applying the Kustomize project:

  1. Edit your policy configuration file to use routes instead of policy, for example, from:
policy:
  # from and to should be set to the prometheus ingress
  - from: https://prometheus.example.com
    to: https://prometheus.example.com
    allowed_idp_claims:
      groups:
        # ldap groups configured in dex
        - group1
        - group2

to:

routes:
  - from: https://prometheus.example.com
    to: https://prometheus.monitoring.svc # notice the internal service. See (2.) below.
    policy:
      - allow:
          or:
            - claim/groups: group1
            - claim/groups: group2
            # - email:
            #     is: someone@sighup.io

  2. Forward auth mode has been deprecated by Pomerium in v0.21.0 (the version included in this release); you will need to switch to proxy auth.

If you were using forward auth with annotations in your ingresses, you will need to adjust them.

For example, if you had an ingress for Grafana in the monitoring namespace with the following annotations:

    nginx.ingress.kubernetes.io/auth-url: "https://pomerium.example.com/verify?uri=$scheme://$host$request_uri"
    nginx.ingress.kubernetes.io/auth-signin: "https://pomerium.example.com/?uri=$scheme://$host$request_uri"

You will have to:

2.1. Create a new Ingress in the pomerium namespace with the hostname for Grafana, using the pomerium service and its http port as the backend (see the sketch below).
2.2. In the policy definition file, make sure that the from field matches the hostname and that the to field points to Grafana's service (the same one used by the Ingress in the monitoring namespace).
2.3. Delete the Ingress in the monitoring namespace.
2.4. Repeat for the other ingresses.
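A minimal sketch of the Ingress for step 2.1, assuming the Pomerium service is named pomerium and exposes a port named http in the pomerium namespace (adjust names, ingress class, and TLS configuration to your setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: pomerium
spec:
  ingressClassName: nginx
  rules:
    - host: grafana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pomerium
                port:
                  name: http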

Apply your Kustomize project that uses Auth module packages as bases with:

kustomize build <your-project-path/auth> | kubectl apply -f - --server-side
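After applying, check that the Auth components are up, for example assuming Pomerium runs in the pomerium namespace as referenced above:

# verify that the Pomerium pods (and the other Auth module components) are Running
kubectl get pods -n pomerium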

🎉 CONGRATULATIONS you have now successfully updated all the core modules to KFD 1.23.4.