Version: 1.22.X

Fury on OVHcloud

This step-by-step tutorial guides you through deploying the Kubernetes Fury Distribution on a Managed Kubernetes cluster on OVHcloud.

This tutorial covers the following steps:

  1. Deploy a Managed Kubernetes cluster on OVHcloud with Terraform
  2. Download the latest version of Fury with furyctl
  3. Install the Fury distribution
  4. Explore some features of the distribution
  5. Teardown of the environment

⚠️ OVHcloud charges you to provision the resources used in this tutorial. You should be charged only a few euros, but we are not responsible for any costs you incur.

❗️ Remember to remove all the instances by following all the steps listed in the teardown phase.

💻 If you prefer trying Fury in a local environment, check out the Fury on Minikube tutorial.


This tutorial assumes some basic familiarity with Kubernetes and OVHcloud. Some experience with Terraform is helpful but not required.

To follow this tutorial, you need:

  • OVHcloud Account - You must have an active account to use OVHcloud services. Use this guide to create one.
  • OVHcloud Public Cloud project - You must have a Public Cloud Project. Use this guide to create one.
  • OVHcloud OpenStack User - To manage the network environment with the Terraform provider, you must have an OpenStack user. Use this guide to create one.
  • OVHcloud vRack - Enables routing traffic between OVHcloud dedicated servers as well as other OVHcloud services. Use this guide to activate it.

Step 1 - Automatic provisioning of a Managed Kubernetes Cluster in a private network

We use the terraform CLI to automatically deploy the private network, and then use it for the Managed Kubernetes Cluster.

Terraform Provider      Credentials
OVH Provider            OVHcloud API credentials
OpenStack Provider      OpenStack user credentials


The tools we need are furyctl, terraform, kubectl and kustomize.

Click on the desired tool to see how to install it:

1 - furyctl

Install the latest furyctl version from its GitHub releases page.

Example on an Ubuntu distribution:

wget -q "https://github.com/sighupio/furyctl/releases/latest/download/furyctl-$(uname -s)-amd64" -O /tmp/furyctl \
&& chmod +x /tmp/furyctl \
&& sudo mv /tmp/furyctl /usr/local/bin/furyctl

See furyctl's readme for more installation methods.

2 - terraform CLI

Install the latest terraform CLI from the Hashicorp official download page.

Example on an Ubuntu distribution:

wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install terraform
3 - kubectl

Install the kubectl CLI to manage the Managed Kubernetes Cluster, by following the Official Kubernetes Documentation.

Example on an Ubuntu distribution:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
&& sudo mv ./kubectl /usr/local/bin/kubectl \
&& sudo chmod 0755 /usr/local/bin/kubectl
4 - kustomize v3.5.3

Install the kustomize v3.5.3 CLI, by following the Official Kubernetes Documentation.

Example on an Ubuntu distribution:

wget "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.5.3/kustomize_v3.5.3_linux_amd64.tar.gz" \
&& tar -zxvf ./kustomize_v3.5.3_linux_amd64.tar.gz \
&& chmod u+x ./kustomize \
&& sudo mv ./kustomize /usr/local/bin/kustomize \
&& rm ./kustomize_v3.5.3_linux_amd64.tar.gz

Or use the fury-getting-started docker image:

docker run -ti --rm \
  -v $PWD:/demo \
  registry.sighup.io/delivery/fury-getting-started
⚠️ You need to update the included terraform version. Use the following command:

export TERRAFORM_VERSION="1.3.6" \
&& curl -L "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" -o /tmp/terraform_${TERRAFORM_VERSION}.zip \
&& unzip /tmp/terraform_${TERRAFORM_VERSION}.zip -d /tmp \
&& rm /tmp/terraform_${TERRAFORM_VERSION}.zip \
&& alias terraform=/tmp/terraform

Or use the script provided in the infrastructure/ folder to run all the installation commands at once.


1 - OVHcloud Terraform Provider variables

To manage OVHcloud components, you must set environment variables matching your OVHcloud API credentials. Get your API credentials from the OVHcloud API token creation page.

Give your application a name, a description, and a validity date, then add the rights GET/POST/PUT/DELETE on the endpoint /cloud/project/*.

This returns three values: the Application key, the Application secret, and the Consumer key.

Use these values to set environment variables like this:

export OVH_APPLICATION_KEY="xxxxxxxxxxxxxxxxxxxx"
export OVH_APPLICATION_SECRET="xxxxxxxxxxxxxxxxx"
export OVH_CONSUMER_KEY="xxxxxxxxxxxxxxxxxxxxxxx"

You also need to define the API endpoint and base URL. Choose the values according to your region: ovh-eu (Europe), ovh-us (United States), or ovh-ca (Canada).


Example for eu zone:

export OVH_ENDPOINT=ovh-eu
export OVH_BASEURL="https://eu.api.ovh.com/1.0"

The last variable needed by the provider is your Public Cloud project ID. You can get it from your OVHcloud Public Cloud dashboard: it's the xxxxxxxxxxxxxx part of the URL, or copy it from the dashboard just under the project name.


export OVH_CLOUD_PROJECT_SERVICE="xxxxxxxxxxxxxxxxxxxxx"

That's all you need to use the OVHcloud Terraform Provider.

2 - OVHcloud OpenStack User variables

Download your OpenStack user's openrc file and use it to set the necessary variables:

export OS_AUTH_URL=https://auth.cloud.ovh.net/v3
export OS_TENANT_ID="xxxxxxxxxxxxxxxxxx"
export OS_TENANT_NAME="xxxxxxxxxxxxxxxx"
export OS_USERNAME="user-xxxxxxxxxxxxxx"
export OS_PASSWORD="xxxxxxxxxxxxxxxxxxx"
export OS_REGION_NAME="xxx"

You are ready to use the OpenStack Terraform Provider.

3 - (Optional) Create a variables file from template

You can create an ovhrc file from the ovhrc.template template, to store your variables.


# OpenStack vars from openrc file
export OS_AUTH_URL=https://auth.cloud.ovh.net/v3
export OS_TENANT_ID="xxxxxxxxxxxxxxxxxx"
export OS_TENANT_NAME="xxxxxxxxxxxxxxxx"
export OS_USERNAME="user-xxxxxxxxxxxxxx"
export OS_PASSWORD="xxxxxxxxxxxxxxxxxxx"
export OS_REGION_NAME="xxx"

# OVH API vars from OVHcloud manager
export OVH_ENDPOINT=ovh-eu
export OVH_BASEURL="https://eu.api.ovh.com/1.0"
export OVH_APPLICATION_KEY="xxxxxxxxxxxxxxx"
export OVH_APPLICATION_SECRET="xxxxxxxxxxxx"
export OVH_CONSUMER_KEY="xxxxxxxxxxxxxxxxxx"
export OVH_CLOUD_PROJECT_SERVICE="xxxxxxxxx" # Must be the same as OS_TENANT_ID

Then simply source this file to load all variables into your session:

source ./ovhrc
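Before running Terraform, it can help to confirm that every required variable is actually set in your session. The helper below is a hypothetical sketch (the require_env function is not part of the tutorial's tooling); the variable names are the ones exported above:

```shell
#!/usr/bin/env bash
# Hypothetical helper: print every required variable that is still empty,
# so a failed `source ./ovhrc` is caught before running terraform.
require_env() {
  local status=0 v
  for v in "$@"; do
    # ${!v} is bash indirect expansion: the value of the variable named by $v
    if [ -z "${!v}" ]; then
      echo "missing: $v" >&2
      status=1
    fi
  done
  return $status
}

require_env OVH_APPLICATION_KEY OVH_APPLICATION_SECRET OVH_CONSUMER_KEY \
            OVH_CLOUD_PROJECT_SERVICE OS_AUTH_URL OS_USERNAME OS_PASSWORD \
  || echo "Some variables are missing: source ./ovhrc and try again."
```

This catches typos in the ovhrc file early, instead of letting terraform plan fail later with a less obvious provider error.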

Deploy the Kubernetes cluster

We use Terraform to deploy the network and the Managed Kubernetes Cluster. A variables.tfvars file is provided with default values that you can use as-is, or change if needed:

# Region

region = "GRA7"

# Network - Private Network

network = {
  name = "furyNetwork"
}

# Network - Subnet

subnet = {
  name       = "furySubnet"
  cidr       = ""
  dhcp_start = ""
  dhcp_end   = ""
}

# Network - Router

router = {
  name = "furyRouter"
}

# Managed Kubernetes Cluster

kube = {
  name            = "furykubernetesCluster"
  version         = "1.24"
  pv_network_name = "furyNetwork"
  gateway_ip      = ""
}

pool = {
  name          = "furypool"
  flavor        = "b2-7"
  desired_nodes = "3"
  max_nodes     = "6"
  min_nodes     = "3"
}

  • region: The region where you want to deploy your infrastructure.
  • network: The network name.
  • subnet: The subnet parameters, like the CIDR IP format value and DHCP range.
  • router: The router name.
  • kube: The Managed Kubernetes Cluster parameters, such as its name, version, and network configuration.
  • pool: The Kubernetes node pool parameters.

Deploy the infrastructure with:

cd infrastructure/terraform

terraform init
terraform plan -var-file=variables.tfvars
terraform apply -var-file=variables.tfvars

Wait a few minutes until the end of the deployment.

Configure your kubectl environment

Once the Managed Kubernetes Cluster has been created, you will get the associated kubeconfig file in the terraform folder. Set the KUBECONFIG environment variable value like this:

export KUBECONFIG="$PWD/kubeconfig"

Your kubectl CLI is ready to use.
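As a quick sanity check, you can verify that Terraform actually wrote the kubeconfig file before pointing kubectl at it. This is a sketch; the check_kubeconfig helper is illustrative, not part of the tutorial's tooling:

```shell
#!/usr/bin/env bash
# Illustrative helper: verify the kubeconfig produced by Terraform exists
# before exporting KUBECONFIG.
check_kubeconfig() {
  if [ -f "$1" ]; then
    echo "kubeconfig found: $1"
  else
    echo "kubeconfig missing at $1: did terraform apply finish?" >&2
    return 1
  fi
}

check_kubeconfig "$PWD/kubeconfig" \
  && export KUBECONFIG="$PWD/kubeconfig" \
  || echo "Run this from the infrastructure/terraform folder after terraform apply."
```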

Step 2 - Installation

In this section, you use furyctl to download the monitoring, logging, and ingress modules of the Fury distribution.

Inspect the Furyfile

furyctl needs a Furyfile.yml to know which modules to download.

For this tutorial, use the following Furyfile.yml:

versions:
  monitoring: v2.0.1
  logging: v3.0.1
  ingress: v1.13.1

bases:
  - name: monitoring
  - name: logging
  - name: ingress

Download Fury modules

  1. Download the Fury modules with furyctl:
furyctl vendor -H
  2. Inspect the downloaded modules in the vendor folder:
tree -d vendor -L 3


$ tree -d vendor -L 3

vendor
└── katalog
    ├── ingress
    │   ├── cert-manager
    │   ├── dual-nginx
    │   ├── external-dns
    │   ├── forecastle
    │   ├── nginx
    │   └── tests
    ├── logging
    │   ├── cerebro
    │   ├── configs
    │   ├── logging-operated
    │   ├── logging-operator
    │   ├── loki-configs
    │   ├── loki-single
    │   ├── opensearch-dashboards
    │   ├── opensearch-single
    │   ├── opensearch-triple
    │   └── tests
    └── monitoring
        ├── aks-sm
        ├── alertmanager-operated
        ├── blackbox-exporter
        ├── configs
        ├── eks-sm
        ├── gke-sm
        ├── grafana
        ├── kubeadm-sm
        ├── kube-proxy-metrics
        ├── kube-state-metrics
        ├── node-exporter
        ├── prometheus-adapter
        ├── prometheus-operated
        ├── prometheus-operator
        ├── tests
        ├── thanos
        └── x509-exporter

Kustomize project

Kustomize groups related Kubernetes resources and combines them to create more complex deployments. Moreover, it is flexible, and it enables a simple patching mechanism for additional customization.

To deploy the Fury distribution, use the following root kustomization.yaml, located at manifests/kustomization.yaml:

kind: Kustomization

resources:
  - ingress
  - logging
  - monitoring

This kustomization.yaml wraps the kustomization.yaml files in the subfolders. For example, manifests/logging/kustomization.yaml:

kind: Kustomization

resources:
  - ../../vendor/katalog/logging/cerebro
  - ../../vendor/katalog/logging/logging-operator
  - ../../vendor/katalog/logging/logging-operated
  - ../../vendor/katalog/logging/configs
  - ../../vendor/katalog/logging/opensearch-single
  - ../../vendor/katalog/logging/opensearch-dashboards
  - resources/ingress.yml

patchesStrategicMerge:
  - patches/opensearch-resources.yml
  - patches/cerebro-resources.yml

Each kustomization.yaml:

  • references the modules downloaded in the previous section
  • patches the upstream modules (e.g. patches/opensearch-resources.yml limits the resources requested by OpenSearch)
  • deploys some additional custom resources (e.g. resources/ingress.yml)

Install the modules:

cd manifests/

make apply
# The first run may error while some CRDs are still being created: run make apply again until no errors remain.
make apply
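The "run it until you see no more errors" step can also be scripted. Below is a minimal retry sketch, assuming make apply is idempotent (it only re-applies the same manifests); the retry helper and its RETRY_MAX/RETRY_DELAY knobs are illustrative, not part of the Fury tooling:

```shell
#!/usr/bin/env bash
# Illustrative retry loop: re-run a command until it exits 0, because the CRDs
# created on the first pass must be registered before the resources that use
# them can be applied.
retry() {
  local n=0 max="${RETRY_MAX:-5}" delay="${RETRY_DELAY:-10}"
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    echo "attempt $n failed, retrying in ${delay}s..." >&2
    sleep "$delay"
  done
}

# Usage, from the manifests/ folder:
#   retry make apply
```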

Step 3 - Explore the distribution

🚀 The distribution is finally deployed! In this section, you explore some of its features.

Setup local DNS

In Step 2, alongside the distribution, you deployed Kubernetes Ingresses that expose the underlying services (Forecastle, Grafana, and OpenSearch Dashboards) over HTTP.


To access the ingresses more easily via the browser, configure your local DNS to resolve the ingresses to the external load balancer IP:

  1. Get the address of the external load balancer:
kubectl get svc -n ingress-nginx ingress-nginx -ojsonpath='{.status.loadBalancer.ingress[*].ip}'
  2. Add a line to your machine's /etc/hosts (not the container's) mapping the load balancer IP to the ingress hostnames.
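For example, you can generate the line to append. In this sketch both the IP and the fury.info hostnames are placeholders: use the address returned by the command above and the hostnames defined in your ingress resources:

```shell
# Build the /etc/hosts line from the load balancer address.
# 203.0.113.10 and the fury.info hostnames below are placeholders.
LB_IP="203.0.113.10"
printf '%s %s %s %s\n' "$LB_IP" \
  forecastle.fury.info grafana.fury.info opensearch-dashboards.fury.info
# Append the printed line to /etc/hosts on the host machine, e.g.:
#   printf '%s %s\n' "$LB_IP" forecastle.fury.info | sudo tee -a /etc/hosts
```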

Now, you can reach the ingresses directly from your browser.

⚠️ We are using an external load balancer for demo purposes only. In a real environment, don't expose dashboards on a public network; prefer an internal load balancer.


Forecastle

Forecastle is an open-source control panel where you can access all exposed applications running on Kubernetes.

Navigate to the Forecastle ingress URL to see all the other ingresses deployed, grouped by namespace.



Grafana

Grafana is an open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on and understand your metrics.

Navigate to the Grafana ingress URL or click the Grafana icon in Forecastle.

Fury provides some pre-configured dashboards to visualize the state of the cluster. Examine an example dashboard:

  1. Click on the search icon on the left sidebar.
  2. Type pods and press Enter.
  3. Select the Kubernetes/Pods dashboard.

This is what you should see:


OpenSearch Dashboards

OpenSearch Dashboards is an open-source analytics and visualization platform for OpenSearch. OpenSearch Dashboards lets you perform advanced data analysis and visualize data in various charts, tables, and maps. You can use it to search, view, and interact with data stored in OpenSearch indices.

Navigate to the OpenSearch Dashboards ingress URL or click its icon in Forecastle.

⚠️ Some background jobs need to run to finish the OpenSearch configuration. If you see a screen titled "Start by adding your data", wait a few minutes and try again.

Read the logs

The Fury Logging module already collects logs into the following indices:

  • kubernetes-*
  • systemd-*
  • ingress-controller-*
  • events-*

Click on Discover to see the main dashboard. In the top-left corner, select one of the indices to explore the logs.


Step 4 - Teardown

Clean up the demo environment:

cd ../infrastructure/terraform/
terraform destroy -var-file=variables.tfvars


Congratulations, you made it! 🥳🥳

We hope you enjoyed this tour of Fury on OVHcloud!


If you ran into any problems, feel free to open an issue here on GitHub.

Where to go next?

More tutorials:

More about Fury: