
Fury on OVH

This step-by-step tutorial guides you to deploy the Kubernetes Fury Distribution on a Managed Kubernetes cluster on OVHcloud.

This tutorial covers the following steps:

  1. Deploy a Managed Kubernetes cluster on OVHcloud with Terraform
  2. Download the latest version of Fury with furyctl
  3. Install the Fury distribution
  4. Explore some features of the distribution
  5. Teardown of the environment

⚠️ OVHcloud charges you for the resources provisioned in this tutorial. You should only be charged a few euros, but we are not responsible for any costs incurred.

❗️ Remember to remove all the instances by following all the steps listed in the teardown phase.

💻 If you prefer trying Fury in a local environment, check out the Fury on Minikube tutorial.

Prerequisites

This tutorial assumes some basic familiarity with Kubernetes and OVHcloud. Some experience with Terraform is helpful but not required.

To follow this tutorial, you need:

  • OVHcloud Account - You must have an active account to use OVHcloud services. Use this guide to create one.
  • OVHcloud Public Cloud project - You must have a Public Cloud Project. Use this guide to create one.
  • OVHcloud OpenStack User - To manage the network environment with the Terraform provider, you must have an OpenStack user. Use this guide to create one.
  • OVHcloud vRack - Enables you to route traffic between OVHcloud dedicated servers and other OVHcloud services. Use this guide to set it up.

Step 1 - Automatic provisioning of a Managed Kubernetes Cluster in a private network

We use the terraform CLI to automatically deploy the private network, and then deploy the Managed Kubernetes Cluster inside it.

Terraform Provider  | Credentials
OVH Provider        | OVHcloud API credentials
OpenStack Provider  | OpenStack user credentials

Pre-requisites

The tools we need are furyctl, terraform, kubectl and kustomize.

Click on the desired tool to see how to install it:

1 - furyctl

Install the latest furyctl version from its GitHub releases page.

Example on an Ubuntu distribution:

wget -q "https://github.com/sighupio/furyctl/releases/download/v0.9.0/furyctl-$(uname -s)-amd64" -O /tmp/furyctl \
&& chmod +x /tmp/furyctl \
&& sudo mv /tmp/furyctl /usr/local/bin/furyctl

See furyctl's readme for more installation methods.

2 - terraform CLI

Install the latest terraform CLI from the Hashicorp official download page.

Example on an Ubuntu distribution:

wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install terraform
3 - kubectl

Install the kubectl CLI to manage the Managed Kubernetes Cluster, by following the Official Kubernetes Documentation.

Example on an Ubuntu distribution:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
&& sudo mv ./kubectl /usr/local/bin/kubectl \
&& sudo chmod 0755 /usr/local/bin/kubectl
4 - kustomize v3.5.3

Install the kustomize v3.5.3 CLI, by following the Official Kubernetes Documentation.

Example on an Ubuntu distribution:

wget https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.5.3/kustomize_v3.5.3_linux_amd64.tar.gz \
&& tar -zxvf ./kustomize_v3.5.3_linux_amd64.tar.gz \
&& chmod u+x ./kustomize \
&& sudo mv ./kustomize /usr/local/bin/kustomize \
&& rm ./kustomize_v3.5.3_linux_amd64.tar.gz

Or use the fury-getting-started docker image:

docker run -ti --rm \
-v $PWD:/demo \
registry.sighup.io/delivery/fury-getting-started

⚠️ You need to update the included terraform version. Use the following command:

export TERRAFORM_VERSION="1.3.6" \
&& curl -L https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -o /tmp/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
&& unzip /tmp/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /tmp \
&& rm /tmp/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
&& alias terraform=/tmp/terraform

Or use the provided infrastructure/install_dependencies_ubuntu_debian.sh script to run all the installation commands at once.
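
Once the tools are installed, it is worth checking that they are all reachable from your PATH. The following version commands are standard for recent releases of each CLI and should simply print the installed version:

# Each command should print the version of the corresponding tool
furyctl version
terraform version
kubectl version --client
kustomize version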

Variables

1 - OVHcloud Terraform Provider variables

To manage OVHcloud components, you must set environment variables that match your OVHcloud API credentials. Get your API credentials from this page.

Give your application a name, a description, and a validity period, then add the GET/POST/PUT/DELETE rights on the endpoint /cloud/project/*.

This returns three values: the Application key, the Application secret, and the Consumer key.

Use these values to set environment variables like this:

export OVH_APPLICATION_KEY="xxxxxxxxxxxxxxxxxxxx"
export OVH_APPLICATION_SECRET="xxxxxxxxxxxxxxxxx"
export OVH_CONSUMER_KEY="xxxxxxxxxxxxxxxxxxxxxxx"

You also need to define the API endpoint and base URL. Choose the values from the table below according to your region:

OVH_ENDPOINT   | OVH_BASEURL
ovh-eu         | https://eu.api.ovh.com/1.0
ovh-us         | https://api.us.ovhcloud.com/1.0
ovh-ca         | https://ca.api.ovh.com/1.0
kimsufi-eu     | https://eu.api.kimsufi.com/1.0
kimsufi-ca     | https://ca.api.kimsufi.com/1.0
soyoustart-eu  | https://eu.api.soyoustart.com/1.0
soyoustart-ca  | https://ca.api.soyoustart.com/1.0

Example for the EU zone:

export OVH_ENDPOINT=ovh-eu
export OVH_BASEURL="https://eu.api.ovh.com/1.0"

The last variable needed by the provider is your Public Cloud project ID. You can get it from your OVHcloud Public Cloud dashboard: it is the xxxxxxxxxxxxxx part of the URL, or you can copy it from the dashboard just under the project name.

Example:

export OVH_CLOUD_PROJECT_SERVICE="xxxxxxxxxxxxxxxxxxxxx"

That's all you need to use the OVHcloud Terraform Provider.

2 - OVHcloud OpenStack User variables

Download your OpenStack user's openrc file and use it to set the necessary variables:

export OS_AUTH_URL=https://auth.cloud.ovh.net/v3
export OS_IDENTITY_API_VERSION=3
export OS_USER_DOMAIN_NAME=${OS_USER_DOMAIN_NAME:-"Default"}
export OS_PROJECT_DOMAIN_NAME=${OS_PROJECT_DOMAIN_NAME:-"Default"}
export OS_TENANT_ID="xxxxxxxxxxxxxxxxxx"
export OS_TENANT_NAME="xxxxxxxxxxxxxxxx"
export OS_USERNAME="user-xxxxxxxxxxxxxx"
export OS_PASSWORD="xxxxxxxxxxxxxxxxxxx"
export OS_REGION_NAME="xxx"

You are ready to use the OpenStack Terraform Provider.
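
Optionally, if you also have the OpenStack CLI (python-openstackclient) installed, you can sanity-check the credentials by requesting a token. This is not required for the rest of the tutorial; it is just a quick way to catch typos in the OS_* variables:

# Prints a token ID and expiry date if the OS_* credentials are valid
openstack token issue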

3 - (Optional) Create a variables file from the template

You can create an ovhrc file from the provided ovhrc.template to store your variables.

Example:

# OpenStack vars from openrc file
export OS_AUTH_URL=https://auth.cloud.ovh.net/v3
export OS_IDENTITY_API_VERSION=3
export OS_USER_DOMAIN_NAME=${OS_USER_DOMAIN_NAME:-"Default"}
export OS_PROJECT_DOMAIN_NAME=${OS_PROJECT_DOMAIN_NAME:-"Default"}
export OS_TENANT_ID="xxxxxxxxxxxxxxxxxx"
export OS_TENANT_NAME="xxxxxxxxxxxxxxxx"
export OS_USERNAME="user-xxxxxxxxxxxxxx"
export OS_PASSWORD="xxxxxxxxxxxxxxxxxxx"
export OS_REGION_NAME="xxx"

# OVH API vars from OVHcloud manager
export OVH_ENDPOINT=ovh-eu
export OVH_BASEURL="https://eu.api.ovh.com/1.0"
export OVH_APPLICATION_KEY="xxxxxxxxxxxxxxx"
export OVH_APPLICATION_SECRET="xxxxxxxxxxxx"
export OVH_CONSUMER_KEY="xxxxxxxxxxxxxxxxxx"
export OVH_CLOUD_PROJECT_SERVICE="xxxxxxxxx" # Must be the same as OS_TENANT_ID

Then simply source this file to load all variables into your session:

source ./ovhrc
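
To double-check that the variables are loaded in your current shell, you can list them:

# All the OVH_* and OS_* variables defined above should appear in the output
env | grep -E '^(OVH_|OS_)'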

Deploy the Kubernetes cluster

We use Terraform to deploy the network and the Managed Kubernetes Cluster. A variables.tfvars file is provided with default values that you can use as-is or adjust as needed:

# Region

region = "GRA7"

# Network - Private Network

network = {
  name = "furyNetwork"
}

# Network - Subnet

subnet = {
  name       = "furySubnet"
  cidr       = "10.0.0.0/24"
  dhcp_start = "10.0.0.100"
  dhcp_end   = "10.0.0.254"
}

# Network - Router

router = {
  name = "furyRouter"
}

# Managed Kubernetes Cluster

kube = {
  name            = "furykubernetesCluster"
  version         = "1.25"
  pv_network_name = "furyNetwork"
  gateway_ip      = "10.0.0.1"
}

pool = {
  name          = "furypool"
  flavor        = "b2-7"
  desired_nodes = "3"
  max_nodes     = "6"
  min_nodes     = "3"
}

  • region: The region where you want to deploy your infrastructure.
  • network: The network name.
  • subnet: The subnet parameters, such as the CIDR and the DHCP range.
  • router: The router name.
  • kube: The Managed Kubernetes Cluster parameters, such as its name, version, and network settings.
  • pool: The Kubernetes node pool parameters.

Deploy the infrastructure with:

cd infrastructure/terraform

terraform init
terraform plan -var-file=variables.tfvars
terraform apply -var-file=variables.tfvars

Wait a few minutes until the end of the deployment.

Configure your kubectl environment

Once the Managed Kubernetes Cluster has been created, you will find the associated kubeconfig file in the terraform folder. Set the KUBECONFIG environment variable like this:

export KUBECONFIG="$PWD/kubeconfig"

Your kubectl CLI is ready to use.
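
As a quick sanity check, list the worker nodes; with the default variables.tfvars you should eventually see three nodes in the Ready state:

# Lists the nodes of the Managed Kubernetes Cluster using the kubeconfig set above
kubectl get nodes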

Step 2 - Installation

In this section, you use furyctl to download the monitoring, logging, and ingress modules of the Fury distribution.

Inspect the Furyfile

furyctl needs a Furyfile.yml to know which modules to download.

For this tutorial, use the following Furyfile.yml:

versions:
  monitoring: v2.0.1
  logging: v3.0.1
  ingress: v1.13.1

bases:
  - name: monitoring
  - name: logging
  - name: ingress

Download Fury modules

  1. Download the Fury modules with furyctl:
furyctl vendor -H
  2. Inspect the downloaded modules in the vendor folder:
tree -d vendor -L 3

Output:

$ tree -d vendor -L 3

vendor
└── katalog
    ├── ingress
    │   ├── cert-manager
    │   ├── dual-nginx
    │   ├── external-dns
    │   ├── forecastle
    │   ├── nginx
    │   └── tests
    ├── logging
    │   ├── cerebro
    │   ├── configs
    │   ├── logging-operated
    │   ├── logging-operator
    │   ├── loki-configs
    │   ├── loki-single
    │   ├── opensearch-dashboards
    │   ├── opensearch-single
    │   ├── opensearch-triple
    │   └── tests
    └── monitoring
        ├── aks-sm
        ├── alertmanager-operated
        ├── blackbox-exporter
        ├── configs
        ├── eks-sm
        ├── gke-sm
        ├── grafana
        ├── kubeadm-sm
        ├── kube-proxy-metrics
        ├── kube-state-metrics
        ├── node-exporter
        ├── prometheus-adapter
        ├── prometheus-operated
        ├── prometheus-operator
        ├── tests
        ├── thanos
        └── x509-exporter

Kustomize project

Kustomize allows grouping related Kubernetes resources and combining them to create more complex deployments. Moreover, it is flexible, and it enables a simple patching mechanism for additional customization.

To deploy the Fury distribution, use the following root kustomization.yaml located at manifests/kustomization.yaml:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ingress
- logging
- monitoring

This kustomization.yaml wraps other kustomization.yaml files present in each module subfolder. For example in /demo/manifests/logging/kustomization.yaml you'll find:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../vendor/katalog/logging/cerebro
- ../../vendor/katalog/logging/logging-operator
- ../../vendor/katalog/logging/logging-operated
- ../../vendor/katalog/logging/configs
- ../../vendor/katalog/logging/opensearch-single
- ../../vendor/katalog/logging/opensearch-dashboards
- ../../vendor/katalog/logging/minio-ha

- resources/ingress.yml

patchesStrategicMerge:
- patches/opensearch-resources.yml
- patches/cerebro-resources.yml

Each kustomization.yaml file:

  • references the modules downloaded in the previous section
  • patches the upstream modules that we downloaded with a custom configuration for this environment (e.g. patches/opensearch-resources.yml limits the resources requested by OpenSearch)
  • deploys some additional custom resources not included in the modules (e.g. resources/ingress.yml)
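
If you want to inspect what will be deployed before touching the cluster, you can render the whole set of manifests locally with kustomize. This is an optional preview step and assumes you run it from the repository root, where the vendor folder was downloaded:

# Renders all modules and patches to stdout without applying anything to the cluster
kustomize build manifests/ | less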

Install the modules:

cd manifests/

make apply
# Because some CRDs are created on the first run, you may have to run make apply more than once. Run it until you see no more errors.
make apply
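
You can then watch the distribution's pods come up across all namespaces; wait until everything is Running (or Completed) before moving on:

# Lists the pods of every namespace; re-run it until no pod is pending or in error
kubectl get pods -A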

Step 3 - Explore the distribution

🚀 The distribution is finally deployed! In this section, you explore some of its features.

Setup local DNS

In Step 2, alongside the distribution, you deployed Kubernetes ingresses to expose the underlying services at the following HTTP routes:

  • forecastle.fury.info
  • grafana.fury.info
  • opensearch-dashboards.fury.info

To access the ingresses more easily via the browser, configure your local DNS to resolve the ingresses to the external load balancer IP:

  1. Get the address of the external load balancer:
kubectl get svc -n ingress-nginx ingress-nginx -ojsonpath='{.spec.externalIPs[*]}'
  2. Add the following line to your machine's /etc/hosts (not the container's, if you are using the Docker image):
<EXTERNAL_IP> forecastle.fury.info cerebro.fury.info opensearch-dashboards.fury.info grafana.fury.info

Now, you can reach the ingresses directly from your browser.
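
Once /etc/hosts is configured, you can also verify from the command line that the ingress answers before opening the browser:

# Should return the beginning of the Forecastle HTML page served by the NGINX Ingress Controller
curl -s http://forecastle.fury.info | head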

⚠️ We are using an external load balancer only for demo purposes. In a real environment, don't expose dashboards on a public network; prefer an internal load balancer.

Forecastle

Forecastle is an open-source control panel where you can access all exposed applications running on Kubernetes.

Navigate to http://forecastle.fury.info to see all the other ingresses deployed, grouped by namespace.

Forecastle

Grafana

Grafana is an open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on and understand your metrics.

Navigate to http://grafana.fury.info or click the Grafana icon from Forecastle.

Fury provides some pre-configured dashboards to visualize the state of the cluster. Examine an example dashboard:

  1. Click on the search icon on the left sidebar.
  2. Type pods and press enter.
  3. Select the Kubernetes/Pods dashboard.

This is what you should see:

Grafana

OpenSearch Dashboards

OpenSearch Dashboards is an open-source analytics and visualization platform for OpenSearch. OpenSearch Dashboards lets you perform advanced data analysis and visualize data in various charts, tables, and maps. You can use it to search, view, and interact with data stored in OpenSearch indices.

Navigate to http://opensearch-dashboards.fury.info or click the OpenSearch Dashboards icon from Forecastle.

⚠️ Please be aware that some background jobs need to run to finish the OpenSearch configuration. If you get a screen with a "Start by adding your data" title, please wait a few minutes and try again.

Manually Create OpenSearch Dashboards Indexes (optional)

If, when you access OpenSearch Dashboards, you are greeted with the following message:

![opensearch-dashboards-welcome][opensearch-dashboards-welcome]

this means that the indexes have not been created yet. This is expected the first time you deploy the logging stack. We deploy a set of cron jobs that take care of creating them, but they may not have run yet (they run every hour).

You can trigger them manually with the following commands:

kubectl create job -n logging --from cronjob/index-patterns-cronjob manual-indexes
kubectl create job -n logging --from cronjob/ism-policy-cronjob manual-ism-policy

Wait a moment for the jobs to finish and try refreshing the OpenSearch Dashboards page.
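
If you prefer to check from the command line, you can wait for the two jobs created above to complete (the job names are the ones passed to kubectl create job):

# Blocks until both manually triggered jobs report the Complete condition (or the timeout expires)
kubectl wait --for=condition=complete -n logging job/manual-indexes job/manual-ism-policy --timeout=300s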

Discover the logs

To work with the logs arriving into the system, click on the "OpenSearch Dashboards" icon on the main page and then on the "Discover" option, or navigate through the side ("hamburger") menu and select Discover (see image below).

![opensearch-dashboards-discover][opensearch-dashboards-discover]

Opensearch-Dashboards

Follow the next steps to query the logs collected by the logging stack:

![opensearch-dashboards-index][opensearch-dashboards-index]

You can choose between different index options:

  • audit-*: Kubernetes API server audit logs.
  • events-*: Kubernetes events.
  • infra-*: logs from the infrastructural components deployed as part of KFD.
  • ingress-controller-*: logs from the NGINX Ingress Controllers running in the cluster.
  • kubernetes-*: logs from applications running in the cluster that are not part of KFD. Notice that this index will most likely be empty until you deploy an application.
  • systemd-*: logs collected from a selection of systemd services running on the nodes, such as containerd and kubelet.

Once you have selected your desired index, you can search the logs by writing queries in the search box. You can also filter the results by criteria such as pod name, namespace, etc.

Step 4 - Teardown

Clean up the demo environment:

cd ../infrastructure/terraform/
terraform destroy -var-file=variables.tfvars
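
When the destroy finishes, you can double-check that Terraform no longer tracks any resources, and clean up the local kubeconfig generated earlier:

# The state list should be empty after a successful destroy
terraform state list

# Remove the now-useless kubeconfig and unset the variable pointing to it
rm -f kubeconfig && unset KUBECONFIG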

Conclusions

Congratulations, you made it! 🥳🥳

We hope you enjoyed this tour of Fury on OVHcloud!

Issues/Feedback

In case you run into any problems, feel free to open an issue on GitHub.