Self-Provisioning

Fury Cluster self-provisioning

Self-provisioning a cluster using furyctl is now possible.

This feature has been designed to let you provision a Fury Kubernetes Cluster on top of an already existing network configuration.

If you don't yet have a properly configured network in which to deploy the cluster, the bootstrap phase fills this gap by creating one for you.

Workflow to deploy a cluster from zero

The following workflow describes the complete setup of a cluster from scratch. The bootstrap command creates the underlay requirements to deploy a Kubernetes cluster; most of these components are network-related.

Once the bootstrap process completes, the cluster command can be triggered using the outputs of the bootstrap apply command.

+--------------------------+   +--------------------------+   +--------------------------+   +--------------------------+
| furyctl bootstrap init   +-->+ furyctl bootstrap apply  +-->+ furyctl cluster init     +-->+ furyctl cluster apply    |
+--------------------------+   +--------------------------+   +--------------------------+   +--------------------------+
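In practice, the full flow boils down to four commands. The following is a minimal sketch, assuming the bootstrap.yml and cluster.yml configuration files are already in place in the working directory:

# Phase 1: provision the underlay (network) infrastructure
furyctl bootstrap init
furyctl bootstrap apply

# Phase 2: provision the Kubernetes cluster on top of the bootstrap outputs
furyctl cluster init
furyctl cluster apply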

Workflow to deploy a cluster from an already existing infrastructure

The following workflow describes the setup of a cluster on top of an already existing underlay infrastructure.

+--------------------------+   +--------------------------+
| furyctl cluster init     +-->+ furyctl cluster apply    |
+--------------------------+   +--------------------------+

Provisioners

To deploy all the components, furyctl introduces a new concept: provisioners. These provisioners are terraform projects integrated with the furyctl binary. They can be open (like the EKS cluster provisioner) or enterprise-only (like the AWS bootstrap provisioner; contact sales@sighup.io).

To use an enterprise provisioner, you need to pass a token to the furyctl {bootstrap,cluster} {init,apply,destroy} commands via the --token YOUR_TOKEN flag.

You can also set the FURYCTL_TOKEN environment variable to avoid passing the token on the command line.
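
For example, both of the following are equivalent ways to authenticate against an enterprise provisioner (YOUR_TOKEN is a placeholder for your actual token):

# Pass the token explicitly on each command
furyctl cluster init --token YOUR_TOKEN

# Or export it once and let furyctl read it from the environment
export FURYCTL_TOKEN=YOUR_TOKEN
furyctl cluster init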

Contact sales@sighup.io to get more details about this feature.

Project structure

furyctl makes use of terraform under the hood. This feature creates a local terraform project in a new directory, and the project gets updated with every furyctl {bootstrap,cluster} apply run.

Both bootstrap and cluster subcommands create the following directories:

  • workdir/bin: Used to download the right terraform binary.
  • workdir/logs: Hosts the terraform.logs file. As mentioned before, furyctl uses terraform under the hood; the raw terraform logs are available in this directory.
  • workdir/output: Hosts the output values in the output.json file.
  • workdir/configuration: Hosts the backend and network configuration.
  • workdir/secrets: Hosts a ready-to-use kube-config file and any other credential files.

It is essential to know the location of these files, especially in those scenarios where you need to debug a problem or gather all the outputs of the project.
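
For instance, a minimal debugging session could look like the following sketch (the exact name of the kube-config file inside workdir/secrets is assumed here):

# Inspect the raw terraform logs
tail -n 50 workdir/logs/terraform.logs

# Read the output values produced by the last apply
cat workdir/output/output.json

# Use the generated credentials (file name assumed)
export KUBECONFIG=workdir/secrets/kube-config
kubectl get nodes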

Backend configuration

The furyctl page describes all the attributes of the self-provisioning configuration files. One of those attributes is in charge of configuring the executor and the terraform state file to use.

furyctl can use any backend configuration. The default backend configuration creates a local file in the project directory: workdir/terraform.tfstate. We recommend configuring a backend different from the default one: if that file is lost, the infrastructure can no longer be managed using furyctl.

Instead, if you configure a remote state backend, you only need to back up the configuration files used by furyctl: bootstrap.yml and/or cluster.yml.

Example:

kind: Cluster
metadata:
  name: demo
executor:
  version: 0.12.29
  state:
    backend: s3
    config:
      bucket: "best-bucket-name-ever"
      key: "furyctl/demo/cluster"
      region: "eu-home"
spec: {}
provisioner: demo

With the above configuration, the project state is available in a remote file hosted on an S3 bucket named best-bucket-name-ever, under the object name furyctl/demo/cluster.

Important note: Make sure you use a different executor.state.config.key for bootstrap and cluster.
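
For instance, the bootstrap.yml counterpart of the example above could point to the same bucket with a different key. This is only a sketch: the kind and provisioner values shown here are illustrative, not taken from a real provisioner.

kind: Bootstrap # assumed kind for bootstrap configuration files
metadata:
  name: demo
executor:
  version: 0.12.29
  state:
    backend: s3
    config:
      bucket: "best-bucket-name-ever"
      key: "furyctl/demo/bootstrap"
      region: "eu-home"
spec: {}
provisioner: demo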

Then, the only file you need to take care of is this cluster.yml configuration file.

Additional details

If you want to understand how to integrate more provisioners, read the contributing guidelines.

Discover the available configuration parameters of each provisioner in the upcoming pages.