Deploy your monitoring stack
Fury Kubernetes Monitoring
Our monitoring module makes use of CNCF-recommended, Cloud Native projects such as Prometheus, an open source monitoring and alerting toolkit, and Grafana, an open source tool that provides dashboards for analyzing and visualizing metrics. Prometheus can be extended with a number of exporters and language-specific libraries, so you can have both infrastructural and application monitoring in the same tool. Alertmanager, a component of the Prometheus stack, can be used to configure and dispatch alerts. Thanks to the components in the Fury Kubernetes Monitoring stack, you can have full visibility over your cluster.
The Fury Kubernetes Monitoring module can be deployed on the following platforms:
- on-premises clusters and AWS or unmanaged cloud clusters
- Google Kubernetes Engine (GKE)
- Azure Kubernetes Service (AKS)
Fury Kubernetes Monitoring extends and improves upon the CoreOS Kube Prometheus project.
On Kubernetes, we use the Prometheus Operator to deploy, configure, and manage Prometheus instances and to manage service monitoring and alerts. The core idea of the Operator is to decouple the deployment of Prometheus instances from the configuration of what they monitor. Other components, known as Service Monitors, specify how metrics can be retrieved from a set of services exposing them. The Operator configures the Prometheus instance to monitor all services covered by the included Service Monitors and keeps this configuration synchronized with any changes happening in the cluster.
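As a sketch of how this decoupling works in practice, a minimal Service Monitor manifest might look like the following (the application name, labels, and port name here are hypothetical, not values shipped by the module):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app          # hypothetical name
  namespace: monitoring
  labels:
    team: frontend           # hypothetical label, used by the Prometheus selector
spec:
  selector:
    matchLabels:
      app: example-app       # scrape every Service labeled app=example-app
  endpoints:
    - port: metrics          # name of the Service port exposing /metrics
      interval: 30s
```

The Operator watches for Service Monitor objects like this one and regenerates the Prometheus scrape configuration automatically whenever they change.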
Our monitoring module contains a package to deploy the Prometheus Operator, plus other packages to deploy specific Service Monitors and active Prometheus instances. Configs for rules, alerts, and exporters are also included. Packages with the -operated suffix are deployed via the Operator's CRDs (Custom Resource Definitions, a Kubernetes extension mechanism), so you need the Prometheus Operator up and running before you can deploy them.
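For example, a Prometheus instance deployed through the Operator's Prometheus CRD might resemble this sketch (the service account name and selector labels are assumptions for illustration, not the actual values used by the prometheus-operated package):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus           # hypothetical name
  namespace: monitoring
spec:
  replicas: 1
  serviceAccountName: prometheus   # assumed service account with scrape RBAC
  serviceMonitorSelector:
    matchLabels:
      team: frontend         # pick up Service Monitors carrying this label
  resources:
    requests:
      memory: 400Mi
```

Applying this manifest makes the Operator create and manage the underlying Prometheus StatefulSet for you.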
The following packages are included in the Fury Kubernetes Monitoring katalog. All the resources listed below are deployed in the monitoring namespace of your Kubernetes cluster.
| Package | Description |
|---|---|
| prometheus-operator | Prometheus Operator creates, configures, and manages Prometheus and Alertmanager instances. It also automatically generates monitoring target configurations based on familiar Kubernetes label queries. |
| prometheus-operated | Prometheus instance deployed with Prometheus Operator's CRD |
| alertmanager-operated | Alertmanager instance (handling alerts sent by Prometheus) deployed with Prometheus Operator's CRD |
| grafana | Grafana deployment to query and visualize metrics collected by Prometheus |
| kube-state-metrics | Service Monitor collecting health metrics for Kubernetes objects such as Deployments, Nodes, and Pods |
| node-exporter | Service Monitor for hardware and OS metrics exposed by *NIX kernels |
| metrics-server | Cluster-wide aggregator of resource usage metrics; it collects resource metrics from the kubelet and exposes them via the Metrics API |
| goldpinger | Tool to spot connectivity and slowness issues; it runs as a DaemonSet on Kubernetes and produces Prometheus metrics that can be scraped, visualized, and alerted on |
| kubeadm-sm | Service Monitor plus Prometheus rules and alerts for Kubernetes components of unmanaged/on-premises clusters |
| gke-sm | Service Monitor collecting Kubernetes component metrics for Google Kubernetes Engine (GKE) |
| aks-sm | Service Monitor collecting Kubernetes component metrics for Azure Kubernetes Service (AKS) |
In addition to the components listed above, the Fury Kubernetes Monitoring module monitors:
- Kubernetes apiserver
- Kubernetes controller manager
- Kubernetes scheduler
- Gluster (if installed)
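Alerting rules for these components are defined through the Operator's PrometheusRule CRD. A minimal, hypothetical rule that fires when a node-exporter target disappears could look like this sketch (the rule names, labels, and threshold are illustrative, not the module's actual rules):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alerts       # hypothetical name
  namespace: monitoring
  labels:
    role: alert-rules        # assumed label matched by the Prometheus ruleSelector
spec:
  groups:
    - name: node.rules
      rules:
        - alert: NodeExporterDown
          expr: up{job="node-exporter"} == 0   # target stopped reporting
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "node-exporter target {{ $labels.instance }} is down"
```

Prometheus evaluates the expression continuously and hands firing alerts to Alertmanager, which routes and dispatches them.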
The module also provides:
- Alerts and pagers for your cluster
- Thanos components for monitoring
- LDAP authentication for monitoring dashboards