Kubernetes monitoring for Zabbix with discovery objects.
By default, it works with only two environment variables:
ZABBIX_ENDPOINT
: Zabbix server/proxy where the data will be sent
KUBERNETES_NAME
: Name of your Kubernetes cluster in Zabbix (host)
Before installation, you need to create the zabbix-monitoring namespace in your cluster:
$ kubectl create namespace zabbix-monitoring
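You can confirm the namespace exists before installing:
$ kubectl get namespace zabbix-monitoring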
All Helm options/parameters are available in the ./helm folder of this repository.
To install the chart with the release name zabbix-kubernetes-discovery
from local Helm templates:
$ helm upgrade --install zabbix-kubernetes-discovery \
./helm/zabbix-kubernetes-discovery/ \
--values ./helm/zabbix-kubernetes-discovery/values.yaml \
--namespace zabbix-monitoring \
--set namespace.name="zabbix-monitoring" \
--set environment.ZABBIX_ENDPOINT="zabbix-proxy.example.com" \
--set environment.KUBERNETES_NAME="kubernetes-cluster-example"
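Instead of repeating --set flags, you can keep the overrides in a values file of your own; a minimal sketch (my-values.yaml is a hypothetical file name, and the keys mirror the flags above):
$ cat > my-values.yaml <<EOF
namespace:
  name: "zabbix-monitoring"
environment:
  ZABBIX_ENDPOINT: "zabbix-proxy.example.com"
  KUBERNETES_NAME: "kubernetes-cluster-example"
EOF
$ helm upgrade --install zabbix-kubernetes-discovery \
./helm/zabbix-kubernetes-discovery/ \
--namespace zabbix-monitoring \
--values my-values.yaml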
To install the chart with the release name zabbix-kubernetes-discovery
from the Axians Helm repository:
$ helm repo add acsp https://helm.acsp.io
$ helm upgrade --install zabbix-kubernetes-discovery \
acsp/zabbix-kubernetes-discovery \
--namespace zabbix-monitoring \
--set namespace.name="zabbix-monitoring" \
--set environment.ZABBIX_ENDPOINT="zabbix-proxy.example.com" \
--set environment.KUBERNETES_NAME="kubernetes-cluster-name"
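After either installation method, you can check that the discovery pod is up (pod names are generated by the chart):
$ kubectl get pods -n zabbix-monitoring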
To uninstall/delete the zabbix-kubernetes-discovery
deployment:
$ helm list -n zabbix-monitoring
$ helm delete -n zabbix-monitoring zabbix-kubernetes-discovery
The command removes all the Kubernetes components associated with the chart and deletes the release.
The Zabbix template is located in the ./zabbix/ folder of this repository.
After downloading it, import it into your Zabbix frontend.
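If you prefer to script the import, the Zabbix API exposes configuration.import; a minimal sketch with curl and jq 1.6+ (the Zabbix URL, the API token, the XML format, and the template file name are assumptions — adjust them to your setup):
$ jq -n --rawfile src ./zabbix/template.xml '{
  jsonrpc: "2.0", id: 1, auth: "YOUR_API_TOKEN",
  method: "configuration.import",
  params: {
    format: "xml", source: $src,
    rules: {
      templates:      {createMissing: true, updateExisting: true},
      items:          {createMissing: true, updateExisting: true},
      triggers:       {createMissing: true, updateExisting: true},
      graphs:         {createMissing: true, updateExisting: true},
      discoveryRules: {createMissing: true, updateExisting: true}
    }
  }
}' | curl -s -X POST -H 'Content-Type: application/json-rpc' \
-d @- https://zabbix.example.com/api_jsonrpc.php
Once imported, the template provides the following items, triggers, and graphs: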
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Available replicas
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Current replicas
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Desired replicas
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Ready replicas
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Available replicas nodata
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Current replicas nodata
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Desired replicas nodata
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Ready replicas nodata
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Problem items nodata
Daemonset {#KUBERNETES_DAEMONSET_NAME}: Graph replicas
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Available replicas
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Desired replicas
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Ready replicas
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Available replicas nodata
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Desired replicas nodata
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Ready replicas nodata
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Problem items nodata
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Problem number of replicas
Deployment {#KUBERNETES_DEPLOYMENT_NAME}: Graph replicas
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Available replicas
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Desired replicas
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Ready replicas
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Available replicas nodata
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Desired replicas nodata
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Ready replicas nodata
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Problem items nodata
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Problem number of replicas
Statefulset {#KUBERNETES_STATEFULSET_NAME}: Graph replicas
Cronjob {#KUBERNETES_CRONJOB_NAME}: Job exitcode
Cronjob {#KUBERNETES_CRONJOB_NAME}: Job restart
Cronjob {#KUBERNETES_CRONJOB_NAME}: Job reason
Cronjob {#KUBERNETES_CRONJOB_NAME}: Job exitcode nodata
Cronjob {#KUBERNETES_CRONJOB_NAME}: Job restart nodata
Cronjob {#KUBERNETES_CRONJOB_NAME}: Job reason nodata
Cronjob {#KUBERNETES_CRONJOB_NAME}: Problem items nodata
Cronjob {#KUBERNETES_CRONJOB_NAME}: Problem last job
Cronjob {#KUBERNETES_CRONJOB_NAME}: Graph jobs
Node {#KUBERNETES_NODE_NAME}: Allocatable cpu
Node {#KUBERNETES_NODE_NAME}: Allocatable memory
Node {#KUBERNETES_NODE_NAME}: Allocatable pods
Node {#KUBERNETES_NODE_NAME}: Capacity cpu
Node {#KUBERNETES_NODE_NAME}: Capacity memory
Node {#KUBERNETES_NODE_NAME}: Capacity pods
Node {#KUBERNETES_NODE_NAME}: Current pods
Node {#KUBERNETES_NODE_NAME}: Healthz
Node {#KUBERNETES_NODE_NAME}: Allocatable pods nodata
Node {#KUBERNETES_NODE_NAME}: Capacity pods nodata
Node {#KUBERNETES_NODE_NAME}: Current pods nodata
Node {#KUBERNETES_NODE_NAME}: Problem pods limits warning
Node {#KUBERNETES_NODE_NAME}: Problem pods limits critical
Node {#KUBERNETES_NODE_NAME}: Health nodata
Node {#KUBERNETES_NODE_NAME}: Health problem
Node {#KUBERNETES_NODE_NAME}: Problem items nodata
Node {#KUBERNETES_NODE_NAME}: Graph pods
Volume {#KUBERNETES_PVC_NAME}: Available bytes
Volume {#KUBERNETES_PVC_NAME}: Capacity bytes
Volume {#KUBERNETES_PVC_NAME}: Capacity inodes
Volume {#KUBERNETES_PVC_NAME}: Free inodes
Volume {#KUBERNETES_PVC_NAME}: Used bytes
Volume {#KUBERNETES_PVC_NAME}: Used inodes
Volume {#KUBERNETES_PVC_NAME}: Available bytes nodata
Volume {#KUBERNETES_PVC_NAME}: Capacity bytes nodata
Volume {#KUBERNETES_PVC_NAME}: Capacity inodes nodata
Volume {#KUBERNETES_PVC_NAME}: Consumption bytes critical
Volume {#KUBERNETES_PVC_NAME}: Consumption bytes warning
Volume {#KUBERNETES_PVC_NAME}: Consumption inodes critical
Volume {#KUBERNETES_PVC_NAME}: Consumption inodes warning
Volume {#KUBERNETES_PVC_NAME}: Free inodes nodata
Volume {#KUBERNETES_PVC_NAME}: Used bytes nodata
Volume {#KUBERNETES_PVC_NAME}: Used inodes nodata
Volume {#KUBERNETES_PVC_NAME}: Problem items nodata
Volume {#KUBERNETES_PVC_NAME}: Graph bytes
Volume {#KUBERNETES_PVC_NAME}: Graph inodes
You can build the Docker image manually like this:
$ docker build -t zabbix-kubernetes-discovery .
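A hypothetical local run, assuming the image entrypoint reads the two environment variables and that your kubeconfig is mounted read-only (the in-cluster deployment normally relies on a service account instead):
$ docker run --rm \
-e ZABBIX_ENDPOINT="zabbix-proxy.example.com" \
-e KUBERNETES_NAME="kubernetes-cluster-example" \
-v ~/.kube/config:/root/.kube/config:ro \
zabbix-kubernetes-discovery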
All contributions are welcome! Please fork the repository, create a new branch from main, and then open a pull request.