prometheus-operator / kube-prometheus

Use Prometheus to monitor Kubernetes and applications running on Kubernetes
https://prometheus-operator.dev/
Apache License 2.0

Importing grafana json #115

Closed. maurodelazeri closed this issue 5 years ago.

maurodelazeri commented 5 years ago

Is there anything special that needs to be done to import a Grafana-exported dashboard JSON?

It does not show any error and does not import; if I import it manually through the Grafana UI, it works fine.

cat example.jsonnet

// Reference info: documentation for https://github.com/ksonnet/ksonnet-lib can be found at http://g.bryan.dev.hepti.center
//
local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';  // https://github.com/ksonnet/ksonnet-lib/blob/master/ksonnet.beta.3/k.libsonnet - imports k8s.libsonnet
// * https://github.com/ksonnet/ksonnet-lib/blob/master/ksonnet.beta.3/k8s.libsonnet defines things such as "persistentVolumeClaim:: {"
//
local pvc = k.core.v1.persistentVolumeClaim;  // https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#persistentvolumeclaim-v1-core (defines variable named 'spec' of type 'PersistentVolumeClaimSpec')

local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
  (import 'kube-prometheus/kube-prometheus-kops.libsonnet') +
  (import 'kube-prometheus/kube-prometheus-kops-coredns.libsonnet') + {
    _config+:: {
      namespace: 'monitoring',
      alertmanager+: {
        config: importstr 'alertmanager-config.yaml',
      },
      prometheus+:: {
        namespaces: ['default', 'kube-system', 'kube-node-lease', 'kube-public', 'metallb-system', 'monitoring', 'rook-ceph'],
      },
    },

    prometheus+:: {
      prometheus+: {
        spec+: {  // https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
          // If a value isn't specified for 'retention', then by default the '--storage.tsdb.retention=24h' arg will be passed to prometheus by prometheus-operator.
          // The possible values for a prometheus <duration> are:
          //  * https://github.com/prometheus/common/blob/c7de230/model/time.go#L178 specifies "^([0-9]+)(y|w|d|h|m|s|ms)$" (years weeks days hours minutes seconds milliseconds)
          retention: '30d',

          // Reference info: https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md
          // By default (if the following 'storage.volumeClaimTemplate' isn't created), prometheus will be created with an EmptyDir for the 'prometheus-k8s-db' volume (for the prom tsdb).
          // This 'storage.volumeClaimTemplate' causes the following to be automatically created (via dynamic provisioning) for each prometheus pod:
          //  * PersistentVolumeClaim (and a corresponding PersistentVolume)
          //  * the actual volume (per the StorageClassName specified below)
          storage: {  // https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#storagespec
            volumeClaimTemplate:  // (same link as above where the 'pvc' variable is defined)
              pvc.new() +  // http://g.bryan.dev.hepti.center/core/v1/persistentVolumeClaim/#core.v1.persistentVolumeClaim.new

              pvc.mixin.spec.withAccessModes('ReadWriteOnce') +

              // https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#resourcerequirements-v1-core (defines 'requests'),
              // and https://kubernetes.io/docs/concepts/policy/resource-quotas/#storage-resource-quota (defines 'requests.storage')
              pvc.mixin.spec.resources.withRequests({ storage: '25Gi' }) +

              // A StorageClass of the following name (which can be seen via `kubectl get storageclass` from a node in the given K8s cluster) must exist prior to kube-prometheus being deployed.
              pvc.mixin.spec.withStorageClassName('rook-ceph-block'),

            // The following 'selector' is only needed if you're using manual storage provisioning (https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md#manual-storage-provisioning).
            // And note that this is not supported/allowed by AWS - uncommenting the following 'selector' line (when deploying kube-prometheus to a K8s cluster in AWS) will cause the pvc to be stuck in the Pending status and have the following error:
            //  * 'Failed to provision volume with StorageClass "ssd": claim.Spec.Selector is not supported for dynamic provisioning on AWS'
            //pvc.mixin.spec.selector.withMatchLabels({}),
          },  // storage
        },  // spec
      },  // prometheus
    },  // prometheus

    grafanaDashboards+:: {
      'kafka-zookeeper.json': (import 'kafka-grafana-dashboard.json'),
    },

  };

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

kafka-grafana-dashboard.json --> http://cdn.paste.click/NDAp8S7CwnYiCaZpcRTOdQ

brancz commented 5 years ago

Is the dashboard being rendered into a ConfigMap at all?
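
For example (assuming the 'monitoring' namespace from the jsonnet above), something like:

kubectl -n monitoring get configmaps | grep dashboard

should list it if it was generated.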

maurodelazeri commented 5 years ago

@brancz yes, it creates the ConfigMap just fine, but the dashboard does not get imported into Grafana; I have to do it manually.

brancz commented 5 years ago

Check the Grafana logs; if it sees the dashboard and tries to import it, it logs that at startup.
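
For example (assuming Grafana runs as a Deployment named 'grafana' in the 'monitoring' namespace):

kubectl -n monitoring logs deployment/grafana | grep provisioning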

maurodelazeri commented 5 years ago

It complains about this:

t=2019-06-06T13:51:25+0000 lvl=eror msg="failed to save dashboard" logger=provisioning.dashboard type=file name=0 error="Alert validation error: Data source used by alert rule not found, alertName=Outstanding Requests alert, datasource=${datasource}"

The datasource itself is fine; maybe the problem is the "Data source used by alert rule not found" part?

http://cdn.paste.click/tehLsowOZNicy7y78MdnfA

brancz commented 5 years ago

The problem is the "${datasource}" in the template. That doesn't work with the provisioning API; you need to specify an explicit datasource. That template variable is normally filled in at import time, but since this path doesn't go through the UI import, it has to be specified upfront.
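
A minimal sketch of one way to do that as a jsonnet patch (assumptions: the kube-prometheus Grafana datasource is named 'prometheus', and the exported dashboard keeps its panels in a top-level 'panels' array; older exports nest panels under 'rows', which would need the same substitution there):

local dashboard = import 'kafka-grafana-dashboard.json';

// Replace each panel-level reference to the '${datasource}' template
// variable with an explicit datasource name, so the provisioned alert
// rules can resolve their datasource.
local withExplicitDatasource(d, ds) = d {
  panels: [
    if std.objectHas(p, 'datasource') && p.datasource == '${datasource}'
    then p { datasource: ds }
    else p
    for p in d.panels
  ],
};

{
  grafanaDashboards+:: {
    'kafka-zookeeper.json': withExplicitDatasource(dashboard, 'prometheus'),
  },
}

Alternatively, you can just edit the exported JSON and replace "${datasource}" with the real datasource name before wiring it into grafanaDashboards.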

maurodelazeri commented 5 years ago

@brancz yeah, that solves the problem... thanks