prometheus-operator / kube-prometheus

Use Prometheus to monitor Kubernetes and applications running on Kubernetes
https://prometheus-operator.dev/
Apache License 2.0

STATIC ERROR: vendor/node-mixin/alerts/alerts.libsonnet:262:42: unexpected: "{" while parsing field definition #655

Closed: yogeek closed this issue 4 years ago

yogeek commented 4 years ago

What happened?

My installation script below has not changed and used to run successfully.

# Use release-0.4 to be compatible with k8s 1.17.6
export KUBE_PROMETHEUS_RELEASE=release-0.4
etcd_ips="XX.XX.XX.XX YY.YY.YY.YY ZZ.ZZ.ZZ.ZZ"

jb init
jb install github.com/coreos/kube-prometheus/jsonnet/kube-prometheus@${KUBE_PROMETHEUS_RELEASE}
jb install github.com/latchmihay/kube-prometheus-pushgateway/prometheus-pushgateway
set -o pipefail
rm -rf manifests
mkdir -p manifests/setup
jsonnet --ext-str k8s-domain="${K8S_DOMAIN}" --ext-str etcd-ips="${etcd_ips}" -J vendor -m manifests ../prom.jsonnet | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml; rm -f {}' -- {}

But now when I execute it, I get the following error:

GET https://github.com/coreos/kube-prometheus/archive/4e7440f742df31cd6da188f52ddc4e4037b81599.tar.gz 200
GET https://github.com/prometheus/node_exporter/archive/66fb6762bfca60c4d633b7b0839a39f7e143ed33.tar.gz 200
GET https://github.com/ksonnet/ksonnet-lib/archive/0d2f82676817bbf9e4acf6495b2090205f323b9f.tar.gz 200
GET https://github.com/kubernetes-monitoring/kubernetes-mixin/archive/f0a5099c8214241842b827a08eba6d7b515550ea.tar.gz 200
GET https://github.com/brancz/kubernetes-grafana/archive/18c50c83ea49291b0aa00067e4b2b386556ba0e3.tar.gz 200
GET https://github.com/coreos/prometheus-operator/archive/8b9d024467383d84b55d7e5e0d4f7a33eb5007b3.tar.gz 200
GET https://github.com/coreos/etcd/archive/facd0c946025f07ed8c1ba7d2bb2d80baa17c194.tar.gz 200
GET https://github.com/prometheus/prometheus/archive/d668a7efe3107dbdcc67bf4e9f12430ed8e2b396.tar.gz 200
GET https://github.com/grafana/grafonnet-lib/archive/41ed8c0c53047ff9ddb2ae7f2f3f5f51d7926b97.tar.gz 200
GET https://github.com/grafana/jsonnet-libs/archive/2bead07b1497283ce4e3b3fd7f2c0b141a973a13.tar.gz 200
GET https://github.com/kubernetes-monitoring/kubernetes-mixin/archive/b1005adad5940eee9366272ab4c85cf077e547c2.tar.gz 200
GET https://github.com/metalmatze/slo-libsonnet/archive/e238df4fac957357d78a405966c51523ef151cbc.tar.gz 200
GET https://github.com/latchmihay/kube-prometheus-pushgateway/archive/009b835f7344219ceab7f40d8af810d2cf5b3ee8.tar.gz 200
ok
STATIC ERROR: vendor/node-mixin/alerts/alerts.libsonnet:262:42: unexpected: "{" while parsing field definition
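
The STATIC ERROR points at a file vendored by jb, so the region it complains about can be inspected directly. A minimal sketch (the line range is just a window around the reported position 262:42):

# Print the vendored node-mixin lines around the reported error position
sed -n '255,280p' vendor/node-mixin/alerts/alerts.libsonnet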

Environment

- v0.14.0
- v0.35.1
- kubectl version:
  Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.6", GitCommit:"d32e40e20d167e103faf894261614c5b45c44198", GitTreeState:"clean", BuildDate:"2020-05-20T13:08:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- kubeadm

Here is my prom.jsonnet file

##########################################################################
#
# Jsonnet custom file to compile kube-prometheus manifests
# https://github.com/coreos/kube-prometheus#customizing-kube-prometheus
#
##########################################################################

# Used to modify default 'prometheus-clusterRole.yaml'
# cf. https://github.com/coreos/kube-prometheus/issues/483#issuecomment-610427646
# cf. https://github.com/coreos/kube-prometheus/issues/492
local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
local clusterRole = k.rbac.v1.clusterRole;
local policyRule = clusterRole.rulesType;
local extra_cluster_role_resources = policyRule.new() +
                                     policyRule.withApiGroups(['']) +
                                     policyRule.withResources(['services','pods','endpoints']) +
                                     policyRule.withVerbs(['get','list','watch']);
local pvc = k.core.v1.persistentVolumeClaim;

local kp =
  (import 'kube-prometheus/kube-prometheus.libsonnet') +

  // kubeadm-specific mixins to create control-plane services to be scraped by the corresponding service monitors
  // cf. https://github.com/coreos/kube-prometheus#cluster-creation-tools
  (import 'kube-prometheus/kube-prometheus-kubeadm.libsonnet') +

  // prom push gateway
  // cf. https://github.com/latchmihay/kube-prometheus/blob/docsPrometheusPushGateway/docs/prometheus-pushgateway.md
  (import 'prometheus-pushgateway/pushgateway.libsonnet') +

  // static-etcd external etcd
  (import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') +

  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/kube-prometheus-anti-affinity.libsonnet') +
  // (import 'kube-prometheus/kube-prometheus-managed-cluster.libsonnet') +
  // (import 'kube-prometheus/kube-prometheus-node-ports.libsonnet') +
  // (import 'kube-prometheus/kube-prometheus-thanos-sidecar.libsonnet') +
  {
    // Global customizations
    _config+:: {
      resources+: {
        'node-exporter': {
          requests: { cpu: '10m', memory: '64Mi' },
          limits: { cpu: '50m', memory: '128Mi' }
        }
      },
      namespace: 'monitoring',
      // Shorten the apiserver (kube-mixin) SLO window (default 30d) to avoid the
      // Prometheus error: ... query processing would load too many samples ...
      SLOs+: {
        apiserver+: {
          days: 3,
        },
      },
      // Reference info: https://github.com/coreos/kube-prometheus/blob/master/README.md#static-etcd-configuration
      etcd+: {
        // Configure this to be the IP(s) to scrape - i.e. your etcd node(s) (use spaces to separate multiple values, matching the std.split below).
        ips: std.split(std.extVar('etcd-ips'),' '),
        clientCA: importstr 'etcd-certs/ca.pem',
        clientKey: importstr 'etcd-certs/etcd-client-key.pem',
        clientCert: importstr 'etcd-certs/etcd-client.pem',
        serverName: 'etcd.' + std.extVar('k8s-domain'),
      },
      # https://github.com/prometheus-operator/prometheus-operator/issues/2636#issuecomment-553483110
      etcd_selector: 'job=~".*etcd.*",grpc_service!="etcdserverpb.Watch"',
    },
    // Prometheus customizations
    prometheus+:: {
      prometheus+: {
        spec+: {
          externalUrl: 'http://prometheus.' + std.extVar('k8s-domain'),
          externalLabels: { cluster: std.extVar('k8s-domain') },
          // Storage
          retention: '30d',
          storage: {
            volumeClaimTemplate:
              pvc.new() +
              pvc.mixin.spec.withAccessModes('ReadWriteOnce') +
              pvc.mixin.spec.resources.withRequests({ storage: '100Gi' })
          },
          // Discover PrometheusRules in all namespaces
          ruleNamespaceSelector: {},
        },
      },
      clusterRole+: {
        // cf. local variable above
        rules+: [extra_cluster_role_resources],
      },
      // Change apiserver service monitor interval, default 30s
      serviceMonitorApiserver+: {
        spec+: {
          endpoints: [
            x { interval: '1m' }
            for x in super.endpoints
          ],
        },
      },
    },
    // AlertManager customizations
    alertmanager+:: {
      alertmanager+: {
        spec+: {
          externalUrl: 'http://alertmanager.' + std.extVar('k8s-domain'),
        },
      },
    },
  };

{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{
  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor is separated so that it can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['prometheus-pushgateway-' + name]: kp.pushgateway[name], for name in std.objectFields(kp.pushgateway) } //+

// Remove Grafana resources (it will be managed independently, by Grafana Operator)
//{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
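
To reproduce the error without the full pipeline, prom.jsonnet can also be compiled on its own. A minimal sketch, assuming the same vendor directory and the etcd-certs/ files sit next to prom.jsonnet as in the script above:

# Compile prom.jsonnet alone (no gojsontoyaml step) just to surface jsonnet errors
jsonnet --ext-str k8s-domain="${K8S_DOMAIN}" --ext-str etcd-ips="${etcd_ips}" -J vendor ../prom.jsonnet > /dev/null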
piyushkumarjiit commented 4 years ago

Hi,

I was running into the same issue and was able to narrow it down to the file vendor/node-mixin/alerts/alerts.libsonnet. There are erroneous single quotes around {{ $labels.device }} on lines 262 and 275. After I removed the extra quotes, it started compiling again. I will test it a little more, but thought of posting in case someone else runs into this issue.

Thank you for the great work.

Regards Piyush

(screenshot of the offending alerts.libsonnet lines attached)
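
For context on why stray quotes produce this particular message: in jsonnet, the extra single quotes end the string early, so the parser treats the following {{ as the start of an object and then fails on the inner {. A minimal illustration of the failure mode (a hypothetical annotation, not the upstream code):

// Minimal illustration (hypothetical annotation, not the upstream code).
// The quotes around {{ $labels.device }} close the string early, so jsonnet
// parses the following "{{ ... }}" as object-composition syntax and reports:
//   STATIC ERROR: ... unexpected: "{" while parsing field definition
{
  annotations: {
    description: 'Filesystem on '{{ $labels.device }}' is almost full.',
  },
}
// Removing the extra quotes restores a single valid string:
//   description: 'Filesystem on {{ $labels.device }} is almost full.',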

simonpasquier commented 4 years ago

This is an issue with the node_exporter repository. https://github.com/prometheus/node_exporter/pull/1823 and/or https://github.com/prometheus/node_exporter/pull/1821 should fix it.

yogeek commented 4 years ago

@simonpasquier ok, thanks. Will the fix be done in release-0.4 too? (I cannot use master because of my k8s version.)

simonpasquier commented 4 years ago

@yogeek AFAICT there's nothing to be fixed in either release-0.4 or master, because they are both pinned to a version of node_exporter that doesn't have the buggy code. I suppose that something in your setup pulls the latest version of node_exporter, so it should work now that https://github.com/prometheus/node_exporter/pull/1823 is merged.
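
For anyone who already has a vendored tree with the broken copy, re-vendoring should pick up the fixed node-mixin. A minimal sketch, assuming the jsonnetfile.json and lock file produced by the install script above:

# Re-resolve and refresh the vendored dependencies so the fixed node-mixin is pulled in
jb update
# or start clean and re-run the install script:
rm -rf vendor jsonnetfile.lock.json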

yogeek commented 4 years ago

Ok thank you all for your help. It is working again now.