elastic / elastic-agent

Elastic Agent - single, unified way to add monitoring for logs, metrics, and other types of data to a host.

Elastic Agent standalone failed to recover from time-out on the k8s API server #416

Open rbrunan opened 2 years ago

rbrunan commented 2 years ago

We are working on the synthetics project and using Elastic Agent to gather all the metrics we need from the service and the cluster. We are on a GKE-managed cluster and are using this configuration for the agent:

```
outputs:
  metrics:
    type: elasticsearch
    hosts:
      - >-
        ${ELASTICSEARCH_METRICS_HOST}
    username: ${ELASTICSEARCH_METRICS_USERNAME}
    password: ${ELASTICSEARCH_METRICS_PASSWORD}
  logs:
    type: elasticsearch
    hosts:
      - >-
        ${ELASTICSEARCH_LOGS_HOST}
    username: ${ELASTICSEARCH_LOGS_USERNAME}
    password: ${ELASTICSEARCH_LOGS_PASSWORD}
  monitoring:
    type: elasticsearch
    hosts:
      - >-
        ${ELASTICSEARCH_MONITORING_HOST}
    username: ${ELASTICSEARCH_MONITORING_USERNAME}
    password: ${ELASTICSEARCH_MONITORING_PASSWORD}
agent:
  monitoring:
    enabled: true
    use_output: monitoring
    logs: true
    metrics: true
  logging:
    level: {{ .Values.agent.log_level | default "info" }}
providers.kubernetes:
  node: ${NODE_NAME}
  scope: node
inputs:
  - name: kubernetes-cluster-metrics
    condition: ${kubernetes_leaderelection.leader} == true
    type: kubernetes/metrics
    use_output: metrics
    meta:
      package:
        name: kubernetes
        version: 0.2.8
    data_stream:
      namespace: default
    streams:
      - data_stream:
          dataset: kubernetes.apiserver
          type: metrics
        metricsets:
          - apiserver
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        hosts:
          - 'https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT}'
        period: 30s
        ssl.certificate_authorities:
          - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      - data_stream:
          dataset: kubernetes.event
          type: metrics
        metricsets:
          - event
        period: 10s
        add_metadata: true
      - data_stream:
          dataset: kubernetes.state_container
          type: metrics
        metricsets:
          - state_container
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_cronjob
          type: metrics
        metricsets:
          - state_cronjob
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_deployment
          type: metrics
        metricsets:
          - state_deployment
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_daemonset
          type: metrics
        metricsets:
          - state_daemonset
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_job
          type: metrics
        metricsets:
          - state_job
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_node
          type: metrics
        metricsets:
          - state_node
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_persistentvolume
          type: metrics
        metricsets:
          - state_persistentvolume
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_persistentvolumeclaim
          type: metrics
        metricsets:
          - state_persistentvolumeclaim
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_pod
          type: metrics
        metricsets:
          - state_pod
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_replicaset
          type: metrics
        metricsets:
          - state_replicaset
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_resourcequota
          type: metrics
        metricsets:
          - state_resourcequota
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_service
          type: metrics
        metricsets:
          - state_service
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_statefulset
          type: metrics
        metricsets:
          - state_statefulset
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
      - data_stream:
          dataset: kubernetes.state_storageclass
          type: metrics
        metricsets:
          - state_storageclass
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
  - name: system-logs
    type: logfile
    use_output: logs
    meta:
      package:
        name: system
        version: 0.10.7
    data_stream:
      namespace: default
    streams:
      - data_stream:
          dataset: system.auth
          type: logs
        paths:
          - /var/log/auth.log*
          - /var/log/secure*
        exclude_files:
          - .gz$
        multiline:
          pattern: ^\s
          match: after
        processors:
          - add_fields:
              target: ''
              fields:
                ecs.version: 1.12.0
      - data_stream:
          dataset: system.syslog
          type: logs
        paths:
          - /var/log/messages*
          - /var/log/syslog*
        exclude_files:
          - .gz$
        multiline:
          pattern: ^\s
          match: after
        processors:
          - add_fields:
              target: ''
              fields:
                ecs.version: 1.12.0
  - name: container-log
    type: container
    use_output: logs
    meta:
      package:
        name: log
        version: 0.4.6
    data_stream:
      namespace: default
    streams:
      - data_stream:
          dataset: generic
        symlinks: true
        paths:
          - /var/log/containers/*${kubernetes.container.id}.log
  - name: heartbeat-log
    type: container
    use_output: metrics
    data_stream.namespace: default
    data_stream.type: metrics
    streams:
      - data_stream:
          dataset: elastic_agent.synthetics_job
        symlinks: true
        include_lines: ['Total metrics']
        fields:
          data_stream.type: metrics
          metricset.name: stats
        fields_under_root: true
        json.keys_under_root: true
        json.add_error_key: true
        json.message_key: message
        json.overwrite_keys: true
        paths:
          - /var/log/containers/*${kubernetes.container.id}.log
        condition: ${kubernetes.namespace} == "synthetics-workload"
        processors:
          - rename:
              fields:
                - from: "monitoring.metrics.beat.info.uptime.ms"
                  to: "beat.stats.uptime.ms"
          - drop_fields:
              fields: ["monitoring"]
  - name: system-metrics
    type: system/metrics
    use_output: metrics
    meta:
      package:
        name: system
        version: 0.10.9
    data_stream:
      namespace: default
    streams:
      - data_stream:
          dataset: system.core
          type: metrics
        metricsets:
          - core
        core.metrics:
          - percentages
      - data_stream:
          dataset: system.cpu
          type: metrics
        period: 10s
        cpu.metrics:
          - percentages
          - normalized_percentages
        metricsets:
          - cpu
      - data_stream:
          dataset: system.diskio
          type: metrics
        period: 10s
        diskio.include_devices: null
        metricsets:
          - diskio
      - data_stream:
          dataset: system.filesystem
          type: metrics
        period: 1m
        metricsets:
          - filesystem
        processors:
          - drop_event.when.regexp:
              system.filesystem.mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)
      - data_stream:
          dataset: system.fsstat
          type: metrics
        period: 1m
        metricsets:
          - fsstat
        processors:
          - drop_event.when.regexp:
              system.fsstat.mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)
      - data_stream:
          dataset: system.load
          type: metrics
        period: 10s
        metricsets:
          - load
      - data_stream:
          dataset: system.memory
          type: metrics
        period: 10s
        metricsets:
          - memory
      - data_stream:
          dataset: system.network
          type: metrics
        period: 10s
        network.interfaces: null
        metricsets:
          - network
      - data_stream:
          dataset: system.process
          type: metrics
        period: 10s
        processes:
          - .*
        process.cgroups.enabled: true
        process.cmdline.cache.enabled: true
        metricsets:
          - process
        process.include_cpu_ticks: false
        process.include_per_cpu: false
        process.include_top_n.enabled: false
        system.hostfs: /hostfs
      - data_stream:
          dataset: system.process_summary
          type: metrics
        period: 10s
        metricsets:
          - process_summary
        system.hostfs: /hostfs
      - data_stream:
          dataset: system.socket_summary
          type: metrics
        period: 10s
        metricsets:
          - socket_summary
        system.hostfs: /hostfs
  - name: kubernetes-node-metrics
    type: kubernetes/metrics
    use_output: metrics
    meta:
      package:
        name: kubernetes
        version: 0.2.8
    data_stream:
      namespace: default
    streams:
      - data_stream:
          dataset: kubernetes.controllermanager
          type: metrics
        metricsets:
          - controllermanager
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        hosts:
          - 'https://127.0.0.1:10257'
        period: 10s
        ssl.verification_mode: none
        condition: ${kubernetes.labels.component} == 'kube-controller-manager'
      - data_stream:
          dataset: kubernetes.scheduler
          type: metrics
        metricsets:
          - scheduler
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        hosts:
          - 'https://127.0.0.1:10259'
        period: 10s
        ssl.verification_mode: none
        condition: ${kubernetes.labels.component} == 'kube-scheduler'
      - data_stream:
          dataset: kubernetes.proxy
          type: metrics
        metricsets:
          - proxy
        hosts:
          - 'localhost:10249'
        period: 10s
      - data_stream:
          dataset: kubernetes.container
          type: metrics
        metricsets:
          - container
        add_metadata: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        hosts:
          - 'https://${env.NODE_NAME}:10250'
        period: 10s
        ssl.verification_mode: none
      - data_stream:
          dataset: kubernetes.node
          type: metrics
        metricsets:
          - node
        add_metadata: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        hosts:
          - 'https://${env.NODE_NAME}:10250'
        period: 10s
        ssl.verification_mode: none
      - data_stream:
          dataset: kubernetes.pod
          type: metrics
        metricsets:
          - pod
        add_metadata: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        hosts:
          - 'https://${env.NODE_NAME}:10250'
        period: 10s
        ssl.verification_mode: none
      - data_stream:
          dataset: kubernetes.system
          type: metrics
        metricsets:
          - system
        add_metadata: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        hosts:
          - 'https://${env.NODE_NAME}:10250'
        period: 10s
        ssl.verification_mode: none
      - data_stream:
          dataset: kubernetes.volume
          type: metrics
        metricsets:
          - volume
        add_metadata: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        hosts:
          - 'https://${env.NODE_NAME}:10250'
        period: 10s
        ssl.verification_mode: none
```
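
The relevant part for this issue is the `kubernetes-cluster-metrics` input: its `condition: ${kubernetes_leaderelection.leader} == true` means that only the agent pod currently holding the `elastic-agent-cluster-leader` lease collects cluster-wide metrics, so if the lease is never re-acquired, those metrics stop for the whole cluster.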

The Elastic Agent works fine, reporting cluster metrics every 30s, until there is an error in the communication with the k8s API server:

```
Apr 24, 2022 @ 06:11:43.159 | E0424 06:11:43.159693       8 leaderelection.go:325] error retrieving resource lock kube-system/elastic-agent-cluster-leader: Get "https://10.253.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/elastic-agent-cluster-leader": context deadline exceeded
Apr 24, 2022 @ 06:11:43.159 | I0424 06:11:43.159726       8 leaderelection.go:278] failed to renew lease kube-system/elastic-agent-cluster-leader: timed out waiting for the condition
Apr 24, 2022 @ 06:11:43.159 | E0424 06:11:43.159762       8 leaderelection.go:301] Failed to release lock: resource name may not be empty
```

After that, the lease stays expired until I restart the former leader pod.
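
(The lease state can be inspected with `kubectl describe lease elastic-agent-cluster-leader -n kube-system`, which shows the current `holderIdentity` and the last `renewTime`.)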

The problem with the API server also affects the cert-manager deployment that we run in the same cluster, but cert-manager recovers the leader lease automatically. This is the behavior we expected from the Elastic Agent.
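
For reference, the usual client-go pattern that gives this self-healing behavior is to re-enter the election after `leaderelection.RunOrDie` returns. A minimal sketch follows; this is not Elastic Agent's actual code. The lease name and namespace are taken from the logs above, and the timing values and callbacks are illustrative:

```
// Minimal client-go leader-election sketch: re-enter the election after
// losing the lease instead of giving up. Not Elastic Agent's actual code;
// lease name/namespace come from the logs above, the rest is illustrative.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // the pod name is a common election identity

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "elastic-agent-cluster-leader",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	// RunOrDie returns once leadership is lost (e.g. a renewal times out
	// against the API server). Looping re-enters the election, so the pod
	// can reacquire the lease later without being restarted.
	for {
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start cluster-scope collection here
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, re-entering election")
				},
			},
		})
	}
}
```

The key point is the outer `for` loop: without it, a process that fails one renewal never contends for the lease again, which looks like what we see with the agent.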

As a workaround, we are trying to upgrade the agent and set up a different deployment for the cluster metrics, as mentioned here.

rbrunan commented 2 years ago

Pinging @elastic/obs-cloudnative-monitoring

ChrsMark commented 2 years ago

cc: @mlunadia @rameshelastic for prioritising this accordingly.