SharpThunder opened 3 years ago
For APM Server, you need to add the monitoring settings to the configuration so that internal collection is used to send monitoring data (see the documentation):
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm
  namespace: elastic-system
spec:
  version: 7.14.0
  count: 1
  elasticsearchRef:
    name: "es-cluster"
  kibanaRef:
    name: "kibana"
  config:
    # enable internal collection to send monitoring data
    monitoring:
      enabled: true
  podTemplate:
    spec:
      containers:
      - name: apm-server
        resources:
          limits:
            memory: 1Gi
            cpu: 1
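Once the updated manifest is applied, you can confirm that monitoring data is actually arriving by looking for the .monitoring-beats-* indices (APM Server reports itself as a Beat, which is also why Kibana's get_apms code reads beats data). A minimal check, assuming the manifest is saved as apm.yaml and using the es-cluster-es-http service and es-cluster-es-elastic-user secret names that ECK derives from the manifests in this issue:

kubectl -n elastic-system apply -f apm.yaml
# read the elastic user password from the secret ECK creates for the cluster
PASSWORD=$(kubectl -n elastic-system get secret es-cluster-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')
kubectl -n elastic-system port-forward service/es-cluster-es-http 9200 &
# the monitoring indices should appear and grow within a minute or two
curl -sk -u "elastic:$PASSWORD" "https://localhost:9200/_cat/indices/.monitoring-beats-*?v"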
Thank you, that solves my issue. However, could you please add a warning to the Elastic Cloud on Kubernetes documentation that either this parameter has to be applied or Metricbeat has to be installed before installing APM Server? It would also be helpful to have a meaningful error for this when debugging.
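Until such an error exists, grepping the APM Server logs is a quick way to see whether internal collection started; a small sketch, assuming the apm-apm-server Deployment name that ECK derives from the manifest above:

# internal collection logs its state at startup; errors here usually explain gaps in Stack Monitoring
kubectl -n elastic-system logs deployment/apm-apm-server | grep -i monitoring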
Bug Report
What did you do?
I tried to install APM Server with Elasticsearch and Kibana.
What did you expect to see?
Working monitoring with internal collection in Stack Monitoring.
What did you see instead? Under which circumstances?
in https://kibana/app/monitoring#/overview?_g=(cluster_uuid:********,inSetupMode:!t,refreshInterval:(pause:!f,value:10000),time:(from:now-15m,to:now))
Environment
ECK version: 1.7.1
Kubernetes information:
{"type":"log","@timestamp":"2021-09-14T16:13:56+00:00","tags":["error","plugins","monitoring","monitoring"],"pid":1208,"message":"TypeError: Cannot destructure property 'beats' of '(intermediate value)(intermediate value)(intermediate value)' as it is undefined.\n at handleResponse (/usr/share/kibana/x-pack/plugins/monitoring/server/lib/apm/get_apms.js:46:5)\n at getApms (/usr/share/kibana/x-pack/plugins/monitoring/server/lib/apm/get_apms.js:181:10)\n at runMicrotasks ()\n at processTicksAndRejections (internal/process/task_queues.js:95:5)\n at async Promise.all (index 1)\n at Object.handler (/usr/share/kibana/x-pack/plugins/monitoring/server/routes/api/v1/apm/instances.js:51:31)\n at handler (/usr/share/kibana/x-pack/plugins/monitoring/server/plugin.js:406:28)\n at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:163:30)\n at handler (/usr/share/kibana/src/core/server/http/router/router.js:124:50)\n at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)\n at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)\n at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:370:32)\n at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:279:9)"}
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
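Before applying the manifests below, it can be worth confirming the operator came up cleanly; a quick check, assuming the default elastic-operator name used by the chart:

kubectl -n elastic-system get pods
# the chart runs the operator as a StatefulSet named elastic-operator
kubectl -n elastic-system logs statefulset/elastic-operator --tail=20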
# This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-cluster
  namespace: elastic-system
spec:
  version: 7.14.0
  volumeClaimDeletePolicy: DeleteOnScaledownOnly
  nodeSets:
  - name: us-east-2a
    count: 1
    config:
      node.attr.zone: us-east-2a
      cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
      node.roles: ["master", "data", "ingest", "ml", "transform"]
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
    podTemplate:
      spec:
        # this changes the kernel setting on the node to allow ES to use mmap
        # (the init container itself was lost in the paste)
    volumeClaimTemplates:
  # the headers of the next two nodeSets were lost in the paste; names and zones are assumed
  - name: us-east-2b
    count: 1
    config:
      node.attr.zone: us-east-2b
      cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
      node.roles: ["master", "data", "ingest", "ml", "transform"]
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
    podTemplate:
      spec:
        # this changes the kernel setting on the node to allow ES to use mmap
    volumeClaimTemplates:
  - name: us-east-2c
    count: 1
    config:
      node.attr.zone: us-east-2c
      cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
      node.roles: ["master", "data", "ingest", "ml", "transform"]
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
    podTemplate:
      spec:
        # this changes the kernel setting on the node to allow ES to use mmap
    volumeClaimTemplates:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic-system
spec:
  version: 7.14.0
  count: 1
  elasticsearchRef:
    name: "es-cluster"
    namespace: elastic-system
  podTemplate:
    spec:
      containers:
      # (the container list was lost in the paste)
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm
  namespace: elastic-system
spec:
  version: 7.14.0
  count: 1
  elasticsearchRef:
    name: "es-cluster"
  kibanaRef:
    name: "kibana"
  podTemplate:
    spec:
      containers:
      # restored from the fixed manifest above; the paste truncated here
      - name: apm-server
        resources:
          limits:
            memory: 1Gi
            cpu: 1
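Once everything reconciles, the overall state of the three resources can be checked in one go via the ECK custom resource kinds (names as in the manifests above):

# HEALTH should read green for all three resources
kubectl -n elastic-system get elasticsearch,kibana,apmserver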
APM Server is recognized by Kibana.
Indices are also created.
Thanks.