Closed: riccardopierpaoli closed this issue 6 years ago.
Please use https://discuss.elastic.co for support questions. On discuss, both community members and developers will try to help.
X-Pack monitoring requires the beat to forward the metrics to the Elasticsearch cluster, which adds additional metadata and some event formatting/processing to the metrics provided. Where the metrics are finally stored depends on your X-Pack Monitoring setup in Elasticsearch.
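In practice that means the monitoring reporter can simply inherit the Elasticsearch output settings. Here is a minimal sketch of the beat-side configuration, reusing the hosts and credentials from the configuration in this issue as placeholders (not validated against 6.2.4):

```yaml
# metricbeat.yml (sketch): let the monitoring reporter inherit the output settings
output.elasticsearch:
  hosts: ["https://127.0.0.1:9200"]   # data cluster
  username: "elastic"
  password: "elastic"

# Enable the reporter, but leave xpack.monitoring.elasticsearch.hosts unset;
# configuring both it and output.elasticsearch.hosts makes the beat exit on
# startup (see the error at the end of this issue).
xpack.monitoring.enabled: true
```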
Docs PR with more details opened here: https://github.com/elastic/beats/pull/7296
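Forwarding those monitoring documents on to a dedicated monitoring cluster is then handled by the X-Pack Monitoring exporter configuration on the production Elasticsearch nodes. A rough sketch, assuming the single-node monitoring cluster on port 9202 described below (the exporter name, credentials, and CA path are placeholders):

```yaml
# elasticsearch.yml on the data nodes (sketch): route monitoring data to the monitoring cluster
xpack.monitoring.exporters:
  monitoring-cluster:
    type: http
    host: ["https://127.0.0.1:9202"]
    auth.username: "elastic"
    auth.password: "elastic"
    ssl.certificate_authorities: ["/path/to/monitoring/ca.pem"]
```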
Hello, I'm new to the Elasticsearch X-Pack plugin and I'm having trouble configuring Metricbeat. My setup consists of two clusters: a monitoring cluster with one node (port 9202) and a data cluster with two nodes (ports 9201 and 9200). I've installed X-Pack on all nodes of both clusters. Both Elasticsearch and Metricbeat are version 6.2.4.
This is my Metricbeat configuration:
```yaml
#=========== Modules configuration ========================
metricbeat.config.modules:
  # Glob pattern for configuration loading
  #path: ${path.config}/modules.d/*.yml
  path: /etc/metricbeat/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  reload.period: 10s

metricbeat.modules:

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  _source.enabled: false

#======================== General ==============================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: metricbeat-test-cluster

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
fields:
  env: aws-test-env

#===================== Dashboards =============================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================ Kibana ========================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
setup.kibana.host: "127.0.0.1:5602"

#======================= Outputs ==========================
# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch.hosts: ["https://127.0.0.1:9200"]
output.elasticsearch.username: "elastic"
output.elasticsearch.password: "elastic"
output.elasticsearch.ssl.enabled: true
output.elasticsearch.ssl.verification_mode: none
output.elasticsearch.ssl.certificate_authorities: ["path/to/node1/pem"]
output.elasticsearch.ssl.certificate: "path/to/node1/pem"
output.elasticsearch.ssl.key: "path/to/node1/pem"

#========================= Logging ========================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
logging.selectors: ["*"]

#==================== Xpack Monitoring ======================
# metricbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
xpack.monitoring.enabled: true

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
xpack.monitoring.elasticsearch.hosts: ["https://127.0.0.1:9202"]
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "elastic"
xpack.monitoring.elasticsearch.ssl.enabled: true
xpack.monitoring.elasticsearch.ssl.verification_mode: none
xpack.monitoring.elasticsearch.ssl.certificate_authorities: ["/path/to/monitoring/pem"]
xpack.monitoring.elasticsearch.ssl.certificate: "/path/to/monitoring/pem"
xpack.monitoring.elasticsearch.ssl.key: "/path/to/monitoring/pem"
```
When I try to start the service with the command `service metricbeat start`, the response is:
```
Starting metricbeat:
2018-06-07T10:43:06.889Z INFO instance/beat.go:468 Home path: [/usr/share/metricbeat] Config path: [/etc/metricbeat] Data path: [/var/lib/metricbeat] Logs path: [/var/log/metricbeat]
2018-06-07T10:43:06.889Z DEBUG [beat] instance/beat.go:495 Beat metadata path: /var/lib/metricbeat/meta.json
2018-06-07T10:43:06.889Z INFO instance/beat.go:475 Beat UUID: ##########
2018-06-07T10:43:06.889Z INFO instance/beat.go:213 Setup Beat: metricbeat; Version: 6.2.4
2018-06-07T10:43:06.889Z DEBUG [beat] instance/beat.go:230 Initializing output plugins
2018-06-07T10:43:06.890Z DEBUG [processors] processors/processor.go:49 Processors:
2018-06-07T10:43:06.890Z INFO elasticsearch/client.go:145 Elasticsearch url: https://127.0.0.1:9200
2018-06-07T10:43:06.890Z INFO pipeline/module.go:76 Beat name: metricbeat-test-cluster
2018-06-07T10:43:06.890Z DEBUG [modules] beater/metricbeat.go:80 Register [ModuleFactory:[docker, mongodb, mysql, postgresql, system, uwsgi], MetricSetFactory:[aerospike/namespace, apache/status, ceph/cluster_disk, ceph/cluster_health, ceph/cluster_status, ceph/monitor_health, ceph/osd_df, ceph/osd_tree, ceph/pool_disk, couchbase/bucket, couchbase/cluster, couchbase/node, docker/container, docker/cpu, docker/diskio, docker/healthcheck, docker/image, docker/info, docker/memory, docker/network, dropwizard/collector, elasticsearch/node, elasticsearch/node_stats, etcd/leader, etcd/self, etcd/store, golang/expvar, golang/heap, graphite/server, haproxy/info, haproxy/stat, http/json, http/server, jolokia/jmx, kafka/consumergroup, kafka/partition, kibana/status, kubernetes/container, kubernetes/event, kubernetes/node, kubernetes/pod, kubernetes/state_container, kubernetes/state_deployment, kubernetes/state_node, kubernetes/state_pod, kubernetes/state_replicaset, kubernetes/system, kubernetes/volume, logstash/node, logstash/node_stats, memcached/stats, mongodb/collstats, mongodb/dbstats, mongodb/status, mysql/status, nginx/stubstatus, php_fpm/pool, postgresql/activity, postgresql/bgwriter, postgresql/database, prometheus/collector, prometheus/stats, rabbitmq/node, rabbitmq/queue, redis/info, redis/keyspace, system/core, system/cpu, system/diskio, system/filesystem, system/fsstat, system/load, system/memory, system/network, system/process, system/process_summary, system/raid, system/socket, system/uptime, uwsgi/status, vsphere/datastore, vsphere/host, vsphere/virtualmachine, zookeeper/mntr]]
Config OK [ OK ]
```
The Metricbeat log after the service starts is:
```
2018-06-07T10:43:06.928Z ERROR instance/beat.go:667 Exiting: 'xpack.monitoring.elasticsearch.hosts' and 'output.elasticsearch.hosts' are configured
```
Q1: Could you help me with this issue?
Q2: Is it possible to store both of Metricbeat's indices, the monitoring index and the metrics index, on the monitoring node? I could successfully store the monitoring index on the monitoring node, but, of course, the metricbeat index won't be seen by Kibana...
Thank you!