Closed Idam7961 closed 10 months ago
172.22.0.9 is the Fleet container.
fleet-compose.yml:
version: '3.7'

services:
  fleet-server:
    build:
      context: extensions/fleet/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - fleet-server:/usr/share/elastic-agent/state:Z
      # (!) TLS certificates. Generate using the 'tls' service.
      - ./tls/certs/ca/ca.crt:/usr/share/elastic-agent/ca.crt:ro,z
      - ./tls/certs/fleet-server/fleet-server.crt:/usr/share/elastic-agent/fleet-server.crt:ro,Z
      - ./tls/certs/fleet-server/fleet-server.key:/usr/share/elastic-agent/fleet-server.key:ro,Z
    environment:
      FLEET_SERVER_ENABLE: '1'
      FLEET_SERVER_HOST: 0.0.0.0
      FLEET_SERVER_POLICY_ID: fleet-server-policy
      FLEET_URL: https://fleet-server:8220
      FLEET_SERVER_CERT: /usr/share/elastic-agent/fleet-server.crt
      FLEET_SERVER_CERT_KEY: /usr/share/elastic-agent/fleet-server.key
      ELASTICSEARCH_HOST: https://elasticsearch:9200
      ELASTICSEARCH_CA: /usr/share/elastic-agent/ca.crt
      FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT: xxxxxx
      # Fleet plugin in Kibana
      KIBANA_FLEET_SETUP: '1'
      # Enrollment.
      # (a) Auto-enroll using basic authentication
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: ${ELASTIC_PASSWORD:-}
      # (b) Enroll using a pre-generated service token
      #FLEET_SERVER_SERVICE_TOKEN: <service_token>
    ports:
      - 8220:8220
    hostname: fleet-server
    # Elastic Agent does not retry failed connections to Kibana upon the initial enrollment phase.
    restart: on-failure
    networks:
      - elk
    depends_on:
      - elasticsearch
      - kibana

volumes:
  fleet-server:
Thanks for the detailed report 👍
With just a few log lines it is difficult to tell what is going on. Sometimes the reason for something not working is not directly visible inside WARN/ERROR log entries, but instead scattered around the logs.
Are you seeing the Fleet Server inside Kibana?
I noted a few things:
- "plain text" errors could indicate that you didn't rebuild images after switching branches (see README)
- The FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT variable is not necessary. The Elasticsearch CA certificate is already mounted inside the Fleet Server container. Only the kibana.yml file requires this fingerprint.
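For reference, the expected fingerprint value is the SHA-256 fingerprint of the CA certificate, as lowercase hex without colon separators. Below is a hedged sketch of computing it with openssl; the tls/certs/ca/ca.crt path is an assumption based on the docker-elk tls branch layout, and the snippet falls back to a throwaway self-signed certificate so it runs anywhere:

```shell
# Sketch: compute the value for ca_trusted_fingerprint in kibana.yml.
# Assumption: the CA lives at tls/certs/ca/ca.crt, as on the docker-elk tls branch.
ca_crt=tls/certs/ca/ca.crt
if [ ! -f "$ca_crt" ]; then
  # Demo fallback: generate a throwaway self-signed CA so the commands run anywhere.
  ca_crt=$(mktemp -d)/demo-ca.crt
  openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo-ca' \
    -keyout /dev/null -out "$ca_crt" 2>/dev/null
fi
# SHA-256 fingerprint, lowercased, with the colon separators stripped.
fingerprint=$(openssl x509 -fingerprint -sha256 -noout -in "$ca_crt" \
  | cut -d '=' -f 2 | tr -d ':' | tr '[:upper:]' '[:lower:]')
echo "$fingerprint"
```

The resulting 64-character string is what replaces the xxxxxx placeholder in ca_trusted_fingerprint.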
Thanks for the fast reply!
What more logs do you need? Elasticsearch just repeats that whole line for the whole buffer.
I don't see the Fleet Server inside Kibana.
It has already been rebuilt:
user@host:/etc/elastic-stack/docker-elk# docker-compose -f docker-compose.yml -f extensions/metricbeat/metricbeat-compose.yml -f extensions/filebeat/filebeat-compose.yml -f extensions/heartbeat/heartbeat-compose.yml -f extensions/logspout/logspout-compose.yml -f extensions/fleet/fleet-compose.yml build
Building elasticsearch
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 5.632kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
---> b7c0bf7f2e52
Successfully built b7c0bf7f2e52
Successfully tagged docker-elk_elasticsearch:latest
Building metricbeat
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 11.78kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/beats/metricbeat:${ELASTIC_VERSION}
---> 9f5035931f3d
Successfully built 9f5035931f3d
Successfully tagged docker-elk_metricbeat:latest
Building filebeat
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 10.24kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/beats/filebeat:${ELASTIC_VERSION}
---> 7c0012435993
Successfully built 7c0012435993
Successfully tagged docker-elk_filebeat:latest
Building kibana
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 14.85kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/kibana/kibana:${ELASTIC_VERSION}
---> a5ee2c00b338
Successfully built a5ee2c00b338
Successfully tagged docker-elk_kibana:latest
Building fleet-server
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 19.97kB
Step 1/3 : ARG ELASTIC_VERSION
Step 2/3 : FROM docker.elastic.co/beats/elastic-agent:${ELASTIC_VERSION}
---> 69a3e94d378d
Step 3/3 : RUN mkdir state
---> Using cache
---> cf291c5d6d6e
Successfully built cf291c5d6d6e
Successfully tagged docker-elk_fleet-server:latest
Building logstash
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 33.79kB
Step 1/7 : ARG ELASTIC_VERSION
Step 2/7 : FROM docker.elastic.co/logstash/logstash:${ELASTIC_VERSION}
---> d3299608a390
Step 3/7 : USER root
---> Using cache
---> 582d5d80ef66
Step 4/7 : RUN mkdir -p /etc/pki/tls/{certs,private}
---> Using cache
---> 832d1896e9b1
Step 5/7 : ADD ./logstash-beats.crt /etc/pki/tls/certs/logstash-beats.crt
---> Using cache
---> 2f357c084d4e
Step 6/7 : ADD ./logstash-beats.key /etc/pki/tls/private/logstash-beats.key
---> Using cache
---> 3f416862328f
Step 7/7 : USER logstash
---> Using cache
---> 090723fffaaa
Successfully built 090723fffaaa
Successfully tagged docker-elk_logstash:latest
Building logspout
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 7.68kB
Step 1/2 : FROM gliderlabs/logspout:master
# Executing 3 build triggers
---> Using cache
---> Using cache
---> Using cache
---> a0594aaf2145
Step 2/2 : ENV SYSLOG_FORMAT rfc3164
---> Using cache
---> e31afb72e052
Successfully built e31afb72e052
Successfully tagged docker-elk_logspout:latest
Building heartbeat
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 9.728kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/beats/heartbeat:${ELASTIC_VERSION}
---> ea7959d45f2d
Successfully built ea7959d45f2d
Successfully tagged docker-elk_heartbeat:latest
user@host:/etc/elastic-stack/docker-elk# docker-compose -f docker-compose.yml -f extensions/metricbeat/metricbeat-compose.yml -f extensions/filebeat/filebeat-compose.yml -f extensions/heartbeat/heartbeat-compose.yml -f extensions/logspout/logspout-compose.yml -f extensions/fleet/fleet-compose.yml up -d
docker-elk_elasticsearch_1 is up-to-date
docker-elk_kibana_1 is up-to-date
docker-elk_heartbeat_1 is up-to-date
docker-elk_metricbeat_1 is up-to-date
docker-elk_logstash_1 is up-to-date
docker-elk_filebeat_1 is up-to-date
docker-elk_fleet-server_1 is up-to-date
docker-elk_logspout_1 is up-to-date
Ok thanks for confirming about the image builds.
Right now the issue is not obvious to me. I wasn't able to reproduce and the automated tests are still passing. I'll try to dig a bit more during the weekend.
I have a theory.
Could it be that Kibana was originally set up on the main branch, and registered the following Elasticsearch output while bootstrapping the Fleet configuration?
You should be able to check the Elasticsearch URL propagated to Fleet agents on the Settings tab of the Fleet menu, as described in the docs.
I did start with the main branch and have now switched to the tls branch. When I switched to tls I used "git checkout -f tls"; should I have done it a different way?
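As an aside on the "git checkout -f" question: -f does switch branches, but it discards any uncommitted local changes. A hedged alternative is to stash first; the sketch below demonstrates this in a throwaway repository (the tls branch name mirrors docker-elk's, the file name is made up):

```shell
# Self-contained demo in a temporary repo; 'tls' mirrors the docker-elk branch name.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > compose.yml
git add compose.yml
git commit -q -m 'initial commit'
git branch tls                               # second branch, as in docker-elk
echo 'local tweak' >> compose.yml            # an uncommitted local change
git stash push -q -m 'local tweaks'          # save it instead of discarding with -f
git checkout -q tls                          # clean switch, nothing lost
git stash pop -q                             # reapply the local change on tls
grep 'local tweak' compose.yml
```

Either way, images still need a rebuild after switching branches, as noted above.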
Anyway, the current kibana.yml is configured to use HTTPS with Elasticsearch:
user@host:/etc/elastic-stack/docker-elk/kibana/config# cat kibana.yml
---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/main/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.ts
#
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ https://elasticsearch:9200 ]

monitoring.ui.container.elasticsearch.enabled: true
monitoring.ui.container.logstash.enabled: true

## X-Pack security credentials
#
elasticsearch.username: kibana_system
elasticsearch.password: ${KIBANA_SYSTEM_PASSWORD}

##
## TLS configuration
## See instructions from README to enable.
##

## Communications between Kibana and Elasticsearch
## see https://www.elastic.co/guide/en/kibana/current/configuring-tls.html#configuring-tls-kib-es
#
elasticsearch.ssl.certificateAuthorities: [ config/ca.crt ]

## Communications between web browsers and Kibana
## see https://www.elastic.co/guide/en/kibana/current/configuring-tls.html#configuring-tls-browser-kib
#
server.ssl.enabled: true
server.ssl.certificate: config/kibana.crt
server.ssl.key: config/kibana.key

## Encryption keys (optional but highly recommended)
##
## Generate with either
## $ docker container run --rm docker.elastic.co/kibana/kibana:8.6.2 bin/kibana-encryption-keys generate
## $ openssl rand -hex 32
##
## https://www.elastic.co/guide/en/kibana/current/using-kibana-with-security.html
## https://www.elastic.co/guide/en/kibana/current/kibana-encryption-keys.html
#
#xpack.security.encryptionKey:
#xpack.encryptedSavedObjects.encryptionKey:
#xpack.reporting.encryptionKey:

## Fleet
## https://www.elastic.co/guide/en/kibana/current/fleet-settings-kb.html
#
xpack.fleet.agents.fleet_server.hosts: [ https://fleet-server:8220 ]

xpack.fleet.outputs:
  - id: fleet-default-output
    name: default
    type: elasticsearch
    hosts: [ https://elasticsearch:9200 ]
    # Set to output of 'docker-compose up tls'. Example:
    ca_trusted_fingerprint: xxxxxx
    is_default: true
    is_default_monitoring: true

xpack.fleet.packages:
  - name: fleet_server
    version: latest
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
  - name: apm
    version: latest

xpack.fleet.agentPolicies:
  - name: Fleet Server Policy
    id: fleet-server-policy
    description: Static agent policy for Fleet Server
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: fleet_server-1
        package:
          name: fleet_server
      - name: system-1
        package:
          name: system
      - name: elastic_agent-1
        package:
          name: elastic_agent
  - name: Agent Policy APM Server
    id: agent-policy-apm-server
    description: Static agent policy for the APM Server integration
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: system-1
        package:
          name: system
      - name: elastic_agent-1
        package:
          name: elastic_agent
      - name: apm-1
        package:
          name: apm
        # See the APM package manifest for a list of possible inputs.
        # https://github.com/elastic/apm-server/blob/v8.5.0/apmpackage/apm/manifest.yml#L41-L168
        inputs:
          - type: apm
            vars:
              - name: host
                value: 0.0.0.0:8200
              - name: url
                value: https://apm-server:8200
              - name: tls_enabled
                value: true
              - name: tls_certificate
                value: /usr/share/elastic-agent/apm-server.crt
              - name: tls_key
                value: /usr/share/elastic-agent/apm-server.key
Plus, the Elasticsearch URL that is propagated to Fleet agents on the Fleet menu is as follows:
All good, it seems like Kibana did update the output's URL to the correct value.
If your stack does not contain any critical data, would you mind trying a complete reset as follows? (notice the -v flag for "delete volumes")
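The command block from that comment did not survive above; a likely shape, with the long -f file list from the earlier invocations abbreviated and an extra guard so the sketch is a harmless no-op on machines without docker-compose, is:

```shell
# Sketch of a complete reset. The -v flag also deletes named volumes,
# i.e. all indexed Elasticsearch data, so only do this on a disposable stack.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose down -v      # add the same -f file list used for 'up'
  docker-compose up -d        # recreate containers and volumes from scratch
  reset=done
else
  reset=skipped               # no docker-compose on this machine; nothing to do
fi
echo "reset: $reset"
```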
I've made a new VM and deployed it from the get-go, before making my configuration changes as I had on the old VM. Fleet is working and is recognized from the start in the Kibana UI. All I did in the cloned repo files was insert the fingerprint.
I'll continue with my configuration and let you know what breaks Fleet, if anything. Thanks for the help!
Problem description
After setting up the ELK stack with TLS using your docker-compose, I would like to integrate a Fleet Server with the stack.
Following your documentation, I set the fingerprint that I get from running docker-compose up tls in kibana.yml, and also in fleet-compose.yml, since I saw in some other issue that this is perhaps required.
But when the Fleet container is running, the following log message is repeated in the Fleet docker logs.
Extra information
Stack configuration
Docker setup
Container logs
KIBANA
ELASTIC-SEARCH