deviantony / docker-elk

The Elastic stack (ELK) powered by Docker and Compose.
MIT License

Error - failed version compatibility check with elasticsearch - fleet server on branch tls #957

Closed: Idam7961 closed this issue 10 months ago

Idam7961 commented 10 months ago

Problem description

After setting up the ELK stack with TLS using your docker-compose files, I would like to integrate a Fleet Server with the stack.

Following your documentation, I set the fingerprint in kibana.yml (obtained by running docker-compose up tls), and also in fleet-compose.yml, since another issue suggested it might be required there.
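For reference, the same fingerprint can also be computed manually from the generated CA certificate (a minimal sketch, assuming the tls/certs/ca/ca.crt path that fleet-compose.yml mounts, and that the expected value is the SHA-256 digest as lowercase hex without colons):

user@host:/etc/elastic-stack/docker-elk# openssl x509 -in tls/certs/ca/ca.crt -noout -fingerprint -sha256 \
    | cut -d= -f2 | tr -d ':' | tr '[:upper:]' '[:lower:]'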

But when the Fleet container is running, the following log message is repeated in its Docker logs.

Extra information

{"log.level":"error","@timestamp":"2024-01-25T15:21:35.247Z","log.origin":{"file.name":"coordinator/coordinator.go","file.line":557},"message":"Unit state changed fleet-server-default (STARTING->FAILED): Error - failed version compatibility check with elasticsearch: EOF","log":{"source":"elastic-agent"},"component":{"id":"fleet-server-default","state":"HEALTHY"},"unit":{"id":"fleet-server-default","type":"output","state":"FAILED","old_state":"STARTING"},"ecs.version":"1.6.0"}`

Stack configuration


diff --git a/.env b/.env
index 7a556b7..ad046ae 100644
--- a/.env
+++ b/.env
@@ -7,36 +7,36 @@ ELASTIC_VERSION=8.11.4
 #
 # Superuser role, full access to cluster management and data indices.
 # https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
-ELASTIC_PASSWORD='changeme'
+ELASTIC_PASSWORD='xxxxxxxxx'

 # User 'logstash_internal' (custom)
 #
 # The user Logstash uses to connect and send data to Elasticsearch.
 # https://www.elastic.co/guide/en/logstash/current/ls-security.html
-LOGSTASH_INTERNAL_PASSWORD='changeme'
+LOGSTASH_INTERNAL_PASSWORD='xxxxx'

 # User 'kibana_system' (built-in)
 #
 # The user Kibana uses to connect and communicate with Elasticsearch.
 # https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
-KIBANA_SYSTEM_PASSWORD='changeme'
+KIBANA_SYSTEM_PASSWORD='xxxxxx'

 # Users 'metricbeat_internal', 'filebeat_internal' and 'heartbeat_internal' (custom)
 #
 # The users Beats use to connect and send data to Elasticsearch.
 # https://www.elastic.co/guide/en/beats/metricbeat/current/feature-roles.html
-METRICBEAT_INTERNAL_PASSWORD=''
-FILEBEAT_INTERNAL_PASSWORD=''
-HEARTBEAT_INTERNAL_PASSWORD=''
+METRICBEAT_INTERNAL_PASSWORD='xxxxx'
+FILEBEAT_INTERNAL_PASSWORD='xxxxxx'
+HEARTBEAT_INTERNAL_PASSWORD='xxxxx'
 # User 'monitoring_internal' (custom)
 #
 # The user Metricbeat uses to collect monitoring data from stack components.
 # https://www.elastic.co/guide/en/elasticsearch/reference/current/how-monitoring-works.html
-MONITORING_INTERNAL_PASSWORD=''
+MONITORING_INTERNAL_PASSWORD='xxxxx'

 # User 'beats_system' (built-in)
 #
 # The user the Beats use when storing monitoring information in Elasticsearch.
 # https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
-BEATS_SYSTEM_PASSWORD=''
+BEATS_SYSTEM_PASSWORD='xxxxxxxxxx'
diff --git a/docker-compose.yml b/docker-compose.yml
index 8720347..cc5ddc6 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -84,7 +84,7 @@ services:
       - 9300:9300
     environment:
       node.name: elasticsearch
-      ES_JAVA_OPTS: -Xms512m -Xmx512m
+      ES_JAVA_OPTS: -Xms9000m -Xmx9000m
       # Bootstrap password.
       # Used to initialize the keystore during the initial startup of
       # Elasticsearch. Ignored on subsequent runs.
@@ -112,7 +112,7 @@ services:
       - 50000:50000/udp
       - 9600:9600
     environment:
-      LS_JAVA_OPTS: -Xms256m -Xmx256m
+      LS_JAVA_OPTS: -Xms4000m -Xmx5000m
       LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
     networks:
       - elk
diff --git a/extensions/fleet/fleet-compose.yml b/extensions/fleet/fleet-compose.yml
index 17486ee..9dd101b 100644
--- a/extensions/fleet/fleet-compose.yml
+++ b/extensions/fleet/fleet-compose.yml
@@ -21,6 +21,7 @@ services:
       FLEET_SERVER_CERT_KEY: /usr/share/elastic-agent/fleet-server.key
       ELASTICSEARCH_HOST: https://elasticsearch:9200
       ELASTICSEARCH_CA: /usr/share/elastic-agent/ca.crt
+      FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT: xxxxxx
       # Fleet plugin in Kibana
       KIBANA_FLEET_SETUP: '1'
       # Enrollment.
diff --git a/kibana/config/kibana.yml b/kibana/config/kibana.yml
index 7bbf738..e60bbca 100644
--- a/kibana/config/kibana.yml
+++ b/kibana/config/kibana.yml
@@ -27,7 +27,7 @@ elasticsearch.ssl.certificateAuthorities: [ config/ca.crt ]
 ## Communications between web browsers and Kibana
 ## see https://www.elastic.co/guide/en/kibana/current/configuring-tls.html#configuring-tls-browser-kib
 #
-server.ssl.enabled: false
+server.ssl.enabled: true
 server.ssl.certificate: config/kibana.crt
 server.ssl.key: config/kibana.key

@@ -55,7 +55,7 @@ xpack.fleet.outputs:
     type: elasticsearch
     hosts: [ https://elasticsearch:9200 ]
     # Set to output of 'docker-compose up tls'. Example:
-    #ca_trusted_fingerprint: xxxxxxx
+    ca_trusted_fingerprint: xxxxxx
     is_default: true
     is_default_monitoring: true

diff --git a/logstash/Dockerfile b/logstash/Dockerfile
index bde5808..71ab598 100644
--- a/logstash/Dockerfile
+++ b/logstash/Dockerfile
@@ -3,5 +3,14 @@ ARG ELASTIC_VERSION
 # https://www.docker.elastic.co/
 FROM docker.elastic.co/logstash/logstash:${ELASTIC_VERSION}

+
+# certs/keys for Beats and Lumberjack input
+USER root
+RUN mkdir -p /etc/pki/tls/{certs,private}
+ADD ./logstash-beats.crt /etc/pki/tls/certs/logstash-beats.crt
+ADD ./logstash-beats.key /etc/pki/tls/private/logstash-beats.key
+USER logstash
+
+
 # Add your logstash plugins setup here
 # Example: RUN logstash-plugin install logstash-filter-json
diff --git a/logstash/pipeline/logstash.conf b/logstash/pipeline/logstash.conf
deleted file mode 100644
index 5cb4708..0000000
--- a/logstash/pipeline/logstash.conf
+++ /dev/null
@@ -1,21 +0,0 @@
-input {
-       beats {
-               port => 5044
-       }
-
-       tcp {
-               port => 50000
-       }
-}
-
-## Add your filters / logstash plugins configuration here
-
-output {
-       elasticsearch {
-               hosts => "elasticsearch:9200"
-               user => "logstash_internal"
-               password => "${LOGSTASH_INTERNAL_PASSWORD}"
-               ssl => true
-               cacert => "config/ca.crt"
-       }
-}

Docker setup

Client:
 Version:           24.0.5
 API version:       1.43
 Go version:        go1.20.3
 Git commit:        24.0.5-0ubuntu1~22.04.1
 Built:             Mon Aug 21 19:50:14 2023
 OS/Arch:           linux/amd64
 Context:           default

Server:
 Engine:
  Version:          24.0.5
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.3
  Git commit:       24.0.5-0ubuntu1~22.04.1
  Built:            Mon Aug 21 19:50:14 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.2
  GitCommit:
 runc:
  Version:          1.1.7-0ubuntu1~22.04.1
  GitCommit:
 docker-init:
  Version:          0.19.0
  GitCommit:
docker-compose version 1.29.2, build unknown
docker-py version: 5.0.3
CPython version: 3.10.12
OpenSSL version: OpenSSL 3.0.2 15 Mar 2022

Container logs

KIBANA

kibana_1         | [2024-01-25T15:21:10.671+00:00][WARN ][plugins.licensing] License information could not be obtained from Elasticsearch due to ConnectionError: connect ECONNREFUSED 172.22.0.2:9200 error
kibana_1         | [2024-01-25T15:21:20.718+00:00][ERROR][plugins.security.authentication] License is not available, authentication is not possible.
kibana_1         | [2024-01-25T15:21:20.728+00:00][WARN ][plugins.licensing] License information could not be obtained from Elasticsearch due to ConnectionError: connect ECONNREFUSED 172.22.0.2:9200 error
kibana_1         | [2024-01-25T15:21:21.676+00:00][ERROR][plugins.security.authentication] License is not available, authentication is not possible.
kibana_1         | [2024-01-25T15:21:21.688+00:00][WARN ][plugins.licensing] License information could not be obtained from Elasticsearch due to ConnectionError: connect ECONNREFUSED 172.22.0.2:9200 error
kibana_1         | [2024-01-25T15:21:29.890+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception
kibana_1         |      Root causes:
kibana_1         |              security_exception: unable to authenticate user [kibana_system] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
kibana_1         | [2024-01-25T15:21:31.746+00:00][ERROR][plugins.security.authentication] License is not available, authentication is not possible.
kibana_1         | [2024-01-25T15:21:31.766+00:00][INFO ][plugins.monitoring.monitoring.kibana-monitoring] Starting monitoring stats collection
kibana_1         | [2024-01-25T15:21:31.837+00:00][INFO ][plugins.fleet] Agent policies updated by license change: []
kibana_1         | [2024-01-25T15:21:32.562+00:00][INFO ][status] Kibana is now available (was critical)
kibana_1         | [2024-01-25T15:21:32.667+00:00][INFO ][status] Kibana is now degraded (was available)
kibana_1         | [2024-01-25T15:21:33.058+00:00][INFO ][plugins.fleet] Fleet Usage: {"agents_enabled":true,"agents":{"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"inactive":0,"unenrolled":0,"total_all_statuses":0,"updating":0},"fleet_server":{"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"updating":0,"total_all_statuses":0,"num_host_urls":1}}
kibana_1         | [2024-01-25T15:21:38.528+00:00][INFO ][status] Kibana is now available (was degraded)
kibana_1         | [2024-01-25T15:36:35.605+00:00][INFO ][plugins.fleet] Fleet Usage: {"agents_enabled":true,"agents":{"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"inactive":0,"unenrolled":0,"total_all_statuses":0,"updating":0},"fleet_server":{"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"updating":0,"total_all_statuses":0,"num_host_urls":1}}
kibana_1         | [2024-01-25T15:51:35.690+00:00][INFO ][plugins.fleet] Fleet Usage: {"agents_enabled":true,"agents":{"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"inactive":0,"unenrolled":0,"total_all_statuses":0,"updating":0},"fleet_server":{"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"updating":0,"total_all_statuses":0,"num_host_urls":1}}

ELASTICSEARCH

elasticsearch_1  | {"@timestamp":"2024-01-25T16:14:13.390Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.22.0.2:9200, remoteAddress=/172.22.0.9:42440}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][transport_worker][T#4]","log.logger":"org.elasticsearch.http.netty4.Netty4HttpServerTransport","elasticsearch.cluster.uuid":"fJcSqqO5SqyHacd0EqrkUw","elasticsearch.node.id":"nO4mwj7GT1ibYUBl2h7LZQ","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
elasticsearch_1  | {"@timestamp":"2024-01-25T16:14:15.393Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.22.0.2:9200, remoteAddress=/172.22.0.9:36918}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][transport_worker][T#6]","log.logger":"org.elasticsearch.http.netty4.Netty4HttpServerTransport","elasticsearch.cluster.uuid":"fJcSqqO5SqyHacd0EqrkUw","elasticsearch.node.id":"nO4mwj7GT1ibYUBl2h7LZQ","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
elasticsearch_1  | {"@timestamp":"2024-01-25T16:14:15.394Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.22.0.2:9200, remoteAddress=/172.22.0.9:36932}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][transport_worker][T#7]","log.logger":"org.elasticsearch.http.netty4.Netty4HttpServerTransport","elasticsearch.cluster.uuid":"fJcSqqO5SqyHacd0EqrkUw","elasticsearch.node.id":"nO4mwj7GT1ibYUBl2h7LZQ","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
elasticsearch_1  | {"@timestamp":"2024-01-25T16:14:15.396Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.22.0.2:9200, remoteAddress=/172.22.0.9:36940}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][transport_worker][T#8]","log.logger":"org.elasticsearch.http.netty4.Netty4HttpServerTransport","elasticsearch.cluster.uuid":"fJcSqqO5SqyHacd0EqrkUw","elasticsearch.node.id":"nO4mwj7GT1ibYUBl2h7LZQ","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
elasticsearch_1  | {"@timestamp":"2024-01-25T16:14:15.397Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.22.0.2:9200, remoteAddress=/172.22.0.9:36948}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][transport_worker][T#9]","log.logger":"org.elasticsearch.http.netty4.Netty4HttpServerTransport","elasticsearch.cluster.uuid":"fJcSqqO5SqyHacd0EqrkUw","elasticsearch.node.id":"nO4mwj7GT1ibYUBl2h7LZQ","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
Idam7961 commented 10 months ago

172.22.0.9 is the Fleet Server container.
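To check whether that container can talk to Elasticsearch over TLS at all, it can be probed directly (a hypothetical check, assuming curl is available inside the elastic-agent image; <password> is a placeholder for the elastic superuser password):

user@host:/etc/elastic-stack/docker-elk# docker-compose -f docker-compose.yml -f extensions/fleet/fleet-compose.yml exec fleet-server \
    curl --cacert /usr/share/elastic-agent/ca.crt -u 'elastic:<password>' https://elasticsearch:9200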

fleet-compose.yml

version: '3.7'

services:
  fleet-server:
    build:
      context: extensions/fleet/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - fleet-server:/usr/share/elastic-agent/state:Z
      # (!) TLS certificates. Generate using the 'tls' service.
      - ./tls/certs/ca/ca.crt:/usr/share/elastic-agent/ca.crt:ro,z
      - ./tls/certs/fleet-server/fleet-server.crt:/usr/share/elastic-agent/fleet-server.crt:ro,Z
      - ./tls/certs/fleet-server/fleet-server.key:/usr/share/elastic-agent/fleet-server.key:ro,Z
    environment:
      FLEET_SERVER_ENABLE: '1'
      FLEET_SERVER_HOST: 0.0.0.0
      FLEET_SERVER_POLICY_ID: fleet-server-policy
      FLEET_URL: https://fleet-server:8220
      FLEET_SERVER_CERT: /usr/share/elastic-agent/fleet-server.crt
      FLEET_SERVER_CERT_KEY: /usr/share/elastic-agent/fleet-server.key
      ELASTICSEARCH_HOST: https://elasticsearch:9200
      ELASTICSEARCH_CA: /usr/share/elastic-agent/ca.crt
      FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT: xxxxxx
      # Fleet plugin in Kibana
      KIBANA_FLEET_SETUP: '1'
      # Enrollment.
      # (a) Auto-enroll using basic authentication
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: ${ELASTIC_PASSWORD:-}
      # (b) Enroll using a pre-generated service token
      #FLEET_SERVER_SERVICE_TOKEN: <service_token>
    ports:
      - 8220:8220
    hostname: fleet-server
    # Elastic Agent does not retry failed connections to Kibana upon the initial enrollment phase.
    restart: on-failure
    networks:
      - elk
    depends_on:
      - elasticsearch
      - kibana

volumes:
  fleet-server:
antoineco commented 10 months ago

Thanks for the detailed report 👍

With just a few log lines it is difficult to tell what is going on. Sometimes the reason for something not working is not directly visible inside WARN/ERROR log entries, but instead scattered around the logs.

Are you seeing the Fleet server inside Kibana?

I noted a few things:

  • "plain text" errors could indicate that you didn't rebuild images after switching branches (see README)
  • The FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT variable is not necessary. The Elasticsearch CA certificate is already mounted inside the Fleet Server container. Only the kibana.yml file requires this fingerprint.

Idam7961 commented 10 months ago

Thanks for the fast reply!

What more logs do you need? Elasticsearch just repeats that same line for the whole buffer.

I don't see the Fleet Server inside Kibana.

It's already rebuilt:

user@host:/etc/elastic-stack/docker-elk# docker-compose -f docker-compose.yml -f extensions/metricbeat/metricbeat-compose.yml -f extensions/filebeat/filebeat-compose.yml -f extensions/heartbeat/heartbeat-compose.yml -f extensions/logspout/logspout-compose.yml -f extensions/fleet/fleet-compose.yml build
Building elasticsearch
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  5.632kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
 ---> b7c0bf7f2e52
Successfully built b7c0bf7f2e52
Successfully tagged docker-elk_elasticsearch:latest
Building metricbeat
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  11.78kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/beats/metricbeat:${ELASTIC_VERSION}
 ---> 9f5035931f3d
Successfully built 9f5035931f3d
Successfully tagged docker-elk_metricbeat:latest
Building filebeat
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  10.24kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/beats/filebeat:${ELASTIC_VERSION}
 ---> 7c0012435993
Successfully built 7c0012435993
Successfully tagged docker-elk_filebeat:latest
Building kibana
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  14.85kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/kibana/kibana:${ELASTIC_VERSION}
 ---> a5ee2c00b338
Successfully built a5ee2c00b338
Successfully tagged docker-elk_kibana:latest
Building fleet-server
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  19.97kB
Step 1/3 : ARG ELASTIC_VERSION
Step 2/3 : FROM docker.elastic.co/beats/elastic-agent:${ELASTIC_VERSION}
 ---> 69a3e94d378d
Step 3/3 : RUN mkdir state
 ---> Using cache
 ---> cf291c5d6d6e
Successfully built cf291c5d6d6e
Successfully tagged docker-elk_fleet-server:latest
Building logstash
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  33.79kB
Step 1/7 : ARG ELASTIC_VERSION
Step 2/7 : FROM docker.elastic.co/logstash/logstash:${ELASTIC_VERSION}
 ---> d3299608a390
Step 3/7 : USER root
 ---> Using cache
 ---> 582d5d80ef66
Step 4/7 : RUN mkdir -p /etc/pki/tls/{certs,private}
 ---> Using cache
 ---> 832d1896e9b1
Step 5/7 : ADD ./logstash-beats.crt /etc/pki/tls/certs/logstash-beats.crt
 ---> Using cache
 ---> 2f357c084d4e
Step 6/7 : ADD ./logstash-beats.key /etc/pki/tls/private/logstash-beats.key
 ---> Using cache
 ---> 3f416862328f
Step 7/7 : USER logstash
 ---> Using cache
 ---> 090723fffaaa
Successfully built 090723fffaaa
Successfully tagged docker-elk_logstash:latest
Building logspout
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon   7.68kB
Step 1/2 : FROM gliderlabs/logspout:master
# Executing 3 build triggers
 ---> Using cache
 ---> Using cache
 ---> Using cache
 ---> a0594aaf2145
Step 2/2 : ENV SYSLOG_FORMAT rfc3164
 ---> Using cache
 ---> e31afb72e052
Successfully built e31afb72e052
Successfully tagged docker-elk_logspout:latest
Building heartbeat
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  9.728kB
Step 1/2 : ARG ELASTIC_VERSION
Step 2/2 : FROM docker.elastic.co/beats/heartbeat:${ELASTIC_VERSION}
 ---> ea7959d45f2d
Successfully built ea7959d45f2d
Successfully tagged docker-elk_heartbeat:latest
user@host:/etc/elastic-stack/docker-elk# docker-compose -f docker-compose.yml -f extensions/metricbeat/metricbeat-compose.yml -f extensions/filebeat/filebeat-compose.yml -f extensions/heartbeat/heartbeat-compose.yml -f extensions/logspout/logspout-compose.yml -f extensions/fleet/fleet-compose.yml up -d
docker-elk_elasticsearch_1 is up-to-date
docker-elk_kibana_1 is up-to-date
docker-elk_heartbeat_1 is up-to-date
docker-elk_metricbeat_1 is up-to-date
docker-elk_logstash_1 is up-to-date
docker-elk_filebeat_1 is up-to-date
docker-elk_fleet-server_1 is up-to-date
docker-elk_logspout_1 is up-to-date
antoineco commented 10 months ago

OK, thanks for confirming the image builds.

Right now the issue is not obvious to me. I wasn't able to reproduce it, and the automated tests are still passing. I'll try to dig a bit more during the weekend.

antoineco commented 10 months ago

I have a theory.

Could it be that Kibana was originally set up on the main branch, and registered the following Elasticsearch output while bootstrapping the Fleet configuration?

https://github.com/deviantony/docker-elk/blob/eeb8026baf5f9550bf0ac01e63b9b3d16e7b0e0d/kibana/config/kibana.yml#L30-L39

You should be able to check the Elasticsearch URL propagated to Fleet agents on the Settings tab of the Fleet menu, as described in the docs.
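The same outputs can also be inspected through Kibana's Fleet API (a sketch, assuming Kibana is reachable on https://localhost:5601 and serves the stack's self-signed certificate, hence -k; <password> is a placeholder):

user@host:/etc/elastic-stack/docker-elk# curl -sk -u 'elastic:<password>' https://localhost:5601/api/fleet/outputs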

Idam7961 commented 10 months ago

I did start with the main branch and have now switched to the tls branch. When I switched to tls I used "git checkout -f tls". Should I have done it a different way?

Anyway, the current kibana.yml is configured to use HTTPS with Elasticsearch:

user@host:/etc/elastic-stack/docker-elk/kibana/config# cat kibana.yml
---
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/main/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.ts
#
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ https://elasticsearch:9200 ]

monitoring.ui.container.elasticsearch.enabled: true
monitoring.ui.container.logstash.enabled: true

## X-Pack security credentials
#
elasticsearch.username: kibana_system
elasticsearch.password: ${KIBANA_SYSTEM_PASSWORD}

##
## TLS configuration
## See instructions from README to enable.
##

## Communications between Kibana and Elasticsearch
## see https://www.elastic.co/guide/en/kibana/current/configuring-tls.html#configuring-tls-kib-es
#
elasticsearch.ssl.certificateAuthorities: [ config/ca.crt ]

## Communications between web browsers and Kibana
## see https://www.elastic.co/guide/en/kibana/current/configuring-tls.html#configuring-tls-browser-kib
#
server.ssl.enabled: true
server.ssl.certificate: config/kibana.crt
server.ssl.key: config/kibana.key

## Encryption keys (optional but highly recommended)
##
## Generate with either
##  $ docker container run --rm docker.elastic.co/kibana/kibana:8.6.2 bin/kibana-encryption-keys generate
##  $ openssl rand -hex 32
##
## https://www.elastic.co/guide/en/kibana/current/using-kibana-with-security.html
## https://www.elastic.co/guide/en/kibana/current/kibana-encryption-keys.html
#
#xpack.security.encryptionKey:
#xpack.encryptedSavedObjects.encryptionKey:
#xpack.reporting.encryptionKey:

## Fleet
## https://www.elastic.co/guide/en/kibana/current/fleet-settings-kb.html
#
xpack.fleet.agents.fleet_server.hosts: [ https://fleet-server:8220 ]

xpack.fleet.outputs:
  - id: fleet-default-output
    name: default
    type: elasticsearch
    hosts: [ https://elasticsearch:9200 ]
    # Set to output of 'docker-compose up tls'. Example:
    ca_trusted_fingerprint: xxxxxx
    is_default: true
    is_default_monitoring: true

xpack.fleet.packages:
  - name: fleet_server
    version: latest
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
  - name: apm
    version: latest

xpack.fleet.agentPolicies:
  - name: Fleet Server Policy
    id: fleet-server-policy
    description: Static agent policy for Fleet Server
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: fleet_server-1
        package:
          name: fleet_server
      - name: system-1
        package:
          name: system
      - name: elastic_agent-1
        package:
          name: elastic_agent
  - name: Agent Policy APM Server
    id: agent-policy-apm-server
    description: Static agent policy for the APM Server integration
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: system-1
        package:
          name: system
      - name: elastic_agent-1
        package:
          name: elastic_agent
      - name: apm-1
        package:
          name: apm
        # See the APM package manifest for a list of possible inputs.
        # https://github.com/elastic/apm-server/blob/v8.5.0/apmpackage/apm/manifest.yml#L41-L168
        inputs:
          - type: apm
            vars:
              - name: host
                value: 0.0.0.0:8200
              - name: url
                value: https://apm-server:8200
              - name: tls_enabled
                value: true
              - name: tls_certificate
                value: /usr/share/elastic-agent/apm-server.crt
              - name: tls_key
                value: /usr/share/elastic-agent/apm-server.key

Plus, the Elasticsearch URL that is propagated to Fleet agents, as shown on the Fleet Settings page, is as follows:

[screenshot: Fleet settings showing the Elasticsearch output URL]
antoineco commented 10 months ago

All good, it seems like Kibana did update the output's URL to the correct value.

If your stack does not contain any critical data, would you mind trying a complete reset as follows? (notice the -v flag for "delete volumes")

https://github.com/deviantony/docker-elk/blob/2f8d50d9807d74c683c47747ba0b7e4866143bef/.github/workflows/ci.yml#L241-L250
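Roughly, the linked steps amount to the following (a sketch of the CI job, not an exact copy; -v also deletes the named volumes, i.e. all Elasticsearch data and the Fleet Server state):

user@host:/etc/elastic-stack/docker-elk# docker-compose -f docker-compose.yml -f extensions/fleet/fleet-compose.yml down -v
user@host:/etc/elastic-stack/docker-elk# docker-compose -f docker-compose.yml -f extensions/fleet/fleet-compose.yml up -d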

Idam7961 commented 10 months ago

I've set up a new VM and deployed the stack from scratch, before making the same configuration changes to fit my needs as on the old VM. Fleet is working and is recognized in the Kibana UI from the start. All I did to the cloned repo files was insert the fingerprint.

I'll continue with my configuration changes and let you know what breaks Fleet, if anything. Thanks for the help!
ill continue with my configurations and let you know what breaks the fleet if at all. Thanks for the help!