danielfcastro opened 4 days ago
Does `docker compose stop kibana logstash`, followed by `docker compose rm kibana logstash`, then `docker compose up -d`, solve the issue?
Maybe changes to the `.env` file were not picked up by `docker compose up` for some reason (when it does pick them up, it should show the containers being recreated).
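If Compose does not detect the change on its own, recreation can be forced explicitly. A minimal sketch using standard Compose flags, assuming the default docker-elk service names:

```shell
# Force-recreate the two services so they re-read the variables from .env,
# even if Compose considers their configuration unchanged.
docker compose up -d --force-recreate kibana logstash
```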
No. Same problem
logstash-1 | Using bundled JDK: /usr/share/logstash/jdk
kibana-1 | [2024-10-10T14:54:30.725+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception
kibana-1 | Root causes:
kibana-1 | security_exception: unable to authenticate user [kibana_system] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
kibana-1 | [2024-10-10T14:54:31.602+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/node_modules/@kbn/screenshotting-plugin/chromium/headless_shell-linux_x64/headless_shell
Does `docker compose logs elasticsearch | grep exceed` return anything?
It returns:

WARN[0000] The "ELASTIC_VERSION" variable is not set. Defaulting to a blank string.
WARN[0000] The "ELASTIC_VERSION" variable is not set. Defaulting to a blank string.
WARN[0000] The "ELASTIC_VERSION" variable is not set. Defaulting to a blank string.
WARN[0000] The "ELASTIC_VERSION" variable is not set. Defaulting to a blank string.
elasticsearch-1 | {"@timestamp":"2024-10-10T21:59:43.374Z", "log.level": "INFO", "message":"Authentication of [kibana_system] was terminated by realm [reserved] - failed to authenticate user [kibana_system]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][generic][T#4]","log.logger":"org.elasticsearch.xpack.security.authc.RealmsAuthenticator","trace.id":"a1d2e0eec30690ccc653da106b68ffde","elasticsearch.cluster.uuid":"PgiNMa62TSGz1bRhsTv7eg","elasticsearch.node.id":"UYNSImpIQRm2edS6EN3wVg","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
elasticsearch-1 | {"@timestamp":"2024-10-10T21:59:43.374Z", "log.level": "INFO", "message":"Authentication of [kibana_system] was terminated by realm [reserved] - failed to authenticate user [kibana_system]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][system_critical_read][T#1]","log.logger":"org.elasticsearch.xpack.security.authc.RealmsAuthenticator","trace.id":"a1d2e0eec30690ccc653da106b68ffde","elasticsearch.cluster.uuid":"PgiNMa62TSGz1bRhsTv7eg","elasticsearch.node.id":"UYNSImpIQRm2edS6EN3wVg","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
elasticsearch-1 | {"@timestamp":"2024-10-10T21:59:43.968Z", "log.level": "INFO", "message":"Authentication of [kibana_system] was terminated by realm [reserved] - failed to authenticate user [kibana_system]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch][system_critical_read][T#2]","log.logger":"org.elasticsearch.xpack.security.authc.RealmsAuthenticator","trace.id":"a1d2e0eec30690ccc653da106b68ffde","elasticsearch.cluster.uuid":"PgiNMa62TSGz1bRhsTv7eg","elasticsearch.node.id":"UYNSImpIQRm2edS6EN3wVg","elasticsearch.node.name":"elasticsearch","elasticsearch.cluster.name":"docker-cluster"}
logstash-1 | [2024-10-10T22:04:20,388][INFO ][logstash.runner ] Jackson default value override logstash.jackson.stream-read-constraints.max-number-length configured to 10000
logstash-1 | [2024-10-10T22:04:20,396][INFO ][logstash.settings ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash-1 | [2024-10-10T22:04:20,398][INFO ][logstash.settings ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
kibana-1 | [2024-10-10T22:04:20.760+00:00][WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
logstash-1 | [2024-10-10T22:04:20,893][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"dfd331b7-f62a-4696-93d6-71d32ea230f0", :path=>"/usr/share/logstash/data/uuid"}
logstash-1 | [2024-10-10T22:04:22,195][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash-1 | [2024-10-10T22:04:23,170][INFO ][org.reflections.Reflections] Reflections took 497 ms to scan 1 urls, producing 138 keys and 481 values
logstash-1 | [2024-10-10T22:04:23,857][INFO ][logstash.javapipeline ] Pipeline main is configured with pipeline.ecs_compatibility: v8 setting. All plugins in this pipeline will default to ecs_compatibility => v8 unless explicitly configured otherwise.
kibana-1 | [2024-10-10T22:04:23.866+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
logstash-1 | [2024-10-10T22:04:23,892][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash-1 | [2024-10-10T22:04:24,340][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_internal:xxxxxx@elasticsearch:9200/]}}
logstash-1 | [2024-10-10T22:04:24,426][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash-1 | [2024-10-10T22:04:24,429][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://logstash_internal:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
logstash-1 | [2024-10-10T22:04:24,440][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (data_stream => auto or unset) resolved to true
logstash-1 | [2024-10-10T22:04:24,455][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x2a554278 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
logstash-1 | [2024-10-10T22:04:25,749][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.29}
logstash-1 | [2024-10-10T22:04:25,769][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
logstash-1 | [2024-10-10T22:04:26,297][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash-1 | [2024-10-10T22:04:26,296][INFO ][logstash.inputs.tcp ][main][a6d7e5cd0ebeef12e25e357933adf35c5833775246a6fb5ddb29229d0e7868f8] Starting tcp input listener {:address=>"0.0.0.0:50000", :ssl_enabled=>false}
logstash-1 | [2024-10-10T22:04:26,306][INFO ][org.logstash.beats.Server][main][84a78b7e7eaefaa0e0313503a927de1a007ec091fdd6b2c2558732619c00dc49] Starting server on port: 5044
logstash-1 | [2024-10-10T22:04:26,330][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
kibana-1 | [2024-10-10T22:04:26.628+00:00][INFO ][plugins.securitySolution.endpoint:user-artifact-packager:1.0.0] Registering endpoint:user-artifact-packager task with timeout of [20m], interval of [60s] and policy update batch size of [25]
kibana-1 | [2024-10-10T22:04:26.633+00:00][INFO ][plugins.securitySolution.endpoint:complete-external-response-actions] Registering task [endpoint:complete-external-response-actions] with timeout of [5m] and run interval of [60s]
kibana-1 | [2024-10-10T22:04:29.388+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/node_modules/@kbn/screenshotting-plugin/chromium/headless_shell-linux_x64/headless_shell
logstash-1 | [2024-10-10T22:04:29,509][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused>}
logstash-1 | [2024-10-10T22:04:29,521][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://logstash_internal:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.20.0.2] failed: Connection refused"}
kibana-1 | [2024-10-10T22:04:34.493+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception
kibana-1 | Root causes:
kibana-1 | security_exception: unable to authenticate user [kibana_system] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
logstash-1 | [2024-10-10T22:04:34,599][WARN ][logstash.outputs.elasticsearch][main] Health check failed {:code=>401, :url=>http://elasticsearch:9200/, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:34,599][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://logstash_internal:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:39,661][WARN ][logstash.outputs.elasticsearch][main] Health check failed {:code=>401, :url=>http://elasticsearch:9200/, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:39,661][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://logstash_internal:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:44,722][WARN ][logstash.outputs.elasticsearch][main] Health check failed {:code=>401, :url=>http://elasticsearch:9200/, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:44,723][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://logstash_internal:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:49,783][WARN ][logstash.outputs.elasticsearch][main] Health check failed {:code=>401, :url=>http://elasticsearch:9200/, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:49,785][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://logstash_internal:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:54,846][WARN ][logstash.outputs.elasticsearch][main] Health check failed {:code=>401, :url=>http://elasticsearch:9200/, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash-1 | [2024-10-10T22:04:54,846][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://logstash_internal:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
What happens is: after running setup, logging in with the default `changeme` password works when I run `docker compose up`.
Then I run "Reset passwords for default users", then "Replace usernames and passwords in configuration files", then `docker compose up -d logstash kibana`.
This is a misleading step. I am not sure whether I should run it while the containers from the latest `docker compose up` are still running, or whether I have to bring my containers down first and then run the command.
After this step the services can no longer connect to each other.
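One way to see which password Elasticsearch actually accepts is to authenticate manually. A hedged check, assuming the default docker-elk port mapping (9200 published on localhost) and whatever value you put in `.env` (the `changeme` below is a placeholder):

```shell
# Ask Elasticsearch to authenticate the kibana_system user directly.
# Replace 'changeme' with the value of KIBANA_SYSTEM_PASSWORD from your .env.
curl -i -u "kibana_system:changeme" \
  "http://localhost:9200/_security/_authenticate"
# An HTTP 200 response means the credentials match; an HTTP 401 reproduces
# the "unable to authenticate user [kibana_system]" error from the logs.
```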
When you run `docker compose up setup`, passwords are read from the `.env` file and set in Elasticsearch. The initial password is "changeme" to allow people to try things out quickly and effortlessly, but you can also bring your own passwords and set them in the `.env` file right away, before running `docker compose up setup`.
In your previous message you wrote:
Then I run Replace usernames and passwords in configuration files
There is no password to be replaced in configuration files, only in the `.env` file. Compose passes these values to the various containers via environment variables.
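For reference, the relevant `.env` entries look roughly like this. The variable names below match docker-elk's defaults, but the version number and passwords are placeholders, not values from this thread:

```
ELASTIC_VERSION=8.15.0

## Passwords for stack users
ELASTIC_PASSWORD='changeme'
LOGSTASH_INTERNAL_PASSWORD='changeme'
KIBANA_SYSTEM_PASSWORD='changeme'
```

If this file is missing entirely, Compose falls back to blank strings, which matches the `ELASTIC_VERSION` warnings quoted above.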
Another thing, in a previous message you showed the following warning:
WARN[0000] The "ELASTIC_VERSION" variable is not set. Defaulting to a blank string.
This warning means that you don't have a `.env` file in your project directory at all. This is not going to work. Did you delete it?
In any case, I recommend throwing all containers and data away with `docker compose down -v` (don't forget the `-v`), and restarting with a fresh clone of docker-elk (no local modifications to any default file).
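The fresh-start sequence above, sketched as commands (assuming you clone into the current directory):

```shell
# Remove containers, networks, and named volumes; the -v is what wipes
# the Elasticsearch data and the stored passwords.
docker compose down -v

# Start over from a pristine checkout.
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk

# Initialize users and passwords from .env, then bring the stack up.
docker compose up setup
docker compose up -d
```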
And remember:

- if you change passwords in the `.env` file without using the password tool, `docker compose up setup` must be executed to submit the changes to Elasticsearch.
- if you reset passwords with the tool, you must copy the new passwords into the `.env` file and "up" the ELK stack again.
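When going through the password tool, the flow looks roughly like this. The `elasticsearch-reset-password` utility ships with Elasticsearch 8; the service names assume the default docker-elk compose file:

```shell
# Generate a new password for the kibana_system user inside the
# running Elasticsearch container.
docker compose exec elasticsearch \
  bin/elasticsearch-reset-password --batch --user kibana_system

# Copy the printed password into KIBANA_SYSTEM_PASSWORD in .env, then
# recreate the dependent service so it picks up the new value.
docker compose up -d --force-recreate kibana
```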
Problem description
When I run the Docker Compose file I get an authentication problem that keeps Kibana on "Kibana server is not ready yet." forever.
Extra information
I execute the steps as the README.md describes until I reach
Then the issue happens
Stack configuration
Default configuration
Docker setup
Container logs