miguelcallejasp / logging-filebeat-containerd

Logging Solution based on Filebeat for ContainerD Clusters

advice on logstash elasticsearch #1

hholst80 opened this issue 2 years ago (status: Open)

hholst80 commented 2 years ago

Hi, thank you for setting up this guide. It seems I have the same use case as you.

I ran into (as expected) additional problems. How do you work around this?

We have no interest in Elasticsearch; it is just an obstacle here.

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.8.0.jar) to fiel
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2022-09-13T12:09:16,168][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2022-09-13T12:09:16,236][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/de
[2022-09-13T12:09:17,416][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.4.0"}
[2022-09-13T12:09:17,451][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"500105c8-e07d-419f-8edc-ffd04d5ea246", :
[2022-09-13T12:09:19,820][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch confi
[2022-09-13T12:09:24,108][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:
[2022-09-13T12:09:24,628][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://
[2022-09-13T12:09:24,777][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::Ho
[2022-09-13T12:09:24,787][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unr
[2022-09-13T12:09:24,835][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failur
[2022-09-13T12:09:25,423][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"L
[2022-09-13T12:09:26,091][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2022-09-13T12:09:30,683][INFO ][logstash.runner          ] Logstash shut down.
Stream closed EOF for logging/logstash-678749dbb8-5qmlb (logstash)
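
For reference, the license and monitoring lookups in that log seem to come from Logstash's X-Pack integration, which assumes an Elasticsearch backend. Something like the ConfigMap below (names, namespace, and output are placeholders of mine, not this repo's manifests) is what I had in mind for running without Elasticsearch: a beats input going to stdout, with X-Pack monitoring switched off.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: logstash-config              # placeholder name
      namespace: logging
    data:
      logstash.yml: |
        http.host: "0.0.0.0"
        xpack.monitoring.enabled: false  # stop the license/monitoring calls to Elasticsearch
      logstash.conf: |
        input {
          beats {
            port => 5044                 # Filebeat ships to this port
          }
        }
        output {
          stdout {
            codec => rubydebug           # swap in whatever destination you actually want
          }
        }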
miguelcallejasp commented 2 years ago

Hi @hholst80, this looks like the Logstash deployment was applied before its configuration. Can you try deleting everything and deploying the following workloads in this order:

The message you are seeing should either appear and Logstash should keep running (it shouldn't stop the pod), or it should not appear at all.
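
To illustrate what "configuration before deployment" means in practice (this is a rough sketch with placeholder names, not a copy of the repo's manifest): the Logstash Deployment mounts its ConfigMap, so that ConfigMap must already exist when the pod starts; otherwise Logstash comes up without a usable pipeline and fails much like the log above.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: logstash
      namespace: logging
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: logstash
      template:
        metadata:
          labels:
            app: logstash
        spec:
          containers:
          - name: logstash
            image: docker.elastic.co/logstash/logstash:7.4.0   # version taken from the log above
            ports:
            - containerPort: 5044                              # beats input from Filebeat
            volumeMounts:
            - name: config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml                            # settings file
            - name: config
              mountPath: /usr/share/logstash/pipeline/logstash.conf
              subPath: logstash.conf                           # pipeline definition
          volumes:
          - name: config
            configMap:
              name: logstash-config                            # apply this ConfigMap first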

hholst80 commented 2 years ago

Sorry, I am confused. Are more than two components needed for the solution? I thought the Filebeat DaemonSet and the Logstash Deployment were sufficient?

miguelcallejasp commented 2 years ago

There are only two components, Filebeat and Logstash, but each one has a ConfigMap in the YAML file. That ConfigMap determines how the deployments start up.
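
As a concrete illustration of that relationship (again a sketch with assumed names, paths, and port, not a copy of the repo's YAML), the Filebeat ConfigMap is what points the DaemonSet at the containerd log files and at Logstash instead of Elasticsearch:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: filebeat-config               # placeholder name
      namespace: logging
    data:
      filebeat.yml: |
        filebeat.inputs:
        - type: container                 # understands the CRI/containerd log format
          paths:
            - /var/log/containers/*.log   # symlinks the kubelet maintains on each node
        output.logstash:
          hosts: ["logstash:5044"]        # the Logstash Service, not Elasticsearch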