Hi, I'm unable to actually stream any logs. I'm confident my IAM permissions, etc., are set correctly (I was seeing unauthorized errors before updating them).

Below is my Logstash configuration file.

This is the output from my Logstash logs:

[2017-07-18T17:55:34,221][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://host:9200/]}}
[2017-07-18T17:55:34,224][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://host:9200/, :path=>"/"}
[2017-07-18T17:55:34,300][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x6c7da35a>}
[2017-07-18T17:55:34,302][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x490865f>]}
[2017-07-18T17:55:34,304][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-07-18T17:55:34,332][INFO ][logstash.pipeline        ] Pipeline main started
[2017-07-18T17:55:34,393][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

I've tried running Logstash with the --verbose flag, but I'm not seeing any issues or errors. On the AWS side, I can confirm that my credentials were used to access the logs. I've also tried explicitly adding the key_id and secret_key in the cloudwatch_logs block, to no avail (I'm currently using an IAM role associated with the instance).

Any ideas on how to debug this would be appreciated. I tried adding a stdout output to see whether there was an issue with Elasticsearch receiving the logs, but the issue seems to be with Logstash receiving them in the first place.

FWIW, I was using Filebeat previously and had no issues receiving logs, creating the index, and adding the log data to the ES index.
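The original configuration file did not survive in this copy of the post. As a point of reference only, a minimal pipeline of the kind being described — a cloudwatch_logs input feeding Elasticsearch, plus a stdout output for debugging — might look roughly like the sketch below. This assumes the logstash-input-cloudwatch_logs plugin; the log group name, region, and Elasticsearch host are placeholders, not values from the actual setup:

```
input {
  cloudwatch_logs {
    log_group => ["/var/log/example"]  # placeholder log group name
    region    => "us-east-1"           # placeholder region
    # access_key_id / secret_access_key can be omitted when an
    # instance IAM role provides credentials
  }
}

output {
  elasticsearch {
    hosts => ["http://host:9200"]      # placeholder host
  }
  # Temporary stdout output, useful for checking whether events are
  # reaching Logstash at all
  stdout { codec => rubydebug }
}
```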
Thanks!