Closed by mikehaertl 3 years ago
@mikehaertl do you mind turning tracing on in the `logstash` section of your `fig.yml`?

```yaml
logstash:
  environment:
    LOGSTASH_TRACE: true
```
From there, please run again and drop the relevant info either in here or in a gist. I think this issue is similar to what @systemfreund reported in issue #59.
Sure, here's the gist: https://gist.github.com/mikehaertl/954952e09ead26f7baf4
BTW since the config vars `ES_HOST` and `ES_PORT` get overwritten in my local config, I have to reset them to those vars each time I restart the setup (otherwise it would use the wrong IP, as containers get new IPs when they are recreated). Is there a way to avoid that?
@mikehaertl thanks for the gist. Are you manually setting `port => "tcp://172.17.0.82:9200"` in your `logstash.conf`? If you are, that should only be a port: `port => "9200"`.
I'm writing some tests now based on your `fig.yml`. I should have them ready at some point today.
@mikehaertl I think I have your issue sorted out. The problem was that the `ES_HOST` variable I was using is also set by Docker when you have a linked container. That's what @systemfreund's issue #59 found.
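For context, a minimal sketch of the collision (the IP is just the example value from the gist): a container linked with the alias `es` gets Docker-injected variables such as `ES_PORT=tcp://<ip>:9200`, which is why a full URL, not a bare port, ended up in the rendered config:

```shell
# Simulating the environment Docker injects for a link aliased "es".
# These names (ES_PORT, ES_PORT_9200_TCP_ADDR, ...) are the standard
# legacy link variables; the address is only an example value.
ES_PORT="tcp://172.17.0.82:9200"
ES_PORT_9200_TCP_ADDR="172.17.0.82"
ES_PORT_9200_TCP_PORT="9200"

# If the image naively interpolates $ES_PORT into the config template,
# the full URL leaks into logstash.conf:
echo "port => \"$ES_PORT\""   # port => "tcp://172.17.0.82:9200"
```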
In the fix I just made, there are now separate env variables available for the Elasticsearch service:
```sh
# If there is a linked Elasticsearch container, use its host.
# If there isn't a link, use ES_SERVICE_HOST if it is defined.
# Otherwise fall back to 127.0.0.1.
#
function es_service_host() {
    local default_host=${ES_SERVICE_HOST:-127.0.0.1}
    local host=${ES_PORT_9200_TCP_ADDR:-$default_host}
    echo "$host"
}
```
```sh
# If there is a linked Elasticsearch container, use its port.
# If there isn't a link, use ES_SERVICE_PORT if it is defined.
# Otherwise fall back to 9200.
#
function es_service_port() {
    local default_port=${ES_SERVICE_PORT:-9200}
    local port=${ES_PORT_9200_TCP_PORT:-$default_port}
    echo "$port"
}
```
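As a quick illustration of the precedence these functions implement (a sketch; the hostname and IP are hypothetical example values): the linked-container address wins, then the explicit override, then the localhost default.

```shell
# Same fallback chain as es_service_host above.
es_service_host() {
    local default_host=${ES_SERVICE_HOST:-127.0.0.1}
    local host=${ES_PORT_9200_TCP_ADDR:-$default_host}
    echo "$host"
}

unset ES_SERVICE_HOST ES_PORT_9200_TCP_ADDR
es_service_host                      # -> 127.0.0.1 (no link, no override)

ES_SERVICE_HOST=es.internal          # hypothetical hostname override
es_service_host                      # -> es.internal

ES_PORT_9200_TCP_ADDR=172.17.0.82    # a link is present; it wins
es_service_host                      # -> 172.17.0.82
```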
And the Elasticsearch proxy used by Kibana:
```sh
function es_proxy_host() {
    local host=${ES_PROXY_HOST:-'"+window.location.hostname+"'}
    echo "$host"
}

function es_proxy_port() {
    local port=${ES_PROXY_PORT:-9200}
    echo "$port"
}
```
I have also beefed up the test suite to cover the edge cases that are starting to crop up. The implementation is basic for now: there is an embedded test and a linked test. The linked test is based on this issue.
I am going to merge the fix in soon. If you happen to have time, please give the hotfix/elasticsearch-port-fix branch a try.
I'm not at my test machine right now, but just this much: the `ES_*` vars are set because of this setting in the `fig.yml`:

```yaml
links:
  - elastic:es
```
This was taken from your example for linked containers:

```sh
$ docker run -d \
    --link <your_es_container_name>:es \
    -p 9292:9292 \
    pblittle/docker-logstash
```
We could simply remove the `es` alias, if it's not required.
@mikehaertl the `es` alias is used throughout the codebase. It can be renamed (see the second option below), but it will break existing installations of this container.

Unless I'm missing a third option, to fix this issue I am going to have to decide between two potentially breaking changes to resolve the variable name conflict:

1. Rename the `ES_HOST` & `ES_PORT` environment variables to `ES_SERVICE_HOST` & `ES_SERVICE_PORT`, as I did in this PR.
2. Rename the `es` link name that creates the `ES_*` environment variables, as you noted.

The `ES_HOST` & `ES_PORT` placeholders used for config file interpolation will continue to work regardless.

I would love to find a third option that doesn't break any existing implementations.
Sorry if I missed it, but how is this fixed now? I'm now using `pblittle/docker-logstash:0.12.1`.

I still set this in my custom `logstash.conf`:
```conf
output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        bind_host => "ES_HOST"
        port => "ES_PORT"
        index => "apache-logs"
        index_type => "apache"
    }
}
```
I get the same error as before. Afterwards the variables are shown replaced in the logfile:
```conf
output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        bind_host => "172.17.0.11"
        port => "9200"
        index => "apache-logs"
        index_type => "apache"
    }
}
```
Apart from it still not working, I still have the problem that I always have to restore the `ES_HOST` and `ES_PORT` variables in my custom config whenever I restart the machines. So this replacement approach isn't really very user-friendly. Maybe there's a better way?
Since Docker writes a mapping of the IP address to the linked alias into `/etc/hosts`, it is preferred to use that (in this case the hostname `es`) instead of environment variables like `ES_PORT_9200_TCP_ADDR`: while the `/etc/hosts` file gets updated on restart of the linked container, the environment variables won't be updated (see docker/docker#9891).
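Concretely, that would mean pointing the output at the link hostname rather than at interpolated variables. A sketch, assuming the `host` option of the logstash 1.x `elasticsearch` output and the index settings from the config above:

```conf
output {
  elasticsearch {
    # "es" resolves via /etc/hosts, which Docker keeps up to date
    # across restarts of the linked container.
    host => "es"
    port => "9200"
    index => "apache-logs"
    index_type => "apache"
  }
}
```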
@griff, thanks for bringing this issue back to life. Revisiting it was on my todo list. @mikehaertl, sorry to leave you hanging. :smile:
I wish I had more time to follow the Docker issue tracker. I totally missed the issue and related documentation. I would imagine this oversight in my image impacts a lot of folks.
@mikehaertl do you think this approach would solve your issue?
@pblittle No worries - I had no time either :D. I think I've tried the hostname before and it didn't work - but I may be wrong. Will test again when I find time.
Any updates to this issue ? :)
@Tazer, sorry I haven't gotten back to you yet. The short answer is no, I don't have any updates. :) I take it you are experiencing the issue also? Is there anything you can add to the comments above? Any information specific to your deployment would help.
As I want to try out Kibana 4, I've set up a linked container from the official `elasticsearch` image. Here's the `fig.yml`.

The config in `logstash/logstash.conf` is:

Note: I had `ES_HOST` and `ES_PORT` there before and it got replaced with the IP address.

When I try to start that setup with `fig up` I get this error:

The IP address is correct and I can connect to the ES server from inside the logstash container: