Closed clintongormley closed 8 years ago
From @inqueue on October 30, 2015 21:20
I should note this configuration is not any different from versions prior to 2.x. The difference is that Elasticsearch no longer binds to all interfaces by default, which can be very unexpected when upgrading to 2.0, for example. See Breaking Changes in 2.0 for more details on the change.
From @jprante on October 31, 2015 13:56
A note regarding IPv6 address syntax would be helpful, too.
To bind to all IPv6/IPv4 addresses, you can use
network.bind_host: "0"
network.bind_host: "::"
To bind to IPv4 loopback (localhost) only, you can use
network.bind_host: "0.0.0.0"
together with JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses"
To bind to IPv6 loopback (localhost) only, you can use
network.bind_host: "::1"
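To make the wildcard options above concrete, here is a minimal Python sketch of the underlying OS socket behavior (an illustration assuming a dual-stack host, not Elasticsearch code): with IPV6_V6ONLY disabled, a single IPv6 wildcard socket also accepts IPv4 clients via mapped addresses, which is why one "bind to everything" address can cover both families.

```python
import socket

# Sketch (not Elasticsearch code): why an IPv6 wildcard bind usually
# covers IPv4 too. On dual-stack hosts with IPV6_V6ONLY disabled, an
# IPv6 wildcard socket also accepts IPv4 clients via mapped addresses.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))              # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# A plain IPv4 client can reach the IPv6 wildcard listener.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
conn, addr = srv.accept()
peer = addr[0]                   # typically the mapped form ::ffff:127.0.0.1
conn.close(); cli.close(); srv.close()
```

The JVM (and hence Elasticsearch) sits on top of these same OS semantics, which is what makes the two spellings above behave alike on most systems.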
From @inqueue on October 31, 2015 18:46
Thank you @jprante for your contributions.
To bind to IPv4 loopback (localhost) only, you can use
network.bind_host: "0.0.0.0"
In my observations, this binds to all IPv6/IPv4 interfaces, just like network.bind_host: 0:
bash-3.2$ uname -a
Darwin peanut.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64
bash-3.2$ grep bind_host elasticsearch.yml
network.bind_host: "0.0.0.0"
bash-3.2$ netstat -p tcp -tna | egrep "(9200|9300).*LISTEN"
tcp46 0 0 *.9200 *.* LISTEN
tcp46 0 0 *.9300 *.* LISTEN
bash-3.2$ curl -I http://192.168.5.120:9200
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Content-Length: 0
bash-3.2$ curl -I http://[fe80::a299:9bff:fe11:5f69%en0]:9200
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Content-Length: 0
bash-3.2$ curl -I http://127.0.0.1:9200
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Content-Length: 0
bash-3.2$ curl -I http://[::1]:9200
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Content-Length: 0
From @jprante on October 31, 2015 19:25
You must add JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses" to the ES start script to disable IPv6; then network.bind_host: "0.0.0.0" is IPv4 only (and all IPv6 networking will fail with java.net.SocketException: Protocol family unavailable).
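The failure mode described above has a simple OS-level analogue, sketched below in Python (an illustration, not Elasticsearch or JVM code): a listener bound only to the IPv4 wildcard is simply unreachable over IPv6, so IPv6 clients error out (the JVM surfaces this as "Protocol family unavailable" once preferIPv4Stack disables its IPv6 support).

```python
import socket

# Sketch: a listener bound only to the IPv4 wildcard is not reachable
# over IPv6, so IPv6 clients fail. (With preferIPv4Stack=true the JVM
# reports the corresponding failure as "Protocol family unavailable".)
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))         # IPv4-only wildcard, OS-chosen port
srv.listen(1)
port = srv.getsockname()[1]

ipv6_reachable = True
try:
    cli = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    cli.connect(("::1", port))   # nothing is listening on the v6 loopback
    cli.close()
except OSError:
    ipv6_reachable = False       # refused (or no IPv6 stack at all)
srv.close()
```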
From @inqueue on October 31, 2015 19:29
Ah, just as you mentioned in your previous comment @jprante. Thanks!
From @markwalkom on November 2, 2015 5:59
+1 on this, be great to provide more clarification for users.
Moving this issue to the elasticsearch repo
You must add JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses" to the ES start script to disable IPv6; then network.bind_host: "0.0.0.0" is IPv4 only (and all IPv6 networking will fail with java.net.SocketException: Protocol family unavailable).
I don't think we should encourage this: then things like 'localhost' won't work as the user expects, and that's probably why they want to do this anyway.
This has more to do with mapped addresses than with those settings: there is only the wildcard, and there is only disabling IPv6 completely, so I would let it be. The networking documentation is confusing enough already.
@rmuir My concern is that users use wildcards and forget about IPv6. localhost works fine if there is an entry in /etc/hosts like
127.0.0.1 localhost
::1 localhost
and most OSes provide these defaults. A hint would be helpful so users can correctly restrict ES to bind to localhost, whether they want IPv4, IPv6, or both.
They can forget about IPv6 all they want: out of the box, binding to 0.0.0.0, ::, or even ::FFFF:0.0.0.0 is equivalent: they all bind to the "wildcard" (meaning both v4 and v6) without any confusing -D's.
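The equivalence of these spellings can be checked with Python's standard ipaddress module (an illustration, not Elasticsearch code): ::FFFF:0.0.0.0 is just the IPv4-mapped form of the IPv4 "any" address, and both 0.0.0.0 and :: are the unspecified (wildcard) address of their respective families.

```python
import ipaddress

# "::ffff:0.0.0.0" is the IPv4-mapped spelling of the IPv4 "any" address.
mapped = ipaddress.ip_address("::ffff:0.0.0.0")
v4_any = ipaddress.ip_address("0.0.0.0")
v6_any = ipaddress.ip_address("::")

print(mapped.ipv4_mapped)        # 0.0.0.0
print(v4_any.is_unspecified)     # True
print(v6_any.is_unspecified)     # True
```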
Agree with this. Not only is '0' undocumented at https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html, that page also states: "Currently an elasticsearch node may be bound to multiple addresses, but only publishes one."
However, if I try, e.g.:
- "_local:ipv4_"
- "_non_loopback:ipv4_"
(or similar for network.bind_host) I still get:
[2015-11-12 15:28:30,435][INFO ][org.elasticsearch.http.netty] [hostname] Bound http to address {127.0.0.1:9200}
[2015-11-12 15:28:30,436][INFO ][org.elasticsearch.http ] [hostname] bound_address {127.0.0.1:9200}, publish_address {127.0.0.1:9200}
I think the wording there in the docs is just bad. While internally Elasticsearch works with multiple addresses in 2.0 (because a hostname can resolve to N addresses, interfaces can have N addresses, to work well with dual-stack environments, etc.), you won't be able to pass an array or comma-separated list of addresses until 2.2 (https://github.com/elastic/elasticsearch/pull/13954).
But if you have a hostname in DNS or /etc/hosts and it resolves to multiple A/AAAA addresses, we will bind to all of them.
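What "a hostname can resolve to N addresses" looks like can be seen with Python's getaddrinfo(), which returns every A/AAAA record (a sketch of the resolver behavior, not Elasticsearch code); a server that honors all results, as Elasticsearch 2.0 does, binds to each of them.

```python
import socket

# getaddrinfo() returns one entry per resolved address. With the usual
# /etc/hosts entries, "localhost" yields both 127.0.0.1 and ::1, and a
# server honoring every result binds to all of them.
infos = socket.getaddrinfo("localhost", 9200, proto=socket.IPPROTO_TCP)
addrs = sorted({info[4][0] for info in infos})
print(addrs)    # e.g. ['127.0.0.1', '::1'] on a typical dual-stack host
```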
Yes, but isn't bad wording in the docs exactly what this ticket is about?
Given that the default behavior has changed in 2.0, though, the bad wording in the docs is significant, and I think it makes sense to explicitly document how to restore the expected behavior (and clarify that specifying multiple values, whether explicit addresses or "special" aliases, is not yet supported in 2.0).
I don't think it's just bad wording in the docs. The configuration API itself is not good: stuff like _non_loopback is bad too (and is removed in 3.0), because it's totally arbitrary and depends on the order of interfaces. It's also confusing that the documentation immediately jumps into expert topics like bind vs. publish, which 99% of people should never care about unless they have a special proxy or static NAT configuration; that stuff just confuses everyone completely.
To me all that is important is that the user makes an active decision to change network.host before exposing themselves to the world. I think the documentation should center around that, and all the other parameters should be treated as expert settings, but yeah, it needs to be rewritten completely IMO.
I am still getting the error after changing the recommended settings. I am facing this issue with Elasticsearch release 2.1.0. Please help.
[2015-11-29 11:55:55,866][WARN ][transport.netty ] [Node1] exception caught on transport layer [[id: 0xfa429912]], closing connection
java.net.SocketException: Protocol family unavailable
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:435)
    at sun.nio.ch.Net.connect(Net.java:427)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:643)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
    at org.jboss.netty.channel.Channels.connect(Channels.java:634)
    at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:216)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:913)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:880)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:852)
    at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:250)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:395)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2015-11-29 11:55:57,361][WARN ][transport.netty ] [Node1] exception caught on transport layer [[id: 0xe44a1084]], closing connection
java.net.SocketException: Protocol family unavailable
    (identical stack trace)
@shashidharrao please join us at https://discuss.elastic.co/ or at #elasticsearch on Freenode for troubleshooting help or general questions. We reserve GitHub for confirmed bugs and feature requests :)
+1, I'd also like to know how to bind to multiple, or all, addresses with Elasticsearch 2.1.
Setting network.bind_host: "0.0.0.0" is not working; even worse, it binds to IPv6.
tcp6 0 0 :::9320 :::* LISTEN 682/java
tcp6 0 0 :::9200 :::* LISTEN 680/java
tcp6 0 0 :::9300 :::* LISTEN 680/java
tcp6 0 0 :::9210 :::* LISTEN 684/java
tcp6 0 0 :::9310 :::* LISTEN 684/java
tcp6 0 0 :::9220 :::* LISTEN 682/java
this is very unexpected :-/
The network docs have been greatly improved in #15360
Worked on my AWS Linux Machine
cluster.name: myES_Cluster
node.name: ESNODE_CYR
node.master: true
node.data: true
transport.host: localhost
transport.tcp.port: 9300
http.port: 9200
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 2
I tried this in elasticsearch.yml (key: value) and it worked fine for me. But it took 2 days to fix :wink: :slight_smile:; working through the ES docs is so tough.
From @inqueue on October 30, 2015 20:42
Can we add documentation to Network Settings that illustrates how to configure binding Elasticsearch to all network interfaces; i.e., using a wildcard?
elasticsearch.yml:
network.bind_host: 0
All interfaces are listening:
We do not want to necessarily encourage this configuration, however, it would be very helpful to document how to do it.
As a side note, configuring network.bind_host: 0.0.0.0, or 0.0.0, or 0.0 also works, which is just unexpected. It seems just documenting a lone 0 value would suffice.

Copied from original issue: elastic/docs#48
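The reason the shortened spellings behave like 0.0.0.0 is most likely the classic BSD inet_aton() shorthand (the final part fills all remaining bytes, so every all-zero spelling parses to INADDR_ANY), which Java's address parsing also accepts. A quick check with Python's wrapper around the same C routine (an illustration, not Elasticsearch code):

```python
import socket

# inet_aton() accepts the abbreviated dotted forms: the final part fills
# the remaining bytes, so every all-zero spelling parses to INADDR_ANY.
for form in ("0", "0.0", "0.0.0", "0.0.0.0"):
    assert socket.inet_aton(form) == b"\x00\x00\x00\x00"
```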