jurmous / etcd4j

Java / Netty client for etcd, the highly-available key value store for shared configuration and service discovery.
Apache License 2.0

It takes around 3 seconds to process a single request for the etcd put, get, and wait-for-change APIs. The async calls need to be handled in a better way. #148

Closed: suresh-chaudhari closed this issue 7 years ago

suresh-chaudhari commented 7 years ago

We have used this library in our project for many operations, and its performance is not good. Could you please provide some benchmarks for this client against a single etcd node?

lburgazzoli commented 7 years ago

It would be nice to have some code to reproduce the issue. If you have any suggestions on how to improve the async calls, you are very welcome to send a PR.

suresh-chaudhari commented 7 years ago

Please find the logs below:

```
Caused by: io.netty.channel.AbstractChannel$AnnotatedSocketException: Cannot assign requested address: vgjb02hr.dc-dublin.de/47.73.60.44:2379
	at sun.nio.ch.Net.connect0(Native Method) ~[na:1.8.0_111]
	at sun.nio.ch.Net.connect(Net.java:454) ~[na:1.8.0_111]
	at sun.nio.ch.Net.connect(Net.java:446) ~[na:1.8.0_111]
	at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) ~[na:1.8.0_111]
	at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:331) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:254) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1266) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:546) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:531) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.connect(CombinedChannelDuplexHandler.java:494) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.CombinedChannelDuplexHandler.connect(CombinedChannelDuplexHandler.java:295) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:546) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:531) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:546) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:531) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:546) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:531) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:513) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
	at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:985) ~[netty-transport-4.1.7.Final.jar!/:4.1.7.Final]
```

This error occurs with a 3-node etcd cluster while performing load testing.

lburgazzoli commented 7 years ago

Bear in mind that each request opens/closes a new socket, so it is possible that you are hitting some limits on your OS.

Again, would it be possible to have a PR with a reproducer?

suresh-chaudhari commented 7 years ago

I can describe the scenario: we are executing 110 threads in parallel with a 5-minute load, and every time it fails after 2 minutes. I checked that within those 2 minutes we make 8000 requests against a 5-node etcd cluster.

Could you describe how this works and why this error is generated, so we can work around it? Thanks.
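Since each request opens and closes its own socket, closed sockets linger in TIME_WAIT and can exhaust the client's ephemeral port range; that is what "Cannot assign requested address" usually indicates. A rough back-of-the-envelope estimate from the numbers above (a sketch, assuming the common Linux TIME_WAIT duration of 60 s; not etcd4j code):

```java
// Estimates how many client sockets sit in TIME_WAIT at steady state,
// based on the reported load: 8000 requests in 2 minutes, one
// short-lived socket per request.
public class SocketEstimate {

    // Steady-state TIME_WAIT sockets ~= request rate * TIME_WAIT duration.
    static long timeWaitSockets(long requests, long windowSeconds, long timeWaitSeconds) {
        return requests * timeWaitSeconds / windowSeconds;
    }

    public static void main(String[] args) {
        long ratePerSecond = 8000 / 120;                      // ~66 requests/second
        long lingering = timeWaitSockets(8000, 120, 60);      // ~4000 sockets in TIME_WAIT
        System.out.println(ratePerSecond + " req/s, ~" + lingering + " sockets in TIME_WAIT");
    }
}
```

~4000 lingering sockets is below the usual default ephemeral port range (32768-60999 on many Linux systems), but retries and bursts multiply it, which could plausibly exhaust the ports toward a single destination.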

suresh-chaudhari commented 7 years ago

What do you mean by "you hit some limits on your OS"?

lburgazzoli commented 7 years ago

I do not know what you are doing in your threads, but if they are busy sending requests then etcd4j will open a socket for each request, so you may hit limits such as file/socket descriptors.

The stack trace shows that the networking stack (Netty) is throwing the error, so I'm unsure what etcd4j can do about it.
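One way to watch the descriptor usage mentioned above from inside the JVM while the load test runs (a sketch; it relies on the `com.sun.management` extension, which is present on HotSpot/OpenJDK on Unix-like systems but is not guaranteed by the Java spec):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

// Prints the process's current and maximum file-descriptor counts.
// Each open socket consumes one descriptor, so watching this during
// the load test shows whether the fd limit is being approached.
public class FdCheck {
    public static void main(String[] args) {
        Object os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                    + " / max: " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("fd counts not exposed on this platform");
        }
    }
}
```

Note that "Cannot assign requested address" points more at ephemeral port exhaustion than at the fd limit (which would normally surface as "Too many open files"), so both are worth checking.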

suresh-chaudhari commented 7 years ago

It is making 5 etcd requests per thread. Do you have any idea how to solve this? Is the issue that the port range cannot handle this many requests?

Can you explain this in more detail? That would help me resolve the issue.
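For reference, the reported load pattern (110 parallel threads, 5 etcd calls each) could be reproduced with a skeleton like the one below. The etcd4j call is replaced by a placeholder `Runnable` so the sketch is self-contained; in a real reproducer each task would perform a put/get through the client.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Skeleton of the load pattern described in this thread: `threads` workers
// run in parallel, each issuing `perThread` requests in a loop.
public class LoadSketch {

    static int runLoad(int threads, int perThread, Runnable request) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger completed = new AtomicInteger();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < perThread; i++) {
                    request.run(); // in the real test: an etcd4j put/get here
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        int total = runLoad(110, 5, () -> {}); // placeholder instead of etcd calls
        System.out.println(total + " requests completed");
    }
}
```

Wrapping the placeholder with real etcd4j calls and looping for the 5-minute window would give the isolated reproducer the maintainer asked for.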

lburgazzoli commented 7 years ago

I don't, as I do not know the reason. First I would check whether there is any networking issue or any limit being reached (ulimit on Linux), and I would run a similar test using curl (as it is very close to what etcd4j does).

What would help me is if you could create a simple, isolated test that reproduces the issue and send it as a PR, so I can try to dig into it as soon as I have some spare time.

suresh-chaudhari commented 7 years ago

I will try to reproduce this issue and will create a PR for it.

suresh-chaudhari commented 7 years ago

Hi, I am creating the etcd client using the lines below.

```java
EtcdClient etcdClient = new EtcdClient(getEtcdUrls());
etcdClient.setRetryHandler(new RetryWithTimeout(20, 10000));
return etcdClient;
```

Could the 10000 ms setting be causing this error?
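One possible reading of `RetryWithTimeout(20, 10000)` is "retry every 20 ms until 10000 ms have elapsed"; this is an assumption about the parameter order that should be verified against the etcd4j source. Under that reading, a single failing request could open a burst of sockets, which would amplify the port/descriptor pressure discussed earlier:

```java
// Hypothetical retry-amplification arithmetic for RetryWithTimeout(20, 10000),
// assuming the first parameter is the delay between retries and the second is
// the overall retry timeout. Each retry opens a fresh connection.
public class RetryMath {

    static int maxAttempts(int msBeforeRetry, int timeoutMs) {
        return timeoutMs / msBeforeRetry;
    }

    public static void main(String[] args) {
        int attempts = maxAttempts(20, 10000); // up to 500 connection attempts
        System.out.println("up to " + attempts + " connection attempts per failing request");
    }
}
```

If that reading is right, a larger delay between retries (or a shorter overall timeout) would sharply reduce connection churn when the cluster is slow to respond.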

lburgazzoli commented 7 years ago

No updates on this in months, going to close it.