mp911de / spinach

Scalable Java Disque client
Apache License 2.0

Open connection is very slow #27

Closed thibaultmeyer closed 8 years ago

thibaultmeyer commented 8 years ago

Hi,

Opening a connection with spinach 0.3 takes a very long time (more than 4 seconds), even when connecting to 127.0.0.1. It also seems that connection pooling is not implemented?

We are using Disque 1.0-rc1 git_sha1:0192ba7e

Demo.java
// spinach 0.3 imports (package layout assumed)
import java.util.concurrent.TimeUnit;

import biz.paluch.spinach.DisqueClient;
import biz.paluch.spinach.DisqueURI;
import biz.paluch.spinach.api.DisqueConnection;
import biz.paluch.spinach.api.Job;
import biz.paluch.spinach.api.sync.DisqueCommands;

public class Demo {

    public void run() {
        final DisqueClient client = DisqueClient.create(DisqueURI.create("127.0.0.1", 7711));
        final long t1 = System.currentTimeMillis();
        final DisqueConnection<String, String> conn = client.connect();
        final long t2 = System.currentTimeMillis();
        System.err.println("time> " + (t2 - t1) + "ms");
        final DisqueCommands<String, String> sync = conn.sync();

        System.err.println("Add job...");
        String jobId = sync.addjob("queue", "body", 10, TimeUnit.SECONDS);
        System.err.println("Job ID: " + jobId);

        System.err.println("Get job #" + jobId);
        Job<String, String> job = sync.getjob("queue");
        System.err.println("Job #" + jobId + " retrieved");

        System.err.println("ACK job #" + jobId);
        sync.ackjob(job.getId());
        System.err.println("Done");

        sync.close();
    }
}
MainEntry.java
public class MainEntry {

    public static void main(final String[] args) {
        Demo demo = new Demo();
        demo.run();
        demo.run();
    }
}
Console output

time> 4742ms
Add job...
Job ID: D-139c8829-OgnUFABXlzGo6KfhIQhmpkYG-05a1
Get job #D-139c8829-OgnUFABXlzGo6KfhIQhmpkYG-05a1
Job #D-139c8829-OgnUFABXlzGo6KfhIQhmpkYG-05a1 retrieved
ACK job #D-139c8829-OgnUFABXlzGo6KfhIQhmpkYG-05a1
Done
time> 4511ms
Add job...
Job ID: D-139c8829-MG3WVpBTU1+kJJ8NjXJbmyFl-05a1
Get job #D-139c8829-MG3WVpBTU1+kJJ8NjXJbmyFl-05a1
Job #D-139c8829-MG3WVpBTU1+kJJ8NjXJbmyFl-05a1 retrieved
ACK job #D-139c8829-MG3WVpBTU1+kJJ8NjXJbmyFl-05a1
Done

mp911de commented 8 years ago

Thanks for your report. These are quite high latencies. Spinach does not use connection pooling because connections are designed to be long-lived, with auto-reconnection. Connections are thread-safe, so they can be reused across your application. The only reason to use multiple connections (typically two) is to dedicate one connection to producing jobs and the other to retrieving jobs; see the sketch below. That said, connection pooling won't happen for spinach.
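
A minimal sketch of that pattern, reusing only the spinach calls already shown in the snippet above (the JobPipeline class and its method names are purely illustrative):

JobPipeline.java
public class JobPipeline {

    private final DisqueClient client;
    private final DisqueCommands<String, String> producer;
    private final DisqueCommands<String, String> consumer;

    public JobPipeline(final String host, final int port) {
        // One client, created once; it owns the connections and thread resources.
        this.client = DisqueClient.create(DisqueURI.create(host, port));
        // Long-lived, thread-safe connections: one for producing, one for retrieving jobs.
        this.producer = client.connect().sync();
        this.consumer = client.connect().sync();
    }

    public String enqueue(final String queue, final String body) {
        return producer.addjob(queue, body, 10, TimeUnit.SECONDS);
    }

    public void processOne(final String queue) {
        final Job<String, String> job = consumer.getjob(queue);
        // ... process the job here ...
        consumer.ackjob(job.getId());
    }

    public void shutdown() {
        // Releases the connections and the client's threads.
        client.shutdown();
    }
}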

Running the code results for me in:

time> 171ms
Add job...
Job ID: D-4902ccb5-niZk5Am7u5C9/T0hYXW2Nib0-05a1
Get job #D-4902ccb5-niZk5Am7u5C9/T0hYXW2Nib0-05a1
Job #D-4902ccb5-niZk5Am7u5C9/T0hYXW2Nib0-05a1 retrieved
ACK job #D-4902ccb5-niZk5Am7u5C9/T0hYXW2Nib0-05a1
Done
time> 11ms
Add job...
Job ID: D-4902ccb5-vA8nZ+Qv75ScHCUj4yFcHK00-05a1
Get job #D-4902ccb5-vA8nZ+Qv75ScHCUj4yFcHK00-05a1
Job #D-4902ccb5-vA8nZ+Qv75ScHCUj4yFcHK00-05a1 retrieved
ACK job #D-4902ccb5-vA8nZ+Qv75ScHCUj4yFcHK00-05a1

The first connection attempt is massively slower than the following ones because netty and its components need to be initialized.

Please change the log level to TRACE for com.lambdaworks, biz.paluch.spinach, and io.netty and post the debug log here for further analysis. If you use log4j and property-based configuration, then the following lines should do the job:

log4j.logger.com.lambdaworks=TRACE
log4j.logger.biz.paluch.spinach=TRACE
log4j.logger.io.netty=TRACE

Final remark: please reuse the DisqueClient instance as much as possible, since creating a DisqueClient also creates a set of threads, which are expensive resources. Call shutdown() on the client once you no longer need the instance, to shut down and clean up all resources; this usually happens when your application shuts down.
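
For example, the MainEntry / Demo pair from the report could create the client and connection once, reuse them for both runs, and shut down at the end. A minimal sketch, using only the calls already shown in this thread:

MainEntry.java
public class MainEntry {

    public static void main(final String[] args) {
        // Create the client once; this is where the expensive thread resources are allocated.
        final DisqueClient client = DisqueClient.create(DisqueURI.create("127.0.0.1", 7711));
        final DisqueCommands<String, String> sync = client.connect().sync();

        // Reuse the same connection for both runs instead of reconnecting.
        for (int i = 0; i < 2; i++) {
            final long t1 = System.currentTimeMillis();
            final String jobId = sync.addjob("queue", "body", 10, TimeUnit.SECONDS);
            final Job<String, String> job = sync.getjob("queue");
            sync.ackjob(job.getId());
            System.err.println("Processed job #" + jobId + " in " + (System.currentTimeMillis() - t1) + "ms");
        }

        // Shut down the client when the application exits to release all resources.
        client.shutdown();
    }
}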

mp911de commented 8 years ago

Do you have any update on this issue?

mp911de commented 8 years ago

Closing, no further activity.