reactor / reactor-netty

TCP/HTTP/UDP/QUIC client/server with Reactor over Netty
https://projectreactor.io
Apache License 2.0

HttpClient usage #151

Open alexeybozhev opened 7 years ago

alexeybozhev commented 7 years ago

I need to send a lot of concurrent HTTP requests per second (from 1k to 20k per second, for example). This could probably be implemented with pure Netty, and it would work very fast. However, I would like to take advantage of the nice API that reactor-netty provides, so I'm using reactor.ipc.netty.http.client.HttpClient. But it shows very high latency, even with a low number of requests. I'm doing a simple measurement, like this:

HttpClient httpClient = HttpClient.create();
AtomicLong total = new AtomicLong(0);
for (int i = 0; i < 1234; i++) {
    long start = System.currentTimeMillis(); // per-request start time
    httpClient.get("https://jsonplaceholder.typicode.com/posts/1")
              .timestamp() // pairs the completion time with the response
              .subscribe(tuple2 -> total.getAndAdd(tuple2.getT1() - start));
}
System.in.read(); // crude wait for the async requests to finish
System.out.println("total = " + total.get() / 1234);

The snippet above prints an average of about 1000 ms. That is unacceptable for my use case (REST calls between microservices). Is my use case suitable for your library, and if so, how do I achieve good results? I'm using reactor-core 3.0.7 and reactor-netty 0.6.4.
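For what it's worth, the measurement pattern itself can be reproduced without reactor-netty. The sketch below is illustrative only: the class name and request count are invented, and it uses the JDK 11+ `java.net.http` client against a throwaway local server instead of a remote URL (so network variance is excluded). It records a per-request start time and waits for all responses deterministically rather than via `System.in.read()`:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicLong;

public class HttpLatencyDemo {

    // Fires n concurrent GETs at a local server and returns the average
    // per-request latency in milliseconds (start time captured per request).
    static long averageLatencyMillis(int n) throws Exception {
        // Tiny local server so the benchmark is self-contained.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        URI uri = URI.create("http://localhost:" + server.getAddress().getPort() + "/");

        HttpClient client = HttpClient.newHttpClient();
        AtomicLong total = new AtomicLong();
        CompletableFuture<?>[] inFlight = new CompletableFuture<?>[n];
        for (int i = 0; i < n; i++) {
            long start = System.currentTimeMillis(); // per-request start time
            inFlight[i] = client
                    .sendAsync(HttpRequest.newBuilder(uri).build(),
                               HttpResponse.BodyHandlers.ofString())
                    .thenAccept(r -> total.getAndAdd(System.currentTimeMillis() - start));
        }
        CompletableFuture.allOf(inFlight).join(); // deterministic wait, no System.in.read()
        server.stop(0);
        return total.get() / n;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("avg = " + averageLatencyMillis(200) + " ms");
    }
}
```

Running something like this alongside the reactor-netty version helps separate client overhead from network and server effects.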

smaldini commented 7 years ago

In this sample you are starting 1234 parallel requests immediately; you cannot expect the same latency as if you ran them one by one. Did you try block() instead of subscribe() if you want to benchmark in a for loop?
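The point here, that firing all requests at once measures queueing delay rather than per-request latency, can be illustrated with plain JDK executors. This is a minimal sketch, not reactor-netty code: the class name, pool size, task count, and sleep duration are all invented for the demo, with sleeps standing in for HTTP calls:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class LatencyDemo {

    // Average "latency" (submit-to-completion time) of n tasks, each taking
    // taskMillis, when run on a fixed pool of poolSize workers.
    static long averageLatencyMillis(int n, int poolSize, long taskMillis) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicLong total = new AtomicLong();
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            long start = System.currentTimeMillis(); // submit time, as in the benchmark above
            pool.execute(() -> {
                try {
                    Thread.sleep(taskMillis); // stands in for one HTTP round trip
                } catch (InterruptedException ignored) {
                    Thread.currentThread().interrupt();
                }
                total.getAndAdd(System.currentTimeMillis() - start);
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return total.get() / n;
    }

    public static void main(String[] args) throws Exception {
        // 100 tasks of 20 ms each, but only 4 workers: most of the measured
        // "latency" is time spent waiting in the queue, not doing work.
        System.out.println("avg latency = " + averageLatencyMillis(100, 4, 20) + " ms");
    }
}
```

With 100 tasks on 4 workers, each task takes 20 ms but the average submit-to-completion time is an order of magnitude larger, which is exactly the effect in the 1234-request loop.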

alexeybozhev commented 7 years ago

Basically, I want to check how fast X requests (preferably, where X is larger than 1000) can run at the same time with minimal latency. For example, calling another Netty-based solution gives me much lower latency on my PC. Eventually, I'm trying to make a proof-of-concept app that will serve as a highly concurrent reverse proxy / API gateway in Java. Of course, I've tested different URLs in different environments to exclude network problems, etc.

smaldini commented 7 years ago

I'd be interested to compare; we are reworking the internals as well as the API. Can you share the code for the other Netty solution?

Other than that, one item that helps in these scenarios is still left to do: HTTP pipelining. Netty HTTP clients usually use this to maximize reuse of a single connection, and hopefully we'll be able to deliver it by 0.7.0.

On Aug 8, 2017, at 3:13 AM, Alexey wrote:

> Basically, I want to check how fast X requests (preferably, where X is larger than 1000) can run at the same time with minimal latency. For example, calling another Netty-based solution gives me 20 ms in this loop on my PC. Eventually, I'm trying to make a proof-of-concept app that will serve as a highly concurrent reverse proxy / API gateway in Java.

alexeybozhev commented 7 years ago

I've used https://github.com/AsyncHttpClient/async-http-client, making GET requests in the same loop.

DefaultAsyncHttpClientConfig config = new DefaultAsyncHttpClientConfig.Builder()
        .setMaxConnections(1000)
        .setMaxConnectionsPerHost(1000)
        .setIoThreadsCount(8)
        .setTcpNoDelay(true)
        .setSoReuseAddress(true)
        .setUseNativeTransport(false)
        .build();
AsyncHttpClient asyncHttpClient = new DefaultAsyncHttpClient(config);
BoundRequestBuilder boundRequestBuilder = asyncHttpClient.prepareGet("demo-url");
// this goes in the loop; printTimeStamp() is my timing helper
boundRequestBuilder.execute().toCompletableFuture().thenApply(response -> printTimeStamp());

Currently I'm testing Ratpack, but I'm not done yet. For now, the results look very promising.