chriskohlhoff / asio

Asio C++ Library
http://think-async.com/Asio

Performance issues when large-scale request. #406

Open runforu opened 5 years ago

runforu commented 5 years ago

I have a performance issue when using the ASIO examples, and I have no idea how to solve it. I tested the example code at https://github.com/chriskohlhoff/asio/tree/master/asio/src/examples/cpp03/http/server3. The HTTP server just echoes the client's request body, and each request body is less than 250 bytes. The response time is under 100 ms when there are few requests, but it can exceed 3 seconds, or even 10 seconds, when there are more than 1000 requests. I traced the time from accepting the connection to completing the data write. The HTTP server is hosted on Windows.

raJeev-M commented 5 years ago

I've not done any performance testing, but what matters more here is your hardware configuration and your network bandwidth (client and server); that crucial information is missing.

I don't understand this statement: "...the time is almost 0ms....."

arun11299 commented 5 years ago

@runforu Does the example disable Nagle's algorithm? What is the range of ephemeral ports on your system? If you have a "netstat"-like utility, you can check whether packets are queuing up in the send/receive queues.

raJeev-M commented 5 years ago

Do we currently have an option to set TCP_NODELAY via socket_option?

maksqwe commented 5 years ago

https://www.boost.org/doc/libs/1_70_0/doc/html/boost_asio/reference/ip__tcp/no_delay.html

raJeev-M commented 5 years ago

Thank you, I was looking at the wrong file.

runforu commented 5 years ago

I have turned on the TCP_NODELAY option. I guess the problem may be caused by too many non-persistent connections to the server. Performance suffers when there are too many connections that just connect, request some data, and close. In the sample code, the server closes the socket on both sides once the data has been written.

runforu commented 5 years ago

TCP_NODELAY doesn't make the performance better.

runforu commented 5 years ago

The problem is easy to reproduce. On the client, create as many threads as you can and have each send requests to the HTTP server, logging the time taken by each request. You will find that some requests take more than 3 seconds, even though the server just echoes a small string or the request body.

raJeev-M commented 5 years ago

TCP_NODELAY doesn't make the performance better.

I suspected that.

raJeev-M commented 5 years ago

...create as many threads as you can ...You can find that some request will take more than 3 seconds even the server just echo a small string or the request body.

I suspect the 3 seconds is not the time taken by the server to finish its work; it may also include OS scheduler overhead (many threads running concurrently on a preemptive scheduler?).

Why run your client with that many threads? Why not run it asynchronously? A thread per request defeats the purpose of Asio, too.

runforu commented 5 years ago

On the released HTTP server, there are many clients making HTTP requests, and some clients wait more than 3 seconds for a response.

raJeev-M commented 5 years ago

Can you be more explicit about your setup? Are you running the client and server on the same machine? What is the server-side throughput? Have you tracked the handlers? You say "create as many threads as you can": how many did you create? Aren't you worried about thread over-subscription?

runforu commented 5 years ago

I have created a repository at https://github.com/runforu/HttpServer. To reproduce the issue, run the HTTP server, then run an HTTP client that creates 1000 threads, each of which posts JSON to the server as follows:

try {
    boost::asio::io_context io_context;

    // Resolve the server name to a list of endpoints.
    boost::asio::ip::tcp::resolver resolver(io_context);
    boost::asio::ip::tcp::resolver::results_type endpoints = resolver.resolve(host, port);
    boost::asio::ip::tcp::socket socket(io_context);

    SYSTEMTIME time0;
    GetLocalTime(&time0);

    boost::asio::connect(socket, endpoints);

    // Build an HTTP/1.0 POST request with a JSON body.
    boost::asio::streambuf request;
    std::ostream request_stream(&request);
    request_stream << "POST " << path << " HTTP/1.0\r\n";
    request_stream << "Host: " << host << "\r\n";
    request_stream << "Accept: */*\r\n";
    request_stream << "Connection: close\r\n";
    request_stream << "Content-Length: " << m_content.length() << "\r\n";
    request_stream << "Content-Type: application/json\r\n\r\n";
    request_stream << m_content;

    boost::asio::write(socket, request);

    // Read at least one byte of the response; the server closes the
    // connection after replying.
    boost::asio::streambuf response;
    boost::system::error_code error;
    boost::asio::read(socket, response, boost::asio::transfer_at_least(1), error);

    SYSTEMTIME time1;
    GetLocalTime(&time1);

    std::cout << "Request takes " << DiffTime(time1, time0) << std::endl;
    std::cout << "Request complete.\n";
    socket.close();
} catch (std::exception& e) {
    std::cout << "Exception: " << e.what() << "\n";
}
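A side note on the measurement itself: `GetLocalTime`/`SYSTEMTIME` is Windows-only, has roughly 10-16 ms resolution, and reports wall-clock time that can jump (DST, clock adjustments). A portable, monotonic alternative (a sketch; the helper `time_ms` is hypothetical, not from the repo) uses `std::chrono::steady_clock`:

```cpp
#include <chrono>

// Measure elapsed wall-clock milliseconds of a callable using a
// monotonic clock that is immune to system-time adjustments.
template <typename Fn>
double time_ms(Fn&& fn) {
    auto t0 = std::chrono::steady_clock::now();
    fn();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Usage would be `double ms = time_ms([&] { /* connect, write, read */ });` around the request above.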