dfleury2 / beauty

A simple C++ HTTP server/client built on top of Boost.Beast
MIT License
188 stars · 23 forks

HTTP Client set thread pool #26

Open ceyert opened 1 year ago

ceyert commented 1 year ago

Is there any way to set the number of threads for the HTTP client? It seems the HTTP client has only one worker thread, so it processes async requests in order.

For example: /endpoint1 (3 sec processing delay), /endpoint2 (10 sec processing delay).

When sending 10 requests with the HTTP client in async mode, it still waits for the server's response before sending the next request.

dfleury2 commented 1 year ago

Hi, in async mode it should not wait for request completion. The first request establishes the connection; a second request is queued unless the first has already completed. Do you have a full, simple working sample?

Regards,

ceyert commented 1 year ago

Thank you. The point is that the HTTP client should receive the /endpoint1 callbacks after 3 seconds, even while the server is still handling the /endpoint2 request for 10 seconds.

Here is a simple use case: an async HTTP server (Python, aiohttp).

The HTTP client sends 10 requests to each of these endpoints (/endpoint1 and /endpoint2):

Current logs:

endpoint1: 1 sent. - TIME: 49:21.945
endpoint2: 1 sent. - TIME: 49:22.946
endpoint1: 2 sent. - TIME: 49:23.946
endpoint2: 2 sent. - TIME: 49:24.947
endpoint1 success: {"id": 1} - TIME: 49:24.962
endpoint1: 3 sent. - TIME: 49:25.947
endpoint2: 3 sent. - TIME: 49:26.948
endpoint1: 4 sent. - TIME: 49:27.948
endpoint1 success: {"id": 1} - TIME: 49:27.978
endpoint2: 4 sent. - TIME: 49:28.949
endpoint1: 5 sent. - TIME: 49:29.950
endpoint2: 5 sent. - TIME: 49:30.950
endpoint1: 6 sent. - TIME: 49:31.951
endpoint2: 6 sent. - TIME: 49:32.951
endpoint1: 7 sent. - TIME: 49:33.952
endpoint2: 7 sent. - TIME: 49:34.953
endpoint1: 8 sent. - TIME: 49:35.953
endpoint2: 8 sent. - TIME: 49:36.954
endpoint1: 9 sent. - TIME: 49:37.954
endpoint2 success: {"id": 1} - TIME: 49:38.030
endpoint2: 9 sent. - TIME: 49:38.955
endpoint1: 10 sent. - TIME: 49:39.956
endpoint2: 10 sent. - TIME: 49:40.956
endpoint2 success: {"id": 1} - TIME: 49:48.066
endpoint1 success: {"id": 2} - TIME: 49:51.083
endpoint1 success: {"id": 2} - TIME: 49:54.100
endpoint2 success: {"id": 2} - TIME: 50:04.152
endpoint2 success: {"id": 2} - TIME: 50:14.204
endpoint1 success: {"id": 3} - TIME: 50:17.221
endpoint1 success: {"id": 3} - TIME: 50:20.228
endpoint2 success: {"id": 3} - TIME: 50:30.280
endpoint2 success: {"id": 3} - TIME: 50:40.284

Expected: the /endpoint1 responses should arrive without waiting for /endpoint2 processing.

Example:

/endpoint1 sent 1
/endpoint2 sent 1
/endpoint1 sent 2
/endpoint2 sent 2
.....
/endpoint1 received 1 (3 sec later)
/endpoint1 received 2 (3 sec later)
/endpoint1 received 3 (3 sec later)
/endpoint2 received 1 (10 sec later)
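The interleaving expected above can be simulated with a small asyncio sketch (hypothetical endpoint names, with delays scaled down for speed): when requests are handled concurrently, the fast endpoint's response comes back before the slow one's, regardless of send order.

```python
import asyncio

async def call_endpoint(name: str, delay: float, completed: list) -> None:
    # Simulate a request whose response arrives after `delay` seconds.
    await asyncio.sleep(delay)
    completed.append(name)

async def main() -> list:
    completed = []
    # Fire both requests concurrently; neither blocks the other.
    await asyncio.gather(
        call_endpoint("endpoint2", 0.2, completed),   # slow endpoint
        call_endpoint("endpoint1", 0.05, completed),  # fast endpoint
    )
    return completed

order = asyncio.run(main())
print(order)  # endpoint1 (fast) finishes before endpoint2 (slow)
```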

ceyert commented 1 year ago

"Hi, in async mode, it should not wait for request completion. The first request will establish the connection, then the second request will be queued unless the first is already completed. Do you have a full simple working sample?"

Why should we wait for the first request to finish before processing the next one? Could we use something like a callback queue that fires when a request finishes, so we don't have to wait before sending the next request?

Best Regards.

dfleury2 commented 1 year ago

The first request is effectively sent to the server only once the connection is established, which can take some time.

During that wait, the user can send a second request before the first one has been processed (connection not yet established, or no response yet), so I need to queue it.

When the first request's send-completion callback fires, the second one is sent, without waiting for the first request's response.

In your example, are you sure it is not the server that waits for completion before processing another request? I assume you send one request every 1 s, alternating between the two endpoints.
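The queuing behavior described above can be modeled in a few lines of asyncio (a sketch of the intended design, not beauty's actual code): each write goes out as soon as the previous write completes, while responses are awaited independently, so a slow response never delays the next send.

```python
import asyncio

async def run_model():
    events = []

    async def handle_response(name: str, delay: float) -> None:
        # The response arrives `delay` seconds after the request was sent.
        await asyncio.sleep(delay)
        events.append(("received", name))

    async def writer(queue: list) -> None:
        # Send each queued request as soon as the previous *write* is done;
        # do not wait for the previous *response*.
        for name, delay in queue:
            events.append(("sent", name))
            asyncio.ensure_future(handle_response(name, delay))

    await writer([("slow", 0.2), ("fast", 0.05)])
    await asyncio.sleep(0.3)  # let all responses arrive
    return events

events = asyncio.run(run_model())
print(events)  # both sends happen first; the fast response overtakes the slow one
```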

ceyert commented 1 year ago

Yes, it's just a dummy server in Python:

import asyncio
from aiohttp import web

async def endpoint1(request):
    request_data = await request.json()
    id = request_data.get('id')
    print("endpoint1 received.. ", id)
    await asyncio.sleep(2)
    response_data = {'id': id}
    return web.json_response(response_data)

async def endpoint2(request):
    request_data = await request.json()
    id = request_data.get('id')
    print("endpoint2 received.. ", id)
    await asyncio.sleep(10)
    response_data = {'id': id}
    return web.json_response(response_data)

async def endpoint3(request):
    print("endpoint3 received.. ")
    await asyncio.sleep(2)
    response_data = {'id': "id"}
    return web.json_response(response_data)

async def endpoint4(request):
    print("endpoint4 received.. ")
    await asyncio.sleep(10)
    response_data = {'id': "id"}
    return web.json_response(response_data)

async def handler(request):
    if request.path == '/endpoint1':
        return await endpoint1(request)
    elif request.path == '/endpoint2':
        return await endpoint2(request)
    elif request.path == '/endpoint3':
        return await endpoint3(request)
    elif request.path == '/endpoint4':
        return await endpoint4(request)

async def init():
    app = web.Application()
    app.router.add_post('/endpoint1', handler)
    app.router.add_post('/endpoint2', handler)
    app.router.add_get('/endpoint3', handler)
    app.router.add_get('/endpoint4', handler)
    return app

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    app = loop.run_until_complete(init())
    web.run_app(app, port=8081)

And on the client side: 10 iterations, each of which sends an /endpoint1 request, waits 1 sec, then sends an /endpoint2 request and waits 1 sec.

"When the first send completion callback is done, the second one is sent" I think this is root cause. I believe that, we should not wait to completion for send other one, this causes blocking requests.

Best Regards

dfleury2 commented 1 year ago

"When the first send completion callback is done, the second one is sent

This is when the SEND is done, not when the response is RECEIVED.

I will make an example (I think there is already a UT that shows it works, but it was a long time ago, so it will remind me).

dfleury2 commented 1 year ago

Maybe this is another issue: can you check whether the Python implementation respects keep-alive? It will cause issues if it doesn't.
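One way to check keep-alive behavior from the client side (a stdlib sketch with a throwaway server, not specific to aiohttp or beauty) is to reuse a single http.client connection for two requests; with HTTP/1.1 keep-alive the same socket serves both.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 defaults to keep-alive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Content-Length is required for the connection to stay open
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # silence request logging

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Two requests over ONE connection: with keep-alive the socket is reused.
conn = http.client.HTTPConnection("127.0.0.1", port)
bodies = []
for _ in range(2):
    conn.request("GET", "/")
    bodies.append(conn.getresponse().read())
conn.close()
server.shutdown()
print(bodies)  # both requests succeeded on the same connection
```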

ceyert commented 1 year ago

Yes, it seems the same response order still occurs even with keep-alive enabled.
You can test it in your local environment to observe the result.

Best Regards.

dfleury2 commented 1 year ago

I have made a test, and you are correct.

Here is my example, based on server_long_transaction.cpp and client_async.cpp.

Here is the server (the Python equivalent):

#include <beauty/beauty.hpp>

int main() {
  auto request_handler = [](auto &res, int delay_s) {
    res.postpone();

    beauty::after(delay_s, [&, d = delay_s]() {
      std::cout << "Sleep for " << d << "s done..." << std::endl;
      res.body() = std::to_string(d) + "s";
      res.done();
    });
  };

  beauty::server server;

  server
      .get("/slow-response",
           [&](const auto &req, auto &res) { request_handler(res, 7); })
      .get("/fast-response",
           [&](const auto &req, auto &res) { request_handler(res, 2); });

  // Open the listening port
  server.listen(8085);

  // Run the event loop - warning: this spawns a new thread (may be changed later)
  server.wait();
}

Here is client_async.cpp:

#include <beauty/beauty.hpp>

#include <chrono>
#include <iostream>

int main() {
  int sent = 0;
  int received = 0;

  auto response_handler = [&](auto ec, auto &&response) {
    // Check the result
    ++received;

    if (!ec) {
      if (response.is_status_ok()) {
        // Display the body received
        std::cout << std::chrono::duration_cast<std::chrono::seconds>(
                         std::chrono::system_clock::now().time_since_epoch())
                  << ": get: " << response.body() << std::endl;
      } else {
        std::cout << response.status() << std::endl;
      }
    } else {
      // An error occurred
      std::cout << ec << ": " << ec.message() << std::endl;
    }
  };

  beauty::client client;  // client outside the loop will cause the issue

  // Need to wait a little bit to receive the response
  for (int i = 0; i < 10; ++i) {
    // Request an URL
    // beauty::client client; // client inside the loop will be fine
    auto url = std::string("http://127.0.0.1:8085/") +
               (!(i % 2) ? "slow" : "fast") + "-response";
    std::cout << std::chrono::duration_cast<std::chrono::seconds>(
                     std::chrono::system_clock::now().time_since_epoch())
              << ": query: " << url << std::endl;
    client.get(url, response_handler);
    ++sent;
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
  }
  std::cout << std::endl;

  while (sent != received) {
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
  }
}

When I was using the beauty client, I advised instantiating the client inside the loop, due to an issue with keep-alive connection management (an issue I have not fixed yet).

For your issue: looking at the client code (session_client.hpp), the second write is done only after the read completes. That is probably not the right way to do it in this case, even though I followed the Beast example when writing it... I need to make sure I can interleave reads/writes for multiple requests/responses.

Anyway, if you want to use the beauty client with long transactions, it's better to use one client for each query ("inside the loop" mode), at least until I fix the keep-alive issue.

Regards,
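The "one client per query" workaround can be illustrated with a stdlib sketch (hypothetical paths, with delays scaled down): each request gets its own connection, so a slow response on one connection never holds up another.

```python
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

DELAYS = {"/slow": 0.3, "/fast": 0.05}

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    def do_GET(self):
        time.sleep(DELAYS[self.path])  # simulate per-endpoint processing delay
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # silence request logging

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(path: str, done: list) -> None:
    # One connection per request: the "client inside the loop" mode.
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", path)
    conn.getresponse().read()
    done.append(path)
    conn.close()

done = []
threads = [threading.Thread(target=fetch, args=(p, done))
           for p in ("/slow", "/fast")]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()
print(done)  # /fast completes before /slow: no head-of-line blocking
```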

ceyert commented 1 year ago

Thank you for your observations. May I ask for an estimated time to address this issue? We are considering using your library in our product, and asynchronicity is a high priority for us.

And another thing I want to ask: why does this library require C++20? Would C++17 be possible, at least?

Best Regards

dfleury2 commented 1 year ago

Hi, happy to know you are considering it for production use.

The server part is ready for production. The client is more of a "beta" product; at the very least you MUST instantiate a client for each request. Unfortunately, when I designed the API, I saw quickly enough that it would be hard to get it right without some design choices.

As for the fix, I am not even sure I can interleave requests and replies this way. I will need to test it to be sure not to break how it works now. I don't have a lot of time since I have moved on to a new project in another branch. I will see if I can find time for this. Or you can fix it and create a pull request once it works the way you want.

Regards,

dfleury2 commented 1 year ago

Beauty can now compile with the C++17 standard, after removing some designated initializers.

ceyert commented 1 year ago

Thank you for the recent updates. May I ask, are these changes available in the Conan repository as well? (https://conan.io/center/recipes/beauty?version=1.0.0-rc1)

ceyert commented 1 year ago

Any updates regarding publishing on Conan with C++17?

dfleury2 commented 1 year ago

My Conan account was deactivated; I will need to reactivate it. In the meantime, you should be able to pull the conan-center repo and add the recipe locally, so you can create the package directly from the beauty GitHub repository.