nxxm / xxhr

intuitive c++ http client library
https://nxxm.github.io/xxhr

Does xxhr use a thread pool or std::async? #13

Open portsip opened 3 years ago

portsip commented 3 years ago

Hi, does xxhr use a thread pool or std::async? Since CPR uses std::async to perform POST and GET for its asynchronous API, it can spawn a massive number of threads when many requests are performed.

Thanks

daminetreg commented 3 years ago

Dear @portsip, xxhr doesn't use any threads: it relies on a Boost Asio io_context to schedule the execution of requests, letting the asynchronous I/O mechanisms of the host OS come into play.

In an upcoming update it will be possible to provide your own io_context/io_service to xxhr, so that you can decide to call io_context.run() on as many threads as you want.

But as of today it actually uses just 1 thread: the calling thread.

portsip commented 3 years ago

> xxhr doesn't use any threads, it relies on a Boost Asio io_context to schedule execution of requests [...] as of today it actually uses just 1 thread: the calling thread.

Got it, thanks. This is really better than CPR.

portsip commented 3 years ago

I've tried to compile the example but failed, since there is no enum.h file included. Where can I get enum.h? I think it's from http://aantron.github.io/better-enums/, right?

portsip commented 3 years ago

> xxhr doesn't use any threads, it relies on a Boost Asio io_context to schedule execution of requests [...] as of today it actually uses just 1 thread: the calling thread.

Sorry, is xxhr really an asynchronous library? In my tests, if the destination host is down or unreachable, xxhr::GET blocks until it returns an error.

daminetreg commented 3 years ago

> I've tried to compile the example but failed, since there is no enum.h file included. Where can I get enum.h?

It is meant to be compiled with https://tipi.build; you can get it by following the onboarding instructions behind https://tipi.build/signin.

Then to build locally just run `tipi . -t linux-cxx17`, `tipi . -t windows-cxx17` or `tipi . -t macos-cxx17`.

> Sorry, is xxhr really an asynchronous library? If the destination host is down or unreachable, xxhr::GET blocks until it returns an error.

It is: the I/O frees the application from any active wait, and the OS comes back in the provided on_response handler once the response is ready.

The GET example blocks because of the io.run() called within GET; in an upcoming update it will be possible to optionally pass the io_service to GET, and then the lib won't call run() itself (it's a trivial change, but it isn't there yet).

portsip commented 3 years ago

> In an upcoming update it will be possible to optionally pass the io_service to GET, and then the lib won't call run() itself (it's a trivial change, but it isn't there yet).

Thanks, may I know when the next update will be ready?

BR

daminetreg commented 3 years ago

Sure, it should be ready by November at the latest; I'll mention you in the update. In the meantime you can wrap the calls in your own std::thread, and from there on it will be fully async.

As well, on WebAssembly targets we don't use Boost Asio but the underlying XMLHttpRequest API (in the future it should be the fetch API), so C++ web requests don't block the browser or the Node.js main loop.