Open portsip opened 3 years ago
Dear @portsip, xxhr doesn't use any threads: it relies on a Boost Asio io_context to schedule execution of requests, letting the asynchronous I/O mechanisms of the host OS come into play.
In an upcoming update it will be possible to provide your own io_context/io_service to xxhr, so that you can then decide to io_context.run() in as many threads as you want.
But as of today it actually uses just one thread: the calling thread.
Got it, thanks, this is really better than CPR.
I've tried to compile the example but it failed since there is no enum.h file included. Where can I get enum.h? I think it's from http://aantron.github.io/better-enums/, right?
Sorry, is xxhr really an asynchronous library? From my testing, if the destination host is down or unreachable, xxhr::GET blocks until it returns an error.
It is meant to be compiled with https://tipi.build; you can get it by following the onboarding instructions behind https://tipi.build/signin.
Then to build locally just run:
tipi . -t linux-cxx17
or tipi . -t windows-cxx17
or tipi . -t macos-cxx17
It is: the I/O frees the application from any active wait, and the OS comes back in the provided on_response handler when the response is there.
The GET example blocks because of the io.run() called within GET; in an upcoming update it will be possible to optionally pass the io_service to GET, and then the lib won't do the call to run() itself (it's a trivial change but this isn't there yet).
Thanks, may I know when the next update will be ready?
BR
Sure, it should be there at the latest in November; I'll mention you on the update. In the meantime you can wrap the calls in your own std::thread, and from there on it will be fully async.
As well, on WebAssembly targets we don't use Boost Asio but the underlying XMLHttpRequest API (in the future it should be the Fetch API), hence C++ web requests don't block the browser or the Node.js main loop.
Hi, does xxhr use a thread pool or std::async? CPR uses std::async to perform POST and GET for its asynchronous API, which causes massive numbers of threads if a lot of requests are performed.
Thanks