SethHamilton opened 6 years ago
Just a quick response regarding making the 100 requests: if you call io_service->run() on each request, you need to call io_service->reset() before calling io_service->run() again. For instance, try applying the following changes to http_examples.cpp:
diff --git a/http_examples.cpp b/http_examples.cpp
index 3e88570..726df6b 100644
--- a/http_examples.cpp
+++ b/http_examples.cpp
@@ -229,12 +229,16 @@ int main() {
     cerr << "Client request error: " << e.what() << endl;
   }
 
-  // Asynchronous request example
-  client.request("POST", "/json", json_string, [](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {
-    if(!ec)
-      cout << response->content.rdbuf() << endl;
-  });
-  client.io_service->run();
+  for(int c = 0; c < 100; ++c) {
+    // Asynchronous request example
+    client.request("POST", "/json", json_string, [](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {
+      if(!ec)
+        cout << response->content.rdbuf() << endl;
+    });
+    std::cout << c << endl;
+    client.io_service->reset();
+    client.io_service->run();
+  }
 
   server_thread.join();
 }
I'll answer your other questions tomorrow morning hopefully, but generally, in quite many applications one would have only one io_service->run() call, that is, one event/main loop. Although it depends on your application, of course.
Wow. Thank you. That made sense.
I made a bunch of threads, detached them, and made them all share the same client, called reset then run, and it worked. I also stole the server.io_service; that seemed to work as well (I'm actually connecting back to myself using your web server code). Slick.
Update: I had to make a new io_service object for the clients; stealing the server's io_service did not seem to work (I must have been imagining that it did).
If you are already using the server's io_service, copy it into your clients and call server->start(). If you are using one server thread, only one event thread will be running and all your event handlers will run sequentially (the most common use case).
Let's say you do not have a server, but still want to run all your clients in one event thread. Then I would create one io_service and use the io_service::work class to let io_service::run() continue and wait for more work instead of returning when its work is done.
I have some example here that I made for my students: https://github.com/ntnu-tdat2004/asio/blob/master/example.cpp
@SethHamilton Regarding your last update: here is an example server and client both using the same io_service running on 1 thread (all handlers, server and client handlers, are run sequentially):
#include "client_http.hpp"
#include "server_http.hpp"

using namespace std;

using HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;
using HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;

int main() {
  HttpServer server;
  server.config.port = 8080;
  server.resource["^/$"]["GET"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {
    response->write("test1");
  };

  thread server_thread([&server]() {
    // Start server
    server.start();
  });

  std::this_thread::sleep_for(1s);

  HttpClient client("localhost:8080");
  client.io_service = server.io_service;
  client.request("GET", [](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {
    if(!ec)
      cout << response->content.string() << endl;
    else
      cout << "error: " << ec.message() << endl;
  });

  server_thread.join();
}
@eidheim Thank you. Interestingly, creating clients from application threads using a shared io_service causes stalling, where nothing happens until a timeout occurs. I've had to move back to your default method, where your client class creates a fresh io_service object.
I think this makes a thread per client. This differs from libuv, where a main event loop handled everything; if you wanted to make a request, you stuffed an event into the main event loop to kick it off.
Do you think this is normal behavior? It seems to work (very) well. If these io_service objects are creating threads, I may have to think about adding a connection pool to my app.
Could you have a second look at my example above? The clients using an external io_service should not stall as long as the external io_service is running (waiting for tasks/work). In the example above, the io_service is run inside the server.start() method.
With respect to libuv, using one io_service results in the same kind of event loop, as far as I understand libuv. But with asio you need the io_service to be waiting for tasks. The cleanest approach is to use the work class as shown in https://github.com/ntnu-tdat2004/asio/blob/master/example.cpp. But when you already have a running io_service in a server (like in the above example), the io_service is waiting for tasks just as it would be if the work class were used.
I had almost exactly your example at first. I even had the sleep to wait for the server to start, and set the server.io_service. My clients are created elsewhere in the project, in worker threads. I will play with this some more; what you are describing is what I would expect.
Of interest: using the server's io_service, it would sometimes "resume" everything after a client had a timeout event.
Make sure you never call io_service::run several times and never call io_service::reset. My assumption here is that you call io_service::run in only one server object, and that you only use one io_service instance.
Also linking to another example I have created where two servers use the same io_service: https://gitlab.com/eidheim/desktop-stream/blob/master/main.cpp
edit: desktop-stream moved to gitlab
I was definitely calling reset and run like in the example from the second post. Perhaps that's the issue.
My bad, sorry for the conflicting explanations. Here is a better example:
#include "client_http.hpp"
#include "server_http.hpp"

using namespace std;

using HttpClient = SimpleWeb::Client<SimpleWeb::HTTP>;
using HttpServer = SimpleWeb::Server<SimpleWeb::HTTP>;

int main() {
  auto io_service = std::make_shared<SimpleWeb::asio::io_service>();

  HttpServer server;
  server.io_service = io_service; // use external io_service
  server.config.port = 8080;
  server.resource["^/$"]["GET"] = [](shared_ptr<HttpServer::Response> response, shared_ptr<HttpServer::Request> /*request*/) {
    response->write("test");
  };
  server.start(); // This function is now non-blocking since an external io_service is used

  HttpClient client("localhost:8080");
  client.io_service = io_service; // use external io_service
  client.request("GET", [](shared_ptr<HttpClient::Response> response, const SimpleWeb::error_code &ec) {
    if(!ec)
      cout << response->content.string() << endl;
    else
      cerr << "error: " << ec.message() << endl;
  });

  // When the io_service is run, the server will first start accepting connections,
  // and then the client's request will happen:
  io_service->run(); // io_service will always have work to do (the server is always
                     // starting async_accept), and thus this function will block
}
edit: cleanup
I updated the example above just now.
It still stalls, not right away, but eventually. Do you think it has to do with calling your request functions from other threads? Do I need to lock the io_service somehow, or is it thread safe?
Posting jobs to the io_service is thread safe, yes.
Also make sure you do not use the client’s synchronous request calls, since they use io_service reset and run internally.
@SethHamilton Did you figure out your stalling issue?
@eidheim no, unfortunately. To get around this I created a worker thread for each client connection and call reset and run. I think the issue is that my program has many threads and they create client connections.
I've been playing with this, it's very nice (thank you).

Here is my testing:

If I make 100 requests (in a loop) and call client.io_service->run() on each call, I lose requests; no error, the callback just never happens. If I make the 100 requests, then call run(), I get all 100 requests (in order, and very quickly).

In my use case I would like to simply hold a client in an object, call it asynchronously when I need it, and have my callback called. I would be doing this from other threads. My understanding is that io_service->run() is something you want to run once per thread. Perhaps I should put these clients each in a class in a thread, and manage feeding the async client via a queue or something to that effect? Or perhaps I should use multiple clients in a pool per thread, use the synchronous functions, and just kill the clients as they age.

Also, I would love some clarification: does the client's (async) request function use multiple connections? I'm guessing not, because they all return sequentially, but I do see a variable called unused_connections which leads me to believe I may be wrong about that.

I like the idea of reusing connections (especially with HTTPS). I want to be able to call a client object from any thread, and have that thread wait on a callback (I was going to signal a condition_variable). So, should I just make new clients as I need them? Am I overcomplicating life?
BTW, I'm switching an open source project from a custom protocol and libuv to Simple-Web-Server. (My project is here: https://github.com/opset/openset)