skypjack / uvw

Header-only, event based, tiny and easy to use libuv wrapper in modern C++ - now also available as a shared/static library!
MIT License

How to Handle a Broken Pipe (SIGPIPE) in UVW? #291

Closed: juliancnn closed this issue 1 year ago

juliancnn commented 1 year ago

Hello,

I'm currently working on an application using UVW's PipeHandle to communicate via UNIX sockets. I've encountered a situation where the application unexpectedly exits with status code 141 (which corresponds to a broken pipe) when a client abruptly closes the connection.

Here's the scenario: the server is processing a long task (like sleeping for several seconds :laughing: ), and during this time, the client (using netcat) forcefully closes the connection. In this situation, my application does not seem to catch this event with an ErrorEvent but rather exits with code 141.

Could you please provide some guidance on how to handle this scenario? Should UVW's ErrorEvent be able to capture such a situation, or is there another event or method I should be using to prevent the application from terminating in this way?

Steps to reproduce:

  1. Run the server application (provided below).
  2. Connect to the UNIX socket using netcat with nc -U /tmp/zz_onlyCPP.sock.
  3. Send the message wait 10 from netcat to make the server sleep for 10 seconds.
  4. While the server is sleeping, forcefully close the netcat client with ctrl+c.
  5. Observe the server application's termination with exit code 141, indicating a broken pipe.

Code to reproduce:

<details><summary>Toggle me!</summary>

```cpp
#include <chrono>
#include <iostream>
#include <memory>
#include <string>
#include <thread>
#include <unistd.h>  // Unlink
#include <fmt/format.h>
#include <uvw.hpp>

void configureCloseClient(std::shared_ptr<uvw::PipeHandle> client) {
    auto gracefullEnd = [client]() {
        if (!client->closing()) {
            client->close();
            std::cout << "Client closed" << std::endl;
        }
    };

    // On error
    client->on<uvw::ErrorEvent>(
        [gracefullEnd](const uvw::ErrorEvent& error, uvw::PipeHandle& client) {
            std::cout << "Error: " << error.name() << " - " << error.what() << std::endl;
            gracefullEnd();
        });

    // On close
    client->on<uvw::CloseEvent>(
        [gracefullEnd](const uvw::CloseEvent&, uvw::PipeHandle& client) {
            std::cout << "CloseEvent" << std::endl;
            gracefullEnd();
        });

    // On shutdown
    client->on<uvw::ShutdownEvent>(
        [gracefullEnd](const uvw::ShutdownEvent&, uvw::PipeHandle& client) {
            std::cout << "Client shutdown connection" << std::endl;
            gracefullEnd();
        });

    // End event
    client->on<uvw::EndEvent>(
        [gracefullEnd](const uvw::EndEvent&, uvw::PipeHandle& client) {
            std::cout << "Client ended connection" << std::endl;
            gracefullEnd();
        });
}

std::shared_ptr<uvw::PipeHandle> createClient(std::shared_ptr<uvw::Loop> loop) {
    // Create a new client
    auto client = loop->resource<uvw::PipeHandle>();

    // Configure the close events for the client
    configureCloseClient(client);

    // Create 1 protocol handler per client
    client->on<uvw::DataEvent>(
        [client](const uvw::DataEvent& data, uvw::PipeHandle& clientRef) {
            // Add header and attach the loopback data
            auto dataStr = std::string(data.data.get(), data.length);
            std::cout << "Received: " << dataStr << std::endl;
            auto Loopback = fmt::format("Loopback: {}", dataStr);

            // If received data is "wait xxx" then wait xxx seconds
            std::string waitPrefix = "wait ";
            if (dataStr.substr(0, waitPrefix.size()) == waitPrefix) {
                auto waitTime = std::stoi(dataStr.substr(waitPrefix.size()));
                std::cout << "Waiting " << waitTime << " seconds" << std::endl;
                std::this_thread::sleep_for(std::chrono::seconds(waitTime));
                std::cout << "Waited " << waitTime << " seconds" << std::endl;
            }

            // Write the data back to the client
            client->write(Loopback.data(), Loopback.size());
            std::cout << "Sent: " << Loopback << std::endl;

            // If the received data is "exit" then close the client
            if (dataStr == "exit\n") {
                std::cout << "Client requested exit" << std::endl;
                client->close();
            }
        });

    return client;
}

int main() {
    constexpr auto UNIX_SOCKET_PATH = "/tmp/zz_onlyCPP.sock";

    // Create the loop
    auto loop = uvw::Loop::getDefault();

    // Handle event loop errors
    loop->on<uvw::ErrorEvent>([](const uvw::ErrorEvent& e, uvw::Loop&) {
        std::cout << "Error: " << e.name() << " - " << e.what() << std::endl;
    });

    // Create the socket server
    auto unixServer = loop->resource<uvw::PipeHandle>();

    // Bind
    unixServer->bind(UNIX_SOCKET_PATH);

    // Server in case of error
    unixServer->on<uvw::ErrorEvent>([](const uvw::ErrorEvent& event, uvw::PipeHandle& handle) {
        std::cout << "Error: " << event.name() << " - " << event.what() << std::endl;
    });

    // Server in case of close
    unixServer->on<uvw::CloseEvent>([](const uvw::CloseEvent&, uvw::PipeHandle& handle) {
        std::cout << "Closed" << std::endl;
    });

    // Server in case of connection
    unixServer->on<uvw::ListenEvent>(
        [loop](const uvw::ListenEvent&, uvw::PipeHandle& handle) {
            std::cout << "Listening" << std::endl;
            auto client = createClient(loop);
            handle.accept(*client);
            std::cout << "Client accepted" << std::endl;
            client->read();
        });

    // Listen for incoming connections
    unixServer->listen();

    // Run the loop
    loop->run();

    return 0;
}
```

</details>

Environment:


skypjack commented 1 year ago

Well, first of all, is it possible in libuv? If yes, how does it work? uvw adds nothing on top of libuv, it only wraps the underlying library. Therefore, the first thing to understand is if it works and how it works with libuv. Then we can map your findings to uvw. 🙂

juliancnn commented 1 year ago

Thank you for your response and guidance. Following your suggestion, I've investigated this further and I believe I've identified the solution based on the capabilities of libuv.

The issue seems to stem from a signal known as SIGPIPE, which is sent to a process when it attempts to write to a pipe or socket whose read end has been closed. When this signal is not handled, the process is terminated and the shell reports exit status 141 (128 plus SIGPIPE's signal number, 13), indicating a broken pipe. This is exactly the situation I am experiencing when the client abruptly disconnects.
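To illustrate the mechanism, here is a minimal sketch using plain POSIX calls only (not part of the reproduction code above, just an assumption-free way to see the default SIGPIPE behaviour without uvw or libuv in the picture):

```cpp
// Minimal POSIX sketch of the SIGPIPE mechanism, independent of uvw/libuv.
#include <csignal>
#include <unistd.h>

int main() {
    int fds[2];
    if (pipe(fds) != 0) {
        return 1;
    }
    close(fds[0]);  // close the read end, like the client disconnecting

    // Uncommenting the next line makes write() fail with EPIPE instead of
    // the process being killed by SIGPIPE:
    // signal(SIGPIPE, SIG_IGN);

    char byte = 'x';
    write(fds[1], &byte, 1);  // raises SIGPIPE -> process exits with status 141

    return 0;
}
```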

After doing some research, I came across these relevant resources:

Based on these sources, the solution is to ignore the SIGPIPE signal, allowing libuv to handle the situation and prevent the application from terminating unexpectedly. This can be done using the signal() function like so:

```cpp
signal(SIGPIPE, SIG_IGN);
```

By adding this line of code, any SIGPIPE signals that occur are simply ignored, and libuv is allowed to manage the situation as it was designed to. This appears to have resolved the issue on my end.
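For context, here is a sketch of where the call could go in the reproduction code above. The placement is my own choice rather than anything mandated by uvw; any point before the first write to a client handle works, and the start of `main()` is simply the least error-prone option:

```cpp
// Sketch: ignoring SIGPIPE at the start of main() in the example server.
#include <csignal>  // signal, SIGPIPE, SIG_IGN
#include <uvw.hpp>

int main() {
    // Ignore SIGPIPE so that writing to a disconnected client surfaces as an
    // EPIPE error (observable through uvw's ErrorEvent) instead of killing
    // the process with exit status 141.
    signal(SIGPIPE, SIG_IGN);

    auto loop = uvw::Loop::getDefault();

    // ... set up the PipeHandle server exactly as in the reproduction code ...

    loop->run();
    return 0;
}
```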

I hope this solution can be of help to other developers who might encounter a similar issue. Let me know if there's anything else that should be addressed or explained further.