jbreams opened 6 years ago
I am quite sure this is a missing feature.
Having a timeout in a synchronous case is a must for working with UDP.
The default example for the synchronous UDP daytime server can block indefinitely if a datagram never arrives. Since UDP is inherently unreliable, this is not a "rare unexpected exotic obscurity"; it's one of the reasons why UDP might be used in the first place.
This problem is not present for TCP, since TCP has built-in failure mechanisms for when a "connection" breaks. But a UDP socket doesn't have to be "connected" even in the limited UDP sense.
I would like to ask that a timeout argument be added to the "receive_from" procedure.
The same effect can be achieved using:
{
  // Run the io_context on a background thread so the async operation
  // can make progress while we block on the future below.
  std::thread t([&]() { io_context.run(); });
  t.detach();

  // Start the receive; use_future gives us a std::future for the result.
  auto recv_length = socket.async_receive_from(
      boost::asio::buffer(recv_buf), sender_endpoint, 0,
      boost::asio::use_future);

  // Wait up to five seconds for a datagram to arrive.
  if (recv_length.wait_for(std::chrono::seconds(5)) != std::future_status::timeout) {
    std::cout.write(recv_buf.data(), recv_length.get());
  }
}
However, this requires switching from "receive_from" to "async_receive_from" and spawning a thread, which is massive overkill for a simple synchronous application that is happy to lose a packet that might be outdated anyway.
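For reference, here is a self-contained sketch of this future-based workaround (the port number, buffer size, and five-second timeout are placeholders; error handling is kept minimal):

#include <array>
#include <boost/asio.hpp>
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
  boost::asio::io_context io_context;
  boost::asio::ip::udp::socket socket(
      io_context,
      boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 13000));

  std::array<char, 1024> recv_buf;
  boost::asio::ip::udp::endpoint sender_endpoint;

  // Start the receive before running the io_context so run() has work.
  std::future<std::size_t> recv_length = socket.async_receive_from(
      boost::asio::buffer(recv_buf), sender_endpoint,
      boost::asio::use_future);

  std::thread t([&]() { io_context.run(); });

  if (recv_length.wait_for(std::chrono::seconds(5)) == std::future_status::timeout)
  {
    // No datagram arrived in time: close the socket on the io_context
    // thread to cancel the pending receive and let run() return.
    boost::asio::post(io_context, [&]() {
      boost::system::error_code ec;
      socket.close(ec);
    });
  }
  else
  {
    std::cout.write(recv_buf.data(), recv_length.get());
  }

  t.join();
}

Joining instead of detaching avoids leaving a thread blocked in run() after main() exits.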
Instead of having something like
for (;;)
{
  // Try to complete the operation without blocking.
  signed_size_type bytes = socket_ops::recv(s, bufs, count, flags, ec);

  // Check if operation succeeded.
  if (bytes > 0)
    return bytes;

  // Check for EOF.
  if ((state & stream_oriented) && bytes == 0)
  {
    ec = asio::error::eof;
    return 0;
  }

  // Operation failed.
  if ((state & user_set_non_blocking)
      || (ec != asio::error::would_block
        && ec != asio::error::try_again))
    return 0;

  // Wait for socket to become ready.
  if (socket_ops::poll_read(s, 0, ec) < 0)
    return 0;
}
have something like
setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, (struct timeval*)&tv, sizeof(struct timeval));

auto starttime = now();
while ((now() - starttime) < timeout)
{
  // Try to complete the operation without blocking.
  signed_size_type bytes = socket_ops::recv(s, bufs, count, flags, ec);

  // Check if operation succeeded.
  if (bytes > 0)
    return bytes;

  // Check for EOF.
  if ((state & stream_oriented) && bytes == 0)
  {
    ec = asio::error::eof;
    return 0;
  }

  // Operation failed.
  if ((state & user_set_non_blocking)
      || (ec != asio::error::would_block
        && ec != asio::error::try_again))
    return 0;

  // Wait for socket to become ready.
  if (socket_ops::poll_read(s, 0, ec) < 0)
    return 0;
}
Setting the socket option would guarantee that recv returns within the timeout interval, so the loop re-checks its elapsed-time condition at least once.
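For context, this is the behavior at the plain BSD-sockets layer that the proposal relies on: with SO_RCVTIMEO set, a blocking recvfrom returns -1 with errno set to EAGAIN/EWOULDBLOCK once the timeout expires. A minimal POSIX sketch (the port is a placeholder):

#include <arpa/inet.h>
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main()
{
  int s = socket(AF_INET, SOCK_DGRAM, 0);

  sockaddr_in addr;
  std::memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(13000);
  bind(s, (sockaddr*)&addr, sizeof(addr));

  // Ask the kernel to interrupt a blocking receive after 5 seconds.
  timeval tv = { 5, 0 };
  setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

  char buf[1024];
  ssize_t n = recvfrom(s, buf, sizeof(buf), 0, nullptr, nullptr);
  if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
    std::printf("timed out: no datagram within 5 seconds\n");
  else if (n >= 0)
    std::printf("received %zd bytes\n", n);

  close(s);
}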
I'm trying to figure out the intended behavior of synchronous send/recv with the SO_SNDTIMEO/SO_RCVTIMEO socket options set.

When you set a send or receive timeout socket option on a socket you intend to use synchronously, the recvmsg and sendmsg calls time out, but sync_recv then immediately goes into poll, unless the socket has been explicitly set to non-blocking mode. The first loop quoted above, from asio/include/asio/detail/impl/socket_ops.ipp, shows what I mean.

Was the intention here that users wouldn't want to know about would_block/try_again errors unless they had specifically requested non-blocking mode? That doesn't make sense to me, because in blocking mode would_block/try_again indicate that the socket operation timed out, and they should only occur if the user specifically set a timeout. Is this supposed to differentiate between internal_non_blocking and user_set_non_blocking?
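To make the scenario concrete, here is roughly what the user-side code looks like (a sketch; the port is a placeholder). The underlying recvmsg does time out after five seconds with EAGAIN, but the sync_recv loop quoted earlier treats that as "not ready yet" and falls back into poll_read, so the synchronous call can still block forever:

#include <boost/asio.hpp>
#include <sys/socket.h>
#include <sys/time.h>

int main()
{
  boost::asio::io_context io_context;
  boost::asio::ip::udp::socket socket(
      io_context,
      boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 13000));

  // Install a 5-second receive timeout directly on the native socket.
  timeval tv = { 5, 0 };
  setsockopt(socket.native_handle(), SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

  char buf[1024];
  boost::asio::ip::udp::endpoint sender;

  // Expected: returns (or fails) after ~5 seconds.
  // Actual: recvmsg times out, sync_recv swallows EAGAIN and re-enters
  // poll_read, so this can hang indefinitely if no datagram arrives.
  std::size_t n = socket.receive_from(boost::asio::buffer(buf), sender);
  (void)n;
}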
Would the sync_recv loop above work better if the check were framed differently? Put another way: when do we actually want to fall back on poll_read() to wait for input?
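One possible shape for an answer (a sketch of the logic only, not actual Asio code): skip the poll fallback whenever the user has installed a receive timeout, so would_block/try_again propagates to the caller as a timeout. A hypothetical helper:

#include <sys/socket.h>
#include <sys/time.h>

// Hypothetical: returns true if the user has set SO_RCVTIMEO on this
// socket. A sync receive loop could consult this and surface
// would_block/try_again instead of falling back into poll_read.
bool has_user_recv_timeout(int s)
{
  timeval tv = { 0, 0 };
  socklen_t len = sizeof(tv);
  if (getsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, &len) != 0)
    return false; // Option not readable: keep the existing behavior.
  return tv.tv_sec != 0 || tv.tv_usec != 0;
}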