When using async_accept(endpoint&, Token&&) with a generic lambda (auto for the second parameter) and building as C++20, the type of the accepted socket is deduced as basic_stream_socket<ip::tcp, ip::tcp::endpoint>, i.e. with ip::tcp::endpoint used as the Executor type, which is clearly wrong.
It works fine when building as C++17, or when the parameter type is spelled out explicitly as ip::tcp::socket.
Minimal example:
#include <boost/asio.hpp>

using namespace boost::asio;

struct my_client {
    ip::tcp::socket socket;
};

int main() {
    io_context ctx;
    ip::tcp::acceptor acceptor(ctx, {{}, 6000});
    ip::tcp::endpoint peer_addr;

    // This compiles with -std=c++17 but fails with -std=c++20.
    // It works with C++20 when the type of `peer` is spelled out as
    // ip::tcp::socket; `auto` deduces
    // basic_stream_socket<ip::tcp, ip::tcp::endpoint> for some reason.
    acceptor.async_accept(peer_addr, [](auto ec, auto peer) {
        auto client = my_client{std::move(peer)};
    });

    ctx.run();
}
Godbolt (with boost 1.81, but I tested locally with asio 1.27 – same issue)