I think as things stand right now it's not possible, at least not easily. DTLS will invoke tls_emit_data once per handshake message as the messages are produced, and I'm assuming your implementation of that callback (reasonably!) just sends a packet for each call, resulting in the behavior you see in the capture.
There are different approaches we could take here.
One would be to have some additional callback which is called when we are done sending a flight, and we know that we must receive some data from the peer to make any forward progress. The application could then buffer data until it has seen this callback. But this would be troublesome to use in practice, and push work onto the application.
The other approach would be to internally cork the handshake data in a flight and then call tls_emit_data in one go once the flight is complete (potentially splitting it across a few calls depending on MTU settings). This would be a lot nicer for users since it just works. It would also benefit TLS, since we'd similarly send the handshake flight in a single TCP packet.
Thanks for your answer. I don't understand all the terms. Yes, I am sending one packet for every call of tls_emit_data.
Could I just concatenate them? That would actually be trivial in my code base: since I always use the same server, the transaction always follows the same sequence.
Sorry about the terminology. If it helps, "flight" is a term from the DTLS spec referring to a sequence of handshake messages that are naturally grouped, in that until one party has sent all of them, the peer is just waiting for them to arrive. (Here, Certificate through the Finished message that follows the Change Cipher Spec.)
If "cork" is the issue that's a reference to TCP_CORK
(https://baus.net/on-tcp_cork/) which basically stops any packets written from actually going onto the network until the socket is "uncorked". A similar notion is often applied in DTLS for example https://man7.org/linux/man-pages/man3/gnutls_record_cork.3.html
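Purely to illustrate the term (this snippet is not from the thread, and the helper name is made up), corking a TCP socket on Linux looks roughly like this:

```cpp
// Linux-specific illustration of TCP_CORK; unrelated to Botan's API.
// While corked, the kernel holds back and coalesces written data instead of
// sending a segment per write; uncorking flushes whatever has accumulated.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void set_cork(int tcp_fd, bool corked) {
   int flag = corked ? 1 : 0;
   setsockopt(tcp_fd, IPPROTO_TCP, TCP_CORK, &flag, sizeof(flag));
}

// Usage: set_cork(fd, true); write(fd, ...); write(fd, ...); set_cork(fd, false);
```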
Yes, it's fine to just concatenate the handshake messages. The main thing is determining when you must flush (since otherwise the protocol state machine will deadlock). You can probably manage this using the tls_inspect_handshake_msg callback: watch for the message with type Handshake_Message::HandshakeCCS and then flush on the message following it (which will be the encrypted Finished message). This is not ideal, but I think it should work.
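For later readers, here is a minimal sketch of the buffering idea (not code from this issue). The class name, the send_datagram hook, and the flush policy are invented; callback signatures follow Botan 3.x, where tls_emit_data takes a std::span (Botan 2.x uses a pointer/length pair and additionally requires overriding tls_session_established). Instead of the message-inspection trigger described above, this sketch leaves the flush decision to the application, which avoids relying on the exact ordering of inspect vs. emit callbacks:

```cpp
#include <botan/tls_callbacks.h>

#include <cstdint>
#include <functional>
#include <span>
#include <utility>
#include <vector>

class CorkingCallbacks : public Botan::TLS::Callbacks {
public:
   // send_datagram: whatever the application uses to put one UDP packet on the wire
   explicit CorkingCallbacks(std::function<void(std::span<const uint8_t>)> send_datagram)
      : m_send(std::move(send_datagram)) {}

   void tls_emit_data(std::span<const uint8_t> record) override {
      // Concatenate DTLS records instead of sending one datagram per call
      m_buffer.insert(m_buffer.end(), record.begin(), record.end());
   }

   // Push everything buffered so far onto the wire as a single datagram.
   // Call this once a flight is known to be complete, e.g. after the
   // TLS::Client constructor returns and after every received_data() call
   // into the channel returns.
   void flush() {
      if(!m_buffer.empty()) {
         m_send(m_buffer);
         m_buffer.clear();
      }
   }

   // Remaining mandatory callbacks, reduced to no-op stubs for the sketch
   void tls_record_received(uint64_t /*seq_no*/, std::span<const uint8_t> /*plaintext*/) override {}
   void tls_alert(Botan::TLS::Alert /*alert*/) override {}

private:
   std::function<void(std::span<const uint8_t>)> m_send;
   std::vector<uint8_t> m_buffer;
};
```

Flushing after each call that drives the channel should give the same grouping as the CCS/Finished heuristic, since all the records of a flight are emitted synchronously while a single incoming datagram is being processed.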
Thanks for explaining everything.
Works perfectly! (no surprise to you, I'm sure)
I am just counting the packets, concatenating 3 to 5, and sending the result.
Btw, the reason for coming to this issue was https://github.com/wolfSSL/wolfssl/issues/7512:
WolfSSL wouldn't handle some handshake packet reordering. I don't know if Botan has the same bug; it is a bad corner case.
My first workaround was just to add enough delay between those packets to make it almost certain they arrive in order. But by concatenating them the ordering is guaranteed, since everything becomes a single packet!
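As a variation on the sketch above (again illustrative only, not the code from this issue), the counting approach can live entirely inside tls_emit_data, assuming an added m_emit_count member; the record positions are placeholders to read off your own capture:

```cpp
// Counting variant: with a fixed, known peer the handshake always produces the
// same sequence of tls_emit_data calls, so the flight can be identified purely
// by position. FIRST/LAST below are illustrative placeholders.
void tls_emit_data(std::span<const uint8_t> record) override {
   static constexpr std::size_t FIRST = 3;   // first record of the flight (placeholder)
   static constexpr std::size_t LAST = 5;    // last record of the flight (placeholder)

   ++m_emit_count;
   if(m_emit_count >= FIRST && m_emit_count <= LAST) {
      m_buffer.insert(m_buffer.end(), record.begin(), record.end());
      if(m_emit_count == LAST) {
         flush();               // the whole flight leaves as a single datagram
      }
   } else {
      m_send(record);           // everything else goes out immediately
   }
}
```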
For the record: The TLS 1.3 implementation does combine multiple messages that don't change the connection's overall state machine into single records:
https://github.com/randombit/botan/blob/master/src/lib/tls/tls13/tls_channel_impl_13.h#L26-L34
I'm not sure we can easily apply this approach to the TLS 1.2 implementation, let alone DTLS.
The following is a DTLS 1.2 handshake (TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, with mutual certificate authentication) between a Botan client and a WolfSSL server.
The client is shown in orange.
By using WolfSSL's wolfSSL_set_group_messages I was able to reduce the number of packets sent by the server from 7 to 3.
Is there any chance to do the same with Botan? In this Wireshark capture there are 5 consecutive client packets that together would fit in the MTU.