quinn-rs / quinn

Async-friendly QUIC implementation in Rust
Apache License 2.0

feat: use BytesMut for Transmit content #1545

Closed dignifiedquire closed 1 year ago

dignifiedquire commented 1 year ago

This is a potential solution for my current problem. The issue I am running into is that I have to split up the Transmits that are sent out and send them over a channel. But because I only get access to a &[Transmit], I have to copy the content buffer.

The solution here simply switches the Vec<u8> for BytesMut, which I can then easily send over a channel (and store for potentially delayed submission) without copying.

One additional change I would add for convenience is deriving Clone for Transmit, since that is now an actually cheap operation.
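A minimal sketch of the idea, using the standard library's `Arc<[u8]>` as a stand-in for `bytes::BytesMut`/`Bytes` (the cheap-clone property being illustrated is the same; the `Transmit` struct here is a hypothetical simplification, not quinn's actual type):

```rust
use std::sync::Arc;
use std::sync::mpsc::channel;

// Hypothetical, simplified stand-in for a transmit whose payload is a
// shared, reference-counted buffer (Arc<[u8]> here; Bytes in the PR).
#[derive(Clone)]
pub struct Transmit {
    pub contents: Arc<[u8]>,
}

// Cloning only bumps a refcount; the payload bytes are not copied.
pub fn share(t: &Transmit) -> Transmit {
    t.clone()
}

fn main() {
    let (tx, rx) = channel();
    let t = Transmit { contents: Arc::from(&b"packet"[..]) };
    // Sending a clone over the channel does not duplicate the buffer.
    tx.send(share(&t)).unwrap();
    let received = rx.recv().unwrap();
    // Both handles point at the same allocation.
    assert!(Arc::ptr_eq(&t.contents, &received.contents));
}
```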

dignifiedquire commented 1 year ago

error: package regex v1.8.1 cannot be built because it requires rustc 1.60.0 or newer, while the currently active rustc version is 1.59.0

The CI failures seem unrelated.

djc commented 1 year ago

Would you mind adding a commit that bumps the MSRV to 1.60?

dignifiedquire commented 1 year ago

> Would you mind adding a commit that bumps the MSRV to 1.60?

done

dignifiedquire commented 1 year ago

error: package time v0.3.20 cannot be built because it requires rustc 1.63.0 or newer, while the currently active rustc version is 1.60.0

@djc it looks like some more dependencies are raising their MSRV

Ralith commented 1 year ago

> I have to split up the Transmits that are sent out

Can you elaborate on this requirement a bit? quinn-proto is intended to prepare bytes optimally for consumption by the underlying UDP stack.

> One additional change that for convenience I would add, is to derive Clone for Transmit, as now this is an actual cheap operation.

BytesMut is specifically unique, so I don't think this change makes cloning any cheaper.

dignifiedquire commented 1 year ago

> BytesMut is specifically unique, so I don't think this change makes cloning any cheaper.

oh no, you are right, that allocates. Okay, I guess I will have to change this to store Bytes instead to actually solve my issue.
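The distinction being discussed can be illustrated with standard-library types (an analogy only; `bytes::BytesMut::freeze()` is the real API being approximated): a uniquely-owned buffer must copy its bytes to "clone", while a frozen, shared, immutable handle clones by bumping a refcount.

```rust
use std::sync::Arc;

// Analogy: Vec<u8> plays the role of BytesMut (unique and mutable, so a
// clone must duplicate the bytes), while Arc<[u8]> plays the role of
// Bytes (shared and immutable, so a clone is just a refcount bump).
pub fn freeze(unique: Vec<u8>) -> Arc<[u8]> {
    // Consumes the unique buffer; afterwards clones are cheap.
    Arc::from(unique)
}

fn main() {
    let mut buf = vec![1u8, 2, 3]; // "BytesMut": still mutable in place
    buf.push(4);
    let frozen = freeze(buf); // "Bytes": immutable, cheaply shareable
    let alias = frozen.clone();
    assert!(Arc::ptr_eq(&frozen, &alias)); // same allocation, no copy
}
```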

dignifiedquire commented 1 year ago

> Can you elaborate on this requirement a bit?

I have three different transports in the background: one UDP socket for IPv4, one UDP socket for IPv6, and a TCP-based relay. Packets are sent dynamically over one of these three, depending on the best available route.
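A hypothetical sketch of that dispatch (the policy and all names here are illustrative assumptions, not code from this project):

```rust
#[derive(Debug, PartialEq, Eq)]
pub enum Transport {
    UdpV4,
    UdpV6,
    Relay,
}

// Assumed policy: prefer a direct UDP path once hole punching has
// succeeded (favoring IPv6 when available); otherwise fall back to
// the TCP-based relay.
pub fn best_route(hole_punched: bool, have_v6: bool) -> Transport {
    match (hole_punched, have_v6) {
        (true, true) => Transport::UdpV6,
        (true, false) => Transport::UdpV4,
        (false, _) => Transport::Relay,
    }
}

fn main() {
    // Before hole punching succeeds, everything goes via the relay.
    assert_eq!(best_route(false, true), Transport::Relay);
    // Afterwards, the direct UDP path wins.
    assert_eq!(best_route(true, false), Transport::UdpV4);
}
```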

Ralith commented 1 year ago

> okay I guess I will have to change this to store Bytes then

I thought you only needed to split them up, not duplicate them?

> Packets are sent dynamically to one of these three, depending on the best available route.

Why does this require splitting anything?

dignifiedquire commented 1 year ago

> > okay I guess I will have to change this to store Bytes then
>
> I thought you only needed to split them up, not duplicate them?

> > Packets are sent dynamically to one of these three, depending on the best available route.
>
> Why does this require splitting anything?

Sorry, I wasn't clear enough. By "splitting up" I meant that I would potentially need to do different things with each Transmit.

Concretely, the current operations I have to do are

Ralith commented 1 year ago

None of those operations obviously involve splitting or cloning the payload. Can you elaborate?

dignifiedquire commented 1 year ago

The signature of poll_send only gives access to &[Transmit], which results in
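The borrow-side constraint can be sketched like this (with `Vec<u8>` standing in for the pre-PR payload type; the function names are illustrative, not quinn's API):

```rust
// A slice only lends its elements, so keeping a payload past the call
// requires cloning it: a deep copy when the payload is Vec<u8>, but
// only a refcount bump once it is a shared buffer like Bytes.
pub fn enqueue_all(transmits: &[Vec<u8>], queue: &mut Vec<Vec<u8>>) {
    for t in transmits {
        queue.push(t.clone()); // full copy of every datagram's bytes
    }
}

fn main() {
    let batch = vec![vec![1u8, 2], vec![3u8]];
    let mut queue = Vec::new();
    enqueue_all(&batch, &mut queue);
    // The borrowed batch is untouched; the queue holds copies.
    assert_eq!(queue.len(), 2);
    assert_eq!(queue[0], batch[0]);
}
```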

Ralith commented 1 year ago

Ah, I had assumed you were using quinn-proto, not implementing AsyncUdpSocket; this makes more sense then, thanks for clarifying!

Ralith commented 1 year ago

MSRV update is in main now, so a rebase should clean up CI.

dignifiedquire commented 1 year ago

@Ralith any other thoughts on this? Would love to get this in.

Ralith commented 1 year ago

The implementation here looks good. I'm hesitating because I can't help but wonder if your use case would be better addressed by maintaining separate connections and switching them at the application layer, in which case we could avoid this (admittedly slight) increase in API complexity. Running QUIC over TCP in particular is going to be rather inefficient, and it would be perfectly reasonable to have parallel QUIC connections over IPv4/IPv6; this is basically happy eyeballs.

dignifiedquire commented 1 year ago

> I can't help but wonder if your use case would be better addressed by maintaining separate connections and switching them at the application layer,

I have been wondering that as well, but the biggest obstacle I currently see is that I need to be able to send packets in parallel to reduce latency during connection establishment. Basically, I use the relay transport for the first few packets while hole punching is running, and switch over as soon as I can. If I am unsure whether my UDP addresses are any good, I double-send packets over both the relay and UDP until I get some kind of confirmation. I don't quite see how I can achieve the same robustness by switching connections at the application layer.
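An illustrative sketch of that double-send (not the PR's code; the channel-based transports and function name are assumptions), showing why a shared buffer matters here: each extra send is a refcount bump, not a payload copy.

```rust
use std::sync::Arc;
use std::sync::mpsc::Sender;

// While the direct UDP path is unconfirmed, duplicate each datagram
// over both the relay and UDP transports; once confirmed, send only
// on the direct path.
pub fn send_datagram(
    payload: Arc<[u8]>,
    relay: &Sender<Arc<[u8]>>,
    udp: &Sender<Arc<[u8]>>,
    udp_confirmed: bool,
) {
    if udp_confirmed {
        udp.send(payload).ok();
    } else {
        relay.send(payload.clone()).ok(); // cheap clone of the handle
        udp.send(payload).ok();           // keep probing the direct path
    }
}

fn main() {
    let (rtx, rrx) = std::sync::mpsc::channel();
    let (utx, urx) = std::sync::mpsc::channel();
    send_datagram(Arc::from(&b"hello"[..]), &rtx, &utx, false);
    // Unconfirmed path: the datagram went out on both transports.
    assert!(rrx.try_recv().is_ok() && urx.try_recv().is_ok());
}
```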

Ralith commented 1 year ago

I'm guessing "the relay" here means the TCP connection?

It sounds like you have two separate concerns: latency and robustness. Robustness is easy to achieve with application-layer switching: always send data only on a connection that's been successfully established, sorting by priority (presumably favoring QUIC once available).
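That switching scheme could be sketched as follows (the types and the priority scheme are assumptions for illustration, not an existing API):

```rust
// Keep the parallel connections around and always send on the
// highest-priority one that has finished establishing (presumably
// favoring direct QUIC over the relay once it is up).
pub struct Conn {
    pub priority: u8,
    pub established: bool,
}

pub fn pick(conns: &[Conn]) -> Option<&Conn> {
    conns.iter().filter(|c| c.established).max_by_key(|c| c.priority)
}

fn main() {
    let conns = [
        Conn { priority: 10, established: false }, // direct path, still handshaking
        Conn { priority: 1, established: true },   // relay fallback
    ];
    // Until the direct path establishes, traffic stays on the relay.
    assert_eq!(pick(&conns).unwrap().priority, 1);
}
```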

It sounds like you're also invested in avoiding a superfluous round-trip between hole punching success and QUIC connection establishment. Could you use the actual initial QUIC handshake packet for the hole punching, so that success implies connection establishment?

dignifiedquire commented 1 year ago

That might be possible, but currently we are following an existing design that has proven to work for a similar use case (tailscale, to be specific). So while we might be able to optimize toward that at a later stage, we need to make it work like this for the moment.

dignifiedquire commented 1 year ago

done, should be fixed

dignifiedquire commented 1 year ago

okay, MSRV is upset again :( Do you want me to bump to 1.65?

djc commented 1 year ago

I'm working on a fix for the MSRV issue in #1558.

djc commented 1 year ago

Going to merge this, since the MSRV issue will be fixed separately (and FreeBSD CI passes).