quinn-rs / quinn

Async-friendly QUIC implementation in Rust
Apache License 2.0

Get current connection blocking status #1660

Closed szguoxz closed 1 year ago

szguoxz commented 1 year ago

Is there any way to get the current connection's blocking status?

  1. How many open_uni calls are waiting?
  2. What is the current budget available for open_uni?
  3. How many uni streams are waiting to be finished?

Something else: can I use a different algorithm to generate stream IDs?

Thanks.

Ralith commented 1 year ago

Most of that information is not currently exposed. quinn_proto::Streams::send_streams and remote_open_streams may be of interest, though they are only available at the quinn_proto layer, not in quinn. What is your use case for these?

Can I use a different algorithm to generate stream IDs?

No. Stream IDs are a QUIC implementation detail; they are always opened in order.
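Editorial aside, not from the thread: RFC 9000 §2.1 builds a stream ID from a sequence number plus two low bits encoding initiator and directionality, and the implementation assigns the sequence numbers monotonically, so there is no room for a custom generation algorithm. A minimal sketch of the layout:

```rust
/// Illustrative only; this mirrors the RFC 9000 §2.1 layout, not a Quinn API.
/// Bit 0: initiator (0 = client, 1 = server); bit 1: directionality
/// (0 = bidirectional, 1 = unidirectional). The upper bits are a sequence
/// number that the endpoint increments for each new stream of that type.
fn stream_id(seq: u64, server_initiated: bool, unidirectional: bool) -> u64 {
    (seq << 2) | ((unidirectional as u64) << 1) | (server_initiated as u64)
}

fn main() {
    // Client-initiated unidirectional streams get IDs 2, 6, 10, ...
    assert_eq!(stream_id(0, false, true), 2);
    assert_eq!(stream_id(1, false, true), 6);
}
```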

szguoxz commented 1 year ago

I am building a VPN that can use two or more links; i.e., A and B can connect directly, but I could also have a server C somewhere else acting as a UDP proxy that forwards UDP packets from A to B. So from A I can connect to two servers, B and C, but both paths end up at B. I then want to balance the traffic between A -> B and A -> C -> B.

I know this is better done at a lower level, but I have been working with quinn and have never worked with quinn_proto or quinn_udp. Since there's not much documentation, I would rather stay with quinn.

The information I'm asking about is high-level enough that I think it wouldn't hurt to expose it at the quinn layer? The information in [quinn_proto::Streams::send_streams] is indeed useful. I will see if I can access it somehow, or maybe customize quinn a little bit. :-) But I'd really rather not.

Ralith commented 1 year ago

I know this is better done at a lower level, but I have been working with quinn and have never worked with quinn_proto or quinn_udp

We generally encourage folks to stay at the quinn level if at all possible.

I think it wouldn't hurt to expose it at the quinn layer?

Adding APIs imposes some maintenance cost and adds noise to documentation, so we prefer not to unless strongly motivated. If you're trying to balance traffic between two separate QUIC connections, you could keep track of how many resources you're using in each with application-layer counters, without requiring any changes to Quinn.
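For concreteness, a minimal sketch of such application-layer counters (editorial addition: `PathCounters` and the least-loaded policy are hypothetical, and quinn 0.10's async `finish()` is assumed):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Hypothetical per-connection bookkeeping; nothing here touches Quinn internals.
#[derive(Default)]
struct PathCounters {
    open_uni_streams: AtomicUsize,
}

/// Send one payload on whichever path currently has fewer uni streams in flight.
async fn send_on_least_loaded(
    direct: &quinn::Connection,  // A -> B
    relayed: &quinn::Connection, // A -> C -> B
    counters: [&Arc<PathCounters>; 2],
    payload: &[u8],
) -> anyhow::Result<()> {
    let (conn, ctr) = if counters[0].open_uni_streams.load(Ordering::Relaxed)
        <= counters[1].open_uni_streams.load(Ordering::Relaxed)
    {
        (direct, counters[0])
    } else {
        (relayed, counters[1])
    };
    ctr.open_uni_streams.fetch_add(1, Ordering::Relaxed);
    let mut stream = conn.open_uni().await?;
    stream.write_all(payload).await?;
    stream.finish().await?; // resolves once the peer has acknowledged the data
    ctr.open_uni_streams.fetch_sub(1, Ordering::Relaxed);
    Ok(())
}
```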

szguoxz commented 1 year ago

You are right, I can track the resources I use; that's what I am doing. I am tracking how many uni streams are open. :-) There's no efficient way for me to know whether a stream succeeded unless I call finish, and then I have to wait. It seems connection.stats().path.cwnd could be an indicator too, but I'm not quite sure what exactly it means. All the information in the path stats seems important, but the docs give each field just a one-liner, so it's hard for me to see what it is. :-)
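A possible workaround for the finish-and-wait problem (editorial sketch, continuing the hypothetical counter from above): spawn the wait so the caller never blocks, and decrement the in-flight count only once the stream has actually completed:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Finish the stream in the background; the counter drops only once the
/// peer has acknowledged all data (or the stream failed). Assumes a tokio
/// runtime and quinn 0.10's async finish().
fn finish_in_background(mut stream: quinn::SendStream, open_uni_streams: Arc<AtomicUsize>) {
    tokio::spawn(async move {
        if let Err(e) = stream.finish().await {
            eprintln!("stream did not complete cleanly: {e}");
        }
        open_uni_streams.fetch_sub(1, Ordering::Relaxed);
    });
}
```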

djc commented 1 year ago

If you have specific questions about our documentation we'll be happy to consider expanding it!

szguoxz commented 1 year ago

Great! Here are a couple of questions regarding PathStats:

  1. rtt: Does it include the time a send stream spends waiting because of flow control, i.e. even before the packet starts sending? I guess not, but it would be great to make it clear whether this is the traditional, classical RTT.
  2. cwnd: What does this mean? Is it the setting on TransportConfig, or the current window size? I guess it's the latter, but I'd still have to dig into the code to be sure.
  3. Lost bytes, lost packets, sent packets, etc.: are these totals over the life span of the connection? The reason I ask is that cwnd does not seem like statistical info, if I am right about the definition of cwnd. If it's not really a statistic, then what are all the following numbers?

All in all, cwnd causes the confusion and makes me start to worry about all the other numbers.

```rust
pub struct PathStats {
    /// Current best estimate of this connection's latency (round-trip-time)
    pub rtt: Duration,
    /// Current congestion window of the connection
    pub cwnd: u64,
    /// Congestion events on the connection
    pub congestion_events: u64,
    /// The amount of packets lost on this path
    pub lost_packets: u64,
    /// The amount of bytes lost on this path
    pub lost_bytes: u64,
    /// The amount of packets sent on this path
    pub sent_packets: u64,
    /// The amount of PLPMTUD probe packets sent on this path (also counted by sent_packets)
    pub sent_plpmtud_probes: u64,
    /// The amount of PLPMTUD probe packets lost on this path (ignored by lost_packets and
    /// lost_bytes)
    pub lost_plpmtud_probes: u64,
    /// The number of times a black hole was detected in the path
    pub black_holes_detected: u64,
}
```
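Editorial note on reading these: rtt is measured from packet transmission to acknowledgement, so it does not include time spent blocked on flow control; the packet and byte counters are lifetime totals for the connection, while rtt and cwnd are instantaneous values maintained by loss recovery and congestion control. A sketch of sampling them via quinn's Connection::stats() to derive a per-interval loss rate:

```rust
use quinn::{Connection, ConnectionStats};

/// The counters are cumulative, so the difference between two snapshots
/// taken some interval apart gives the rate for that interval.
fn interval_loss_rate(prev: &ConnectionStats, curr: &ConnectionStats) -> f64 {
    let sent = curr.path.sent_packets.saturating_sub(prev.path.sent_packets);
    let lost = curr.path.lost_packets.saturating_sub(prev.path.lost_packets);
    if sent == 0 { 0.0 } else { lost as f64 / sent as f64 }
}

fn log_path(conn: &Connection, prev: &ConnectionStats) -> ConnectionStats {
    let curr = conn.stats();
    println!(
        "rtt={:?} cwnd={}B interval_loss={:.2}%",
        curr.path.rtt,
        curr.path.cwnd, // current congestion window, not a TransportConfig setting
        100.0 * interval_loss_rate(prev, &curr),
    );
    curr // keep this as `prev` for the next sample
}
```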