private-octopus / picoquic

Minimal implementation of the QUIC protocol
MIT License

Avoid doubling memory in case of error #1504

Closed huitema closed 1 year ago

huitema commented 1 year ago

See issue #1499. Copying "lost" stream data into a separate memory structure causes memory allocation to double in case of losses. Avoiding that would help limit the overall memory usage.
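To illustrate the pattern at issue, here is a minimal C sketch (hypothetical structures and names, not picoquic's actual code): the stream data of a lost packet is copied into a separately allocated queue node while the original packet buffer is still retained, so memory per lost packet roughly doubles.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct st_lost_data_node_t {
    struct st_lost_data_node_t* next;
    size_t length;
    uint8_t data[1]; /* allocated to hold `length` bytes */
} lost_data_node_t;

/* Copy the stream frames of a lost packet into a separate queue node.
 * The original ~1600-byte packet buffer is still held for spurious-loss
 * detection, so each lost packet is briefly accounted for twice. */
static lost_data_node_t* queue_lost_data(lost_data_node_t* head,
    const uint8_t* frames, size_t length)
{
    lost_data_node_t* node = (lost_data_node_t*)
        malloc(offsetof(lost_data_node_t, data) + length);

    if (node != NULL) {
        node->length = length;
        memcpy(node->data, frames, length);
        node->next = head; /* push onto the per-connection queue */
        head = node;
    }
    return head;
}
```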

huitema commented 1 year ago

On the other hand, copying lost data into a separate area has a big benefit. The previous implementation would immediately add all the lost data to an outgoing packet, regardless of congestion control. That created peaks of data transmission, and thus more losses. The alternative would be to perform loss detection only when congestion control credits are available, but that means delaying loss detection, which is not good either. Copying the data into a separate area solves both issues, but the implementation increases memory requirements. Possible options:

1) Do not keep the whole packet in the loss confirmation queue. Instead of stashing 1600 bytes per packet, one could probably make do with about 100 bytes (see the sketch after this list).

2) Keep a copy of the original packet in a "data to retransmit" queue, and retransmit from there directly. This requires keeping more metadata, such as a double queue of lost packets: one for detecting spurious losses, the other for preparing data to be resent.
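A minimal sketch of what option 1 could look like, assuming a compact record that keeps only the metadata needed to recognize a spurious loss and to know which stream range the packet carried (all field names are hypothetical):

```c
#include <stdint.h>

typedef struct st_loss_record_t {
    struct st_loss_record_t* next;
    uint64_t packet_number;   /* to match a late acknowledgment */
    uint64_t send_time;       /* for spurious-loss heuristics */
    uint64_t stream_id;       /* stream carried by the packet */
    uint64_t stream_offset;   /* first byte of the lost range */
    uint64_t stream_length;   /* number of stream bytes carried */
    int is_ack_eliciting;     /* whether the packet expected an ACK */
} loss_record_t;
```

A record of this shape is roughly 56 bytes on a 64-bit build, well under the 100-byte target, versus ~1600 bytes for a full packet copy.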

The second solution seems the more attractive, as it removes the duplicate memory allocation. However, it requires some extra work (a sketch follows the list below):

1) Manage a queue of packets that contain data for retransmission. That queue needs to be global per connection.

2) Probably add some index, such as "last stream data frame not fully copied".

3) Manage partial copies of stream data frames when they do not fit in the available MTU.

4) Only recycle a packet once it is fully processed for retransmission and no longer needed for loss detection.

5) Remove packets from the retransmission queue once fully processed.
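Below is a rough sketch of the bookkeeping this implies, under the assumptions above (hypothetical names, simplified frame handling, not a definitive implementation): a lost packet is linked into two per-connection queues at once, retransmission copies can be partial, and the packet is recycled only when both queues are done with it.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct st_lost_packet_t {
    struct st_lost_packet_t* next_retransmit; /* "data to resend" queue (item 1) */
    struct st_lost_packet_t* next_detect;     /* spurious-loss detection queue */
    size_t bytes_copied;  /* index: first stream byte not yet re-sent (item 2) */
    size_t bytes_total;   /* total stream payload held in the packet */
    int on_retransmit_queue;
    int on_detect_queue;
    uint8_t payload[1600]; /* the original packet bytes, kept in place */
} lost_packet_t;

/* Copy as much pending data as fits in the space left in the next outgoing
 * packet; a frame larger than `space` is copied partially and the index
 * advances, so the remainder goes out in a later packet (item 3). Copying
 * raw payload bytes is a simplification: real code would re-encode the
 * stream frame headers for the new offsets. */
static size_t copy_for_retransmit(lost_packet_t* p, uint8_t* out, size_t space)
{
    size_t available = p->bytes_total - p->bytes_copied;
    size_t copied = (available < space) ? available : space;

    memcpy(out, p->payload + p->bytes_copied, copied);
    p->bytes_copied += copied;
    if (p->bytes_copied >= p->bytes_total) {
        p->on_retransmit_queue = 0; /* fully processed, leaves the queue (item 5) */
    }
    return copied;
}

/* Recycle only when the packet is neither pending retransmission nor
 * still needed for spurious-loss detection (item 4). */
static int can_recycle(const lost_packet_t* p)
{
    return !p->on_retransmit_queue && !p->on_detect_queue;
}
```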