This fixes a regression in how we handle large messages.
The correct behavior is to multiplex a large message's frames among
smaller requests so that a large request doesn't block all other
interaction until it has finished writing. Instead, we were queuing all
of its frames back to back. That was also consuming a lot of memory,
because every fragment of a large message was held in memory until all
of them had been written. With this change, we don't produce the next
fragment until the current fragment has been serialized and written to
the wire (`message_factory.fragment` is a generator).
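
For illustration, here is a minimal sketch of the lazy-fragmentation idea, assuming a generator that yields one fragment at a time and an async connection whose writes yield control to the event loop. The names (`fragment`, `FakeConnection`, `send`) and the frame size are illustrative assumptions, not the project's actual API.

```python
import asyncio


def fragment(message: bytes, max_size: int):
    """Yield fragments lazily so only the current chunk is held in memory."""
    for offset in range(0, len(message), max_size):
        yield message[offset:offset + max_size]


class FakeConnection:
    async def write(self, frame: bytes) -> None:
        # Awaiting here yields control to the event loop, letting frames
        # from smaller requests interleave with this large message.
        await asyncio.sleep(0)


async def send(conn: FakeConnection, message: bytes, max_size: int = 64 * 1024) -> None:
    # The next fragment is only produced after the current one is written.
    for frame in fragment(message, max_size):
        await conn.write(frame)


asyncio.run(send(FakeConnection(), b"x" * 1_000_000))
```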
This also allows us to revert the revert of 0.30.2, since this change
takes care of that memory leak as well. So we should see fewer obscure
`TimeoutError` exceptions.
CC @blampe @willhug