Open · davidlar opened this issue 1 month ago
If request streaming is enabled, it only takes one large request to kill the server...
/bounty $250
| Attempt | Started (GMT+0) | Solution |
|---|---|---|
| 🟢 @davidlar | Nov 9, 2024, 8:26:35 AM | #3174 |
/attempt #3173
💡 @davidlar submitted a pull request that claims the bounty. You can visit your bounty board to reward.
**Describe the bug**
We have a service that proxies large streams to different backends. It is compiled with GraalVM native-image and runs in Docker with a fairly small heap (~80 MB). It worked fine for a long time, but when we updated from zio-http 3.0.0-RC9 to 3.0.0 it started to OOM almost instantly. I did some investigation, and the problem is the unbounded queue that was introduced in AsyncBody.asStream: it completely disables back pressure for incoming streams. If the producer is faster than the consumer, the data is buffered in the unbounded queue, which can lead to OOM.
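To make the failure mode concrete, here is a small sketch in plain ZIO (not the zio-http internals; all names here are illustrative) of a fast producer feeding a slow consumer through a queue, which is roughly the shape described above for AsyncBody.asStream handing chunks from the Netty callback to a ZStream:

```scala
import zio._
import zio.stream._

object BackpressureSketch extends ZIOAppDefault {

  // Fast producer: offers 64 KiB chunks as quickly as possible,
  // as Netty does when the peer pushes a large body.
  def produce(queue: Queue[Chunk[Byte]]): UIO[Nothing] =
    queue.offer(Chunk.fill(64 * 1024)(0: Byte)).forever

  // Slow consumer: pretends each chunk takes 10 ms to forward to a backend.
  def consume(queue: Queue[Chunk[Byte]]): UIO[Unit] =
    ZStream.fromQueue(queue).mapZIO(_ => ZIO.sleep(10.millis)).runDrain

  val run =
    for {
      // With Queue.unbounded, `offer` never suspends, so chunks pile up on the
      // heap as fast as the producer can create them:
      //   queue <- Queue.unbounded[Chunk[Byte]]
      // With a bounded queue, `offer` suspends once the queue is full, slowing
      // the producer down to the consumer's pace (back pressure):
      queue <- Queue.bounded[Chunk[Byte]](16)
      prod  <- produce(queue).fork
      _     <- consume(queue).race(ZIO.sleep(3.seconds))
      _     <- prod.interrupt
    } yield ()
}
```

With the unbounded variant the heap has to absorb whatever the peer pushes; with the bounded variant the producer fiber is suspended once 16 chunks are in flight, which is the behaviour we rely on when proxying bodies that are much larger than the heap.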
**To Reproduce**
Steps to reproduce the behaviour:
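Roughly: enable request streaming, give the route a handler that consumes the body more slowly than the client sends it, run with a small heap (for example -Xmx80m), and POST a body that is much larger than the heap. A minimal sketch of such a server, assuming the zio-http 3.0.0 `Routes`/`handler` DSL and a `Server.Config` option for request streaming (exact option names may differ between versions):

```scala
import zio._
import zio.http._

object SlowConsumerServer extends ZIOAppDefault {

  // Streams the request body but consumes it more slowly than a fast client
  // sends it, so anything buffered between Netty and the stream shows up as
  // heap growth.
  val routes: Routes[Any, Response] =
    Routes(
      Method.POST / "upload" -> handler { (req: Request) =>
        req.body.asStream.chunks
          .mapZIO(_ => ZIO.sleep(10.millis)) // pretend the downstream backend is slow
          .runDrain
          .as(Response.ok)
          .orDie
      }
    )

  val run =
    Server
      .serve(routes)
      .provide(
        // Assumes Server.Config exposes an enableRequestStreaming option.
        ZLayer.succeed(Server.Config.default.enableRequestStreaming),
        Server.live
      )
}
```

If the chunks coming off the socket are buffered in an unbounded queue before the handler's stream consumes them, a single large upload against this setup should be enough to exhaust the heap.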
**Expected behaviour**
It should work like it did before (but without blocking Netty, of course).