onepiers opened this issue 3 years ago
There have been some improvements in reactor-netty and spring cloud. Is it possible to try with the latest versions?
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
I also encountered the problem when the request body was larger than 5 MB.
What versions are you using?
Spring Boot version: 2.3.5.RELEASE, Spring Cloud version: Hoxton.SR10, Spring Cloud Gateway version: 2.2.7.RELEASE
Test demo:
https://github.com/Fangfeikun/test-gateway.git https://github.com/Fangfeikun/test-web.git
The request stopped and the memory was not released.
Has anyone solved this problem? I also encountered it. Version info: Spring Boot 2.3.12.RELEASE, Spring Cloud Hoxton.SR11, Spring Cloud Gateway 2.2.8.RELEASE.
When I run the program from IDEA in my local environment, everything is fine, but when I run it on the server, this problem shows up.
stack:
Is there any solution to this problem? I also encountered it.
@spencergibb has anyone solved this problem?
I also encountered this problem. What should I do to solve it? Spring Boot version 2.2.10.RELEASE, Spring Cloud Gateway version 2.2.6.RELEASE. @spencergibb
I faced a similar issue. When can we expect a fix, please?
I'm facing this issue even on the current version: Spring Boot 3.2.4, spring-cloud-starter-gateway 4.1.2.
Any idea how to deal with this?
Reproducible with: org.springframework.boot 3.1.8, springCloudVersion 2022.0.4.
A service configured with -XX:MaxRAM=1200m and -XX:MaxRAMPercentage set to 40, 50, or 60, combined with any of: no -XX:MaxDirectMemorySize, -XX:MaxDirectMemorySize=64m, -XX:MaxDirectMemorySize=256m, or -XX:MaxDirectMemorySize=512m,
will fail when:
uploading (routing to a downstream service) a 50 MB file through the gateway, with `java.lang.OutOfMemoryError: Cannot reserve X bytes of direct buffer memory (allocated: Y, limit: Z)`; it always tries to use more than MaxDirectMemorySize.
It is reproducible on a freshly started service that has not routed any uploads before, so it does not look like a leak. The gateway simply tries to use several times more direct memory than the uploaded file size.
Everything works fine when I turn off this default Retry filter:

```yaml
spring:
  cloud:
    gateway:
      default-filters:
        - name: Retry
          args:
            retries: 3
            statuses: BAD_GATEWAY, SERVICE_UNAVAILABLE, GATEWAY_TIMEOUT
            methods: GET
            backoff:
              firstBackoff: X
              maxBackoff: Y
              factor: 2
              basedOnPreviousValue: true
```
Describe the bug
When using the gateway in combination with a backend application that accepts large multipart file uploads (tested with 250 MB), we encountered direct-memory overflow. The Retry filter is configured to retry only GET requests, but the body-caching mechanism intercepts the complete route and therefore also caches the multipart body in the buffers. After completion the buffers are not released.
For the moment we have worked around this by defining a dedicated GET route before all other HTTP methods.
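The workaround above can be sketched in route configuration (a sketch, not the reporter's exact setup: the route ids, downstream URI, and path predicate are placeholder assumptions). Declaring a GET-only route that carries the Retry filter ahead of a catch-all route keeps the bodies of other methods out of the retry cache:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: backend-get            # hypothetical id: GET traffic only, retryable
          uri: http://backend:8080   # placeholder downstream URI
          predicates:
            - Method=GET
            - Path=/api/**           # placeholder path
          filters:
            - name: Retry
              args:
                retries: 3
                statuses: BAD_GATEWAY, SERVICE_UNAVAILABLE, GATEWAY_TIMEOUT
                methods: GET
        - id: backend-other          # hypothetical id: uploads and all other methods;
          uri: http://backend:8080   # no Retry filter here, so bodies are not cached
          predicates:
            - Path=/api/**
```

Because the GET route matches first, upload requests (POST/PUT) only ever hit the second route, which has no Retry filter and therefore does not buffer the request body.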
Sample
Reproducible with a plain Spring Cloud Gateway project (Spring Boot 2.3.5 / Hoxton.SR8) with a connected Spring application that accepts multipart file uploads. The associated application config is the following.