Open metaversed opened 5 years ago
are you passing it as NGINX configuration? why are you doing so? what are you trying to achieve?
client_body_buffer_size is not the maximum body size for requests. That is controlled by client_max_body_size, and that one is already set to unlimited.
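For reference, the distinction between the two directives can be sketched in a minimal nginx config (the values shown are illustrative, not the actual APIcast configuration):

```nginx
http {
    server {
        listen 8080;

        # Maximum accepted request body size; 0 disables the limit
        # entirely (this is what "unlimited" means here).
        client_max_body_size 0;

        # In-memory buffer only: bodies larger than this are spilled
        # to a temporary file on disk, not rejected.
        client_body_buffer_size 16k;
    }
}
```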
As the nginx documentation says, client_body_buffer_size:
Sets buffer size for reading client request body. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms.
I doubt you want to allow clients to send a 300MB file and have it stored in the server's memory. That would make DoS attacks very easy.
@mikz @lucamaf This is an interesting question. What should the recommended size limit be for anyone sending files through APIcast?
@apibizops APIcast has unlimited client body size already. No need to do anything.
The issue here is about reading that file into memory for inspection. That is strongly discouraged.
@mikz Agreed. So when we say reading a given file into memory for inspection, is that an optional configuration where we could choose to send files without buffering them?
Bodies larger than client_body_buffer_size are stored as files; otherwise they are stored in memory, as described in https://github.com/3scale/APIcast/issues/1078#issuecomment-505467120
So with the default settings only payloads smaller than 8/16K are stored in memory. But no matter what, they are going to be sent to the upstream server.
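A sketch of what those defaults look like in nginx terms (the temp path shown is hypothetical; nginx defaults to a client_body_temp directory under its prefix):

```nginx
http {
    # Default buffer size: 8k on 32-bit platforms, usually 16k on 64-bit.
    client_body_buffer_size 16k;

    # Where larger bodies are spilled to disk before being forwarded
    # upstream (path is illustrative only).
    client_body_temp_path /tmp/client_body_temp;
}
```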
Do you need to inspect the payload?
@mikz There is no specific scenario that requires inspecting the payload. So can we say the ideal scenario is not to perform inspection, and state it as optional?
I don't get it :) APIcast already allows unlimited bodies. If you don't need to inspect the payload all works out of the box already. What do you want to state as optional? Definitely inspecting large payloads would be an issue, as it would have to be loaded into memory and could be vulnerable to DoS attacks, so we don't do that and the default is to store only small (8K/16K) payloads in memory.
@mikz please see some more details below. To oversimplify the question (and I may be wrong): does each and every file still need to be buffered to disk before it gets sent to the API backend, even if we do not want to inspect the payload?
Initial observation of the problem is detailed below. When running the image 3scale-amp25/apicast-gateway:latest in OpenShift 3.9, we notice that setting client_body_buffer_size to a value of 10m will let files smaller than 10m be sent directly through the gateway, while files bigger than 10m are cached to disk in /tmp/lua_xxxxx/client_body_temp. This is the expected behavior. The problem shows when the file has been cached and is then to be sent to the backend API.
It fails and the temporary file is removed.
If we change client_body_buffer_size to 1m, files smaller than 1m pass through the gateway, while files bigger than 1m get buffered to disk but are then not sent to the backend. So the problem follows a pattern.
We have an older APIcast installation running in Docker with the 3scale-amp24/apicast-gateway:latest image. In this environment, the bigger files are buffered to local disk and then sent to the backend without any problem. In this setup, we haven't set client_body_buffer_size to any value, so we are running with the default value of 16k.
APIcast will not limit any client body since version 3.1 (3scale AMP 2.1).
You should never ever need to change client_body_buffer_size. Setting it to higher values will open a DoS vulnerability.
You are right that client_body_buffer_size changes whether body payloads are stored on disk or only kept in memory. But it does not matter where they are stored, as they are always forwarded upstream.
What does "it fails and the temporary file is removed" mean? Is there any message in the error log (standard error)?
I imagine this could be caused by nginx not being able to write to the temporary directory, but that should be logged in the error log.
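If that were the cause, raising the log verbosity should surface it. A minimal sketch of pointing nginx's error log at standard error, which is where a containerized gateway would typically emit it:

```nginx
# Log at 'info' (or 'debug') so temporary-file write failures
# show up in the container's standard error stream.
error_log /dev/stderr info;
```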
@mikz how would APIcast behave if you were to set proxy_request_buffering off in nginx? That would allow large payloads without using large amounts of memory or disk, if I understand the directive correctly. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
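For illustration, a minimal location block with request buffering disabled (the upstream name is hypothetical):

```nginx
location / {
    # Stream the client body to the upstream as it arrives instead of
    # buffering it fully in memory or on disk first.
    proxy_request_buffering off;

    # HTTP/1.1 lets the body be sent with chunked transfer encoding
    # when its full length is not yet known.
    proxy_http_version 1.1;

    proxy_pass http://backend_api;  # hypothetical upstream
}
```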
@magnusvage My understanding is that:
But yes, it would not store the payload on the disk. Why are you so concerned with storing payloads on the disk in the first place?
@mikz I think we should focus on the key requirements. Should we be able to send files through APIcast with no issues, barring any custom configurations? If so, "storing payloads on the disk" still should not prevent the file from being sent upstream. The concern would be the allowed file size when the gateway is also used to expose other APIs, given the number of requests allowed in a given amount of time and the overhead added by APIcast.
@raghubanda yes, there is no limit on any payload. Everything should get through. If not, it is a bug.
Description: As of today, we do not have any guidelines on the limits that should be allowed, so that relevant configuration values can be established.
When running the image 3scale-amp25/apicast-gateway:latest in OpenShift, we notice that setting client_body_buffer_size to a value of 10m will let files smaller than 10m be sent directly through the gateway, while files bigger than 10m are cached to disk in /tmp/lua_xxxxx/client_body_temp. This is the expected behavior.
Will it not impact the performance of the gateway if we allow large files (300 MB), in terms of other services using the same gateway instance?