What
Fix https://issues.redhat.com/browse/THREESCALE-8410
Notes
What are the differences between buffers?
proxy_buffers: the total amount of buffer space that Nginx can use to hold the response from upstream. If the response is larger than the total size of proxy_buffers, Nginx will write part of the response to disk.
proxy_buffer_size: mainly used to hold the response headers.
proxy_busy_buffers_size: the amount of buffer space that can be busy sending the response to the client while the response has not yet been fully read from the upstream server.
What are the correct values for the buffers?
proxy_buffers:
Default: 8 4k|8k;
Min: must be at least 2 buffers
Max: no limit
proxy_buffer_size:
Default: one memory page size (4k|8k)
Min: no limit; it can be set smaller than the default (4k|8k), but that is not recommended
Max: no limit, but it should be no smaller than the maximum possible size of the response HTTP headers
proxy_busy_buffers_size:
Min: cannot be smaller than a single proxy_buffers buffer, and must be equal to or greater than the maximum of proxy_buffer_size and one proxy_buffers buffer
Max: must be less than the total size of proxy_buffers minus one buffer (e.g. 8*4k = 32k, 32k - 4k = 28k)
Default: if not explicitly set, proxy_busy_buffers_size is the bigger of twice proxy_buffer_size and the size of two proxy_buffers buffers. This also means that if you set a bigger proxy_buffer_size, you implicitly increase proxy_busy_buffers_size as well.
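For illustration only (the numbers below are examples that satisfy the rules above, not a recommendation), a consistent set of values could look like this:
proxy_buffers 8 8k;            # 64k of buffer space per buffered connection
proxy_buffer_size 16k;         # room for large upstream response headers
proxy_busy_buffers_size 16k;   # >= max(proxy_buffer_size, one proxy_buffers buffer) and < 64k - 8k = 56k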
Reference: https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_proxy_module.c#L3442
Why 4k|8k?
This is equal to one memory page size, i.e. either 4K or 8K, depending on the platform.
How to check my page size:
$ getconf PAGE_SIZE
Would increasing the buffer size also increase memory consumption?
Yes, the buffers are allocated per connection. How much, you may ask? I honestly don't know; once I get the profiling tools sorted, I'll run a few benchmark tests.
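For a very rough worst-case estimate (an assumption, not a measurement: every buffered connection allocates its full proxy_buffers set of 8 8k buffers and nothing is shared or freed early), 10,000 concurrent buffered connections would need on the order of 625MB:
$ echo $((8 * 8 * 10000 / 1024))   # 8 buffers x 8k each x 10,000 connections, converted to MB
625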
Increasing the buffer number vs increasing the buffer size
Whether a larger number of smaller buffers or a smaller number of bigger buffers is better depends on each user's use case (e.g. lots of small responses vs lots of big responses), as well as on how much memory they have and how much memory they are willing to waste. So it is hard to provide one solution that fits all.
Because of the complex rules above, I personally think we should just provide one setting and increase the buffer size, instead of messing around with both the number and the size of the buffers. And memory is cheap.
The downside of this approach is that if the user sets a really big buffer size, e.g. proxy_buffers 8 1024k; (i.e. 1MB buffers for every buffered connection), that much memory could be reserved even when the response would fit in the default memory page size (4k|8k). However, from my initial testing, Nginx appears to allocate only the memory it needs; again, I will need to get those profiling tools sorted so I can peek into what is actually allocated.
Does this setting apply per product?
No, this setting is global.
Common errors:
upstream sent too big header while reading response header from upstream
proxy_buffer_size is the only directive that needs tuning in order to solve this error. However, because of the rules described above, proxy_busy_buffers_size also needs to be adjusted.
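As an illustration (the 16k values below are examples, assuming the default proxy_buffers of 8 4k|8k buffers, not a recommendation), the fix could look like this:
proxy_buffer_size 16k;         # big enough for the largest expected upstream response headers
proxy_busy_buffers_size 16k;   # must be >= proxy_buffer_size and < total proxy_buffers minus one buffer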
Verification steps
Check out this branch
Edit docker-compose-devel.yaml as follows:
my-upstream:
  image: mccutchen/go-httpbin
  expose:
Create an apicast-config.json file with the following content:
Check out this branch and start the dev environment
Run apicast locally
Capture apicast IP
Generate big header
Send request with big header
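A possible sketch of the last three steps, assuming APIcast is already running in the docker-compose dev environment on its default port 8080; the container name, header name and size, path, and user_key below are placeholders that depend on the apicast-config.json used, so adapt them as needed:
$ APICAST_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' apicast)   # "apicast" container name is a guess
$ BIG_HEADER=$(printf 'a%.0s' $(seq 1 6000))   # ~6KB header value, larger than a 4k proxy_buffer_size
$ curl -v -H "X-Big-Header: $BIG_HEADER" "http://$APICAST_IP:8080/?user_key=foo"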
It should return 502, and this line should appear in the log:
This time it should return HTTP/1.1 200 OK