wandenberg / nginx-selective-cache-purge-module


Nginx worker stops on big purge with 100% cpu usage #7

Closed ilyaevseev closed 4 years ago

ilyaevseev commented 8 years ago

When we try to destroy more than 40k records at once, Nginx worker stops with 100% CPU usage.

strace shows the Nginx worker calling "brk(0); brk(bigval);" in an infinite loop.

A very quick workaround: replace "if (entry->removed)" with "if (0)" here, i.e., don't print the removed records in the HTML response.

ilyaevseev commented 8 years ago

More details for reproducing this bug:

1) Nginx config: https://gist.github.com/ilyaevseev/1646097ffe3b99bdcbd8916e310a3b02#file-nginx-test-conf

2) Script for filling cache quickly, uses lftp utility: https://gist.github.com/ilyaevseev/1646097ffe3b99bdcbd8916e310a3b02#file-fill-nginx-redis-cache-sh

3) Fill cache and purge it:

./fastfill-nginx-redis-cache.sh http://127.0.0.1 8 10000
wget -SO/dev/null 'http://127.0.0.1/cdnnow/purge/*'

4) Profit! The Nginx worker now eats 100% CPU and ignores everything except kill.

blikenoother commented 6 years ago

I am also facing the same issue :( The worst part is that we discovered this in production and were down for 10 minutes because of it. We reverted the cache config changes.

wandenberg commented 4 years ago

Fixed by breaking the response into small chunks before sending. If the response body is not being used, I recommend making the request with the HEAD method; that reduces the processing time and the request finishes faster.
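The wget call in the reproduction steps above downloads the full purge listing; a HEAD request returns the same status and headers with no body. A minimal sketch of the difference, using a hypothetical local Python server standing in for nginx (the `/purge/*` path and response body are placeholders, not the module's actual output):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # Stand-in for the purge location: a large listing body.
    body = b"<li>purged entry</li>\n" * 1000

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(self.body)))
        self.end_headers()
        self.wfile.write(self.body)

    def do_HEAD(self):
        # Same status and headers as GET, but no body is transferred.
        self.send_response(200)
        self.send_header("Content-Length", str(len(self.body)))
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("HEAD", "/purge/*")
resp = conn.getresponse()
data = resp.read()  # empty: HEAD responses carry headers only
print(resp.status, len(data))  # → 200 0
server.shutdown()
```

With curl, the equivalent is `curl -I 'http://127.0.0.1/cdnnow/purge/*'`; the purge still runs server-side, but no listing is generated or transferred to the client.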