eriede opened this issue 6 years ago
I think your solution may cause a memory leak when there are requests that can't finish within a TTL -- the previous pool can never be freed. It also may not solve the memory crash when using upstream keepalive. I think I have solved the problem in https://github.com/zhaofeng0019/nginx-upstream-dynamic-resolve-servers, and my solution references yours in places. Would you like to check it out? Thanks a lot.
It's been a long time since I last looked at this. You are correct that it does not address the keepalive problem. I remember making some updates to add keepalive support that I never published. I think they used the connection's pool instead of the request's to extend the lifetime.
I requested permission from the open source committee for that change too. I think I did get permission, but I had moved on to a different project and promptly forgot about this.
It's been stable for over 2 years, with no crashes or cores, normally running on a 9-server cluster with 2 cores each, though sometimes we scale it up to 20 or more servers during DDoS attempts or Black Friday.
I'll see about providing the update tomorrow morning (PST). It might take a little time to get approval again.
Glad to see some interest in this... It seems like the original maintainer might not be maintaining the project anymore.
Or would you please look at my pull request for this project? It solves all the memory problems and works well.
The solution you're proposing is similar, using the cleanup hooks on the connection pools as a connection-closed callback, so it will probably behave similarly. The drawback I found with this strategy is that it only worked with the round-robin lb option and caused crashes with the other keepalive lb strategies, at least on 1.12. Were you able to get the other lb options to work? If so, which version of nginx are you using?
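For readers following along, the pool-cleanup pattern both approaches rely on can be sketched as a simplified, self-contained analogue. In real nginx the hook is registered with `ngx_pool_cleanup_add` on the connection's pool; the types and function names below are illustrative stand-ins, not the actual nginx API:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified analogue of nginx's pool cleanup hooks: each pool keeps a
 * linked list of handlers that run when the pool is destroyed, i.e. when
 * the connection closes. */
typedef struct cleanup_s {
    void (*handler)(void *data);
    void *data;
    struct cleanup_s *next;
} cleanup_t;

typedef struct {
    cleanup_t *cleanups;
} pool_t;

void pool_cleanup_add(pool_t *p, void (*h)(void *), void *data) {
    cleanup_t *c = malloc(sizeof(cleanup_t));
    c->handler = h;
    c->data = data;
    c->next = p->cleanups;      /* prepend, like nginx */
    p->cleanups = c;
}

void pool_destroy(pool_t *p) {
    /* Hooks run in reverse registration order. */
    for (cleanup_t *c = p->cleanups; c; ) {
        cleanup_t *next = c->next;
        c->handler(c->data);
        free(c);
        c = next;
    }
    p->cleanups = NULL;
}

/* Example hook: stands in for "release the old peer list". */
static int peers_released = 0;
static void release_peers(void *data) { (void)data; peers_released++; }
```

The module registers something like `release_peers` on each connection pool, so the old peer memory is dropped only when the connection that still references it goes away.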
You can see in the native nginx code that all the lb options, including keepalive, ultimately build on round robin. So in my code I only call the ngx_http_upstream_init_round_robin function when the DNS result changes; you just save the function pointers of the other lb options in the init_process function.
Done this way, it supports all the native nginx lb options with no crashes. I tested it.
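A minimal sketch of that idea, as I understand it: save the lb module's own init function early on, and when the resolver returns a changed address list, rebuild only the round-robin layer that everything else wraps. All types and names here are illustrative stand-ins, not the real nginx API:

```c
#include <assert.h>

typedef struct upstream_s upstream_t;
typedef int (*init_fn)(upstream_t *us);

struct upstream_s {
    init_fn lb_init;          /* saved lb-module init (keepalive, hash, ...),
                                 captured at init_process time */
    int     peer_generation;  /* stands in for the round-robin peer list */
};

/* Stand-in for ngx_http_upstream_init_round_robin: rebuilds the peers. */
static int round_robin_init(upstream_t *us) {
    us->peer_generation++;
    return 0;
}

/* Called when the DNS answer changes: re-run only the round-robin init,
 * then let the saved lb init re-wrap the rebuilt peer list. */
static int on_dns_change(upstream_t *us) {
    if (round_robin_init(us) != 0) {
        return -1;
    }
    return us->lb_init(us);
}

/* Example saved lb init, standing in for the keepalive module's init. */
static int keepalive_inits = 0;
static int keepalive_init(upstream_t *us) {
    (void)us;
    keepalive_inits++;
    return 0;
}
```

The point of the design is that no lb module needs to know about the resolver at all; it only ever sees a valid round-robin peer list underneath it.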
I uploaded our keepalive solution, which only works for the round-robin lb with 1.12, to get it out into the public domain: https://github.com/eriede/nginx-upstream-dynamic-servers/tree/round-robin-keepalive. I'm happy to do a code review on your code if you would like, but I don't have the time to do testing on the various nginx versions, so I can't accept pull requests.
When a worker process connection times out after more than 2 * the DNS TTL has passed, the worker process may crash (nginx 1.12). AWS S3 has a very short DNS TTL and exhibits the issue.
The issue is that the peer data structure is accessed after the dynamic server releases the peer.
I have created a patch which uses reference counting on active requests instead of the interleaving scheme currently used. This will ensure that the peer's memory remains valid during outstanding requests.
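A simplified, self-contained sketch of that reference-counting scheme (not the actual patch; names and structure are illustrative): the resolver holds one reference on the peer block, each in-flight request takes another, and the memory is only released when the last reference is dropped, so a TTL refresh can never free peers out from under an outstanding request.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    int refcount;   /* in-flight requests + 1 for the resolver itself */
    int freed;      /* demonstration flag; real code would free() here */
} peers_t;

peers_t *peers_create(void) {
    peers_t *p = calloc(1, sizeof(peers_t));
    p->refcount = 1;              /* the resolver's own reference */
    return p;
}

void peers_ref(peers_t *p) {      /* a request starts using this peer list */
    p->refcount++;
}

void peers_unref(peers_t *p) {    /* request finishes, or TTL refresh */
    if (--p->refcount == 0) {
        p->freed = 1;             /* real code: release the peer memory */
    }
}
```

With this scheme the "old" peer list from before a DNS refresh lives exactly as long as the slowest request that was handed it, which is what the interleaving scheme failed to guarantee.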