yaoweibin / nginx_ajp_module

support AJP protocol proxy with Nginx
http://github.com/yaoweibin/nginx_ajp_module

There are too many "TIME_WAIT" between nginx and tomcat when your ajp module is in use. #5

Closed HelloJamesLee closed 12 years ago

HelloJamesLee commented 12 years ago

Hi,

When I use Nginx+ajp_module+tomcat and Apache+mod_jk+tomcat, I found two problems. What's the reason? Can you help?

(1) There are too many TIME_WAIT connections between Nginx and Tomcat. The TIME_WAIT/total connection count is 13890/16030. When I use Apache+mod_jk+Tomcat, the TIME_WAIT/total is 92/952.

(2) The %CPU of Tomcat behind Nginx is higher than that of Tomcat behind Apache. -- This is the main problem. The Tomcat behind Apache uses all of its maxThreads (512), but the Tomcat behind Nginx only uses 125 threads, as seen via "ps -efL | grep catalina".

The configuration of the Tomcats in these two situations is the same, and the concurrency is also the same. The configuration of the ajp module is as follows:

    ajp_connect_timeout 10;
    ajp_read_timeout 10;

    upstream loadbalancer {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;

        #keepalive 6400;
    }
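
For reference, here is a minimal sketch of what that upstream block looks like with the commented-out keepalive line enabled. This assumes the upstream keepalive module is available in this Nginx build (for Nginx 0.8.x it was a separate third-party module; it was later merged into the Nginx core):

    upstream loadbalancer {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;

        # cache up to 6400 idle connections to the backends in each worker process
        keepalive 6400;
    }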

Whether I enable keepalive in the upstream block or not, the two problems exist all the same.
The ajp_module version is 0.2.5, the latest version.

Can you give me some clues to resolve these problems? Thank you!

yaoweibin commented 12 years ago

What's your Nginx version?


HelloJamesLee commented 12 years ago

My Nginx is 0.8.54.

There is a connection pool mechanism in Apache's mod_jk. The connection_pool_size in mod_jk is detected automatically, based on the number of threads per web server process. The connections between Apache and Tomcat can be reused by mod_jk, so the TIME_WAIT count in Apache is lower.

I guess that the keepalive between Nginx and Tomcat doesn't work very well. Only a small number of connections can be kept alive. Do you think so?

yaoweibin commented 12 years ago

Can you show me the debug.log with several requests?

http://wiki.nginx.org/Debugging
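
For anyone reading later, a minimal sketch of how to produce such a log, assuming Nginx was built with --with-debug as described on the wiki page above (the log path is only an example):

    # nginx.conf: write debug-level output to a dedicated file
    error_log /var/log/nginx/debug.log debug;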

wangbin579 commented 12 years ago

Try setting accept_mutex to off; maybe this will help you.
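
A minimal sketch of that change; accept_mutex is a standard directive in the events block, and the worker_connections value shown is only a placeholder for whatever is already configured:

    events {
        worker_connections 1024;  # placeholder; keep the existing value
        accept_mutex off;         # do not serialize accept() across worker processes
    }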

HelloJamesLee commented 12 years ago

The reason: on the test servers, the read event was triggered many times even though the socket had no data to read. In that case the old keepalive module simply closed the idle keepalive socket, which could cause a lot of unexpected connection closes.

Weibin has fixed this keepalive problem. Thanks Weibin! Thank you!