wandenberg / nginx-push-stream-module

A pure stream http push technology for your Nginx setup. Comet made easy and really scalable.

Maximum message size #29

Closed JHeidinga closed 12 years ago

JHeidinga commented 12 years ago

First of all, you have written a great piece of software. Many thanks for the huge effort.

As stated in issue #10 (max length of message), there is a limitation on the size of a message.

In your answer you state:

-- snippet of issue #10's answer: the push stream module probably received your full message and tried to deliver it, but for some reason the chunked connection drops large chunks. I saw that happen some times when developing the module; I thought it was a problem in my code, but using the chunked filter from nginx the problem was still happening. I don't know if it is a limitation of chunked connections or not. -- snippet end

I'm trying to send a large message with a size of 249843 bytes. In my case only 46332 bytes are received by the client. I tried disabling chunking after reading your answer, but without any luck.

I've read that the nginx-push-stream-module disables Nginx's chunked filter. Is there any way to send large messages, or is there simply an upper limit on the message size?

stockrt commented 12 years ago

Did you try tuning some of Nginx's buffer settings on the publisher location? client_max_body_size 32k; client_body_buffer_size 32k;
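For example, a minimal sketch of what that could look like, assuming a publisher exposed at a location such as /updates_publish (the location name and channel-id variable are assumptions here):

  location /updates_publish {
      push_stream_publisher admin;
      set $push_stream_channel_id $arg_id;

      # accept request bodies up to 32k and buffer them in memory
      client_max_body_size    32k;
      client_body_buffer_size 32k;
  }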

Regards, Rogério Schneider


JHeidinga commented 12 years ago

Thanks for the reply, Rogério. No luck with the suggested options.

It seems only the first chunk is sent. The whole message is received by the push module, as shown by the Nginx debug log:

----- Nginx log snippet:
2012/05/04 10:06:40 [debug] 4241#0: *153 write new buf t:0 f:0 B5525000, pos B5525000, size: 250094 file: 0, size: 0
2012/05/04 10:06:40 [debug] 4241#0: *153 http write filter: l:0 f:1 s:250094
2012/05/04 10:06:40 [debug] 4241#0: *153 http write filter limit 0
2012/05/04 10:06:40 [debug] 4241#0: *153 writev: 46336
----- snippet end


The writev: 46336 corresponds with the first received chunk. After this chunk, the connection is closed.

Listening with curl (curl -s -v 'http://localhost/lp/channel_2') results in:

--- Header snippet:
Server: nginx/1.1.15
Date: Fri, 04 May 2012 08:06:40 GMT
Content-Type: application/json
Last-Modified: Fri, 04 May 2012 08:06:40 GMT
Connection: close
Cache-Control: no-cache
Set-Cookie: removed
Transfer-Encoding: chunked
Etag: 0
--- snippet end


wandenberg commented 12 years ago

Hi,

I did some research and tests and noticed that the maximum message size depends on some configuration on your server. As I said on issue #10, and as you checked in your log, the push module receives the message and tries to send it, but the operating system cuts some parts off.

Try changing the tcp_wmem setting on your server. On my machine I'm using min: 374784, default: 499712, max: 24582912, and I am able to send and receive a message with more than 249843 bytes.
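For reference, a sketch of how to apply those values on Linux through the standard sysctl interface (the numbers are the ones quoted above):

  # inspect the current TCP send-buffer limits (min default max, in bytes)
  cat /proc/sys/net/ipv4/tcp_wmem

  # raise them for the running kernel
  sysctl -w net.ipv4.tcp_wmem="374784 499712 24582912"

  # persist across reboots
  echo "net.ipv4.tcp_wmem = 374784 499712 24582912" >> /etc/sysctl.conf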

Of course you need to adjust the directives Rogério mentioned too.

As reference:
http://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
http://proj.sunet.se/E2E/tcptune.html

JHeidinga commented 12 years ago

Hi,

Thanks for the suggestion! It does work, as long as the tcp_wmem default is larger than the message to send. When tcp_wmem is set back to its default, the same large message fails. Strangely enough, when the message that failed is retrieved from the message store (push_stream_store_messages on), it is happily retrieved. It seems like something is handled differently when a message is put on the queue, or am I missing something obvious?

wandenberg commented 12 years ago

How are you testing this? (Just so I can use the same use case as you.)


JHeidinga commented 12 years ago

Hi,

Messages are sent by the Ruby application served by Unicorn. For testing I'm using curl (with the same results):

/usr/bin/curl -s -v -X POST 'http://localhost/updates_publish?id=user_2' -d @./large_message.json

The file large_message.json contains a JSON message with a total size of 250622 bytes.
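If you want to reproduce with a comparable payload, a quick way to generate one (a hypothetical helper, not the actual file I'm using):

  # generate a JSON document of roughly 250 KB for testing
  ruby -e 'require "json"; puts({"data" => "x" * 250000}.to_json)' > large_message.json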

The client connects using long-polling to a URL which is then redirected using X-Accel-Redirect to ensure authentication:

response.headers['X-Accel-Buffering'] = "no"                        # disable proxy buffering for this response
response.headers['X-Accel-Redirect'] = "/lp/channel_#{channel.id}"  # hand off to the internal subscriber location
response.headers['Content-Type'] = 'text/plain'
response.headers['Cache-Control'] = 'no-cache'

I tested this without redirecting as well, with the same results.

------------------------- Nginx conf:

worker_processes 1;
user nobody nogroup;

pid /tmp/nginx.pid;
error_log /usr/local/nginx/logs/nginx.error.log; 

events {
  worker_connections 1024;
  accept_mutex off; # "on" if nginx worker_processes > 1 
}

http {
  include mime.types;
  default_type application/octet-stream;
  access_log /usr/local/nginx/logs/nginx.access.log combined;
  charset utf-8;

  keepalive_timeout               10;
  send_timeout                    10;
  client_body_timeout             10;
  client_header_timeout           10;
  client_header_buffer_size       1k;
  large_client_header_buffers     2 4k;
  client_max_body_size            10m;
  client_body_buffer_size         16k;
  ignore_invalid_headers          on;

  sendfile on;
  tcp_nopush on;
  tcp_nodelay off;

  gzip on;
  gzip_http_version 1.0;
  gzip_proxied any;
  gzip_min_length 500;
  gzip_disable "MSIE [1-6]\.";
  gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

  # Messages
  push_stream_store_messages on;
  push_stream_message_ttl 5m;

  upstream unicorn_server {
    server unix:/var/www/join/tmp/sockets/unicorn.sock fail_timeout=0;
  }

  server {
    listen 80;
    server_name _;

    keepalive_timeout 5;

    root /var/www/wapp/public;

    location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;

      if (!-f $request_filename) {
        proxy_pass http://unicorn_server;
        break;
      }
    }

    location /updates_publish {
      push_stream_publisher admin;
      set $push_stream_channel_id $arg_id;
      error_log /usr/local/nginx/logs/updates_publish.log;

      allow 127.0.0.1;
      deny all;
    }

    location ~ /lp/(.*) {
      internal;

      push_stream_subscriber long-polling;

      set $push_stream_channels_path $1;

      push_stream_header_template "[";
      push_stream_message_template "~text~,";
      push_stream_footer_template "[]]";
      push_stream_content_type application/json;

      push_stream_longpolling_connection_ttl 3m;

      error_log /usr/local/nginx/logs/updates_listen.log;
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
      root /var/www/join/public;
    }
  }
}

------------------------- End Nginx conf

wandenberg commented 12 years ago

Hi,

Please test the last update on the longpolling_with_array_support branch. There was a little difference between the situations you described; thanks for the feedback.
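If it helps, a sketch of how to test that branch, assuming you build nginx from source with the module (the paths are placeholders):

  # fetch the module and switch to the branch under test
  git clone https://github.com/wandenberg/nginx-push-stream-module.git
  cd nginx-push-stream-module
  git checkout longpolling_with_array_support

  # rebuild nginx with the module compiled in
  cd /path/to/nginx-source
  ./configure --add-module=/path/to/nginx-push-stream-module
  make && make install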

Regards, Wandenberg


JHeidinga commented 12 years ago

Hi,

Excellent! The branch is working as expected. Thanks a lot for your time and effort!

What was the fix? (just out of curiosity).

Jasper.

wandenberg commented 12 years ago

The way nginx closes the connection when using ngx_http_finalize_request(r, NGX_DONE); is different from using ngx_http_finalize_request(r, NGX_OK);. It seems to close it before letting the client read the full buffer.
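In other words, a minimal before/after sketch of the kind of change being described (a hypothetical illustration, not the actual commit):

  /* after writing the last chunk of a long-polling response: */

  /* before: NGX_DONE could tear the connection down while large
     chunks were still queued in the socket buffers */
  ngx_http_finalize_request(r, NGX_DONE);

  /* after: NGX_OK lets nginx finish flushing the response
     before closing the connection */
  ngx_http_finalize_request(r, NGX_OK);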