slact / nchan

Fast, horizontally scalable, multiprocess pub/sub queuing server and proxy for HTTP, long-polling, Websockets and EventSource (SSE), powered by Nginx.
https://nchan.io/

High rate publishing, slow rate subscriber messages #653

Closed: delleceste closed this issue 1 year ago

delleceste commented 1 year ago

Hello. I am using nchan on top of nginx as a pub/sub system. The publisher POSTs updates to the nchan channel at 10 Hz, but the client receives messages much more slowly, about 1 per second. After a while, nginx complains:

2022/11/02 14:29:47 [crit] 759861#0: *81 open() "/usr/local/nginx-1.23.1/client_body_temp/0000001209" failed (24: Too many open files), client: 192.168.205.25, server: taeyang.elettra.eu, request: "POST /pub/qumbia-client-49060be867ea9dec28f5dcad124803a9/ws HTTP/1.1", host: "taeyang.elettra.eu:8001"

So I guess I have two problems:

  1. avoid whatever buffering or decimation of messages is happening, so that if the publisher publishes at 10 Hz, the subscriber receives updates at the same rate;
  2. avoid nginx buffering request bodies to files on disk.

NOTE: the messages are quite large (up to 10 MB).

I don't know whether this scenario is an abuse of nginx + nchan or whether it can work fine with proper tuning.
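
The nginx configuration in question is not shown in the thread; below is a minimal sketch of the kind of nchan setup described above, with hypothetical location names and the channel id taken from the request URI.

   # hypothetical locations; the channel id is taken from the URI
   location ~ ^/pub/(?<chan>[\w-]+)/ws$ {
      nchan_publisher;
      nchan_channel_id $chan;
   }
   location ~ ^/sub/(?<chan>[\w-]+)/ws$ {
      nchan_subscriber;
      nchan_channel_id $chan;
   }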

nginx 1.23.1, nchan from git

Regards

delleceste commented 1 year ago

An update: if the published message is small (instead of the 4 MB body used in the previous tests), the subscriber receives the updates at a high rate.

There is probably a problem transferring large message bodies at high rates.

delleceste commented 1 year ago

I tracked down the problem. On the server side, 10 messages per second are published (each message is 4 MB). The network is slow, so I receive about 1 message per second. The problem is that nginx saves a 4 MB temporary file in client_body_temp for each publish, quickly saturating the file system. Is there a way to avoid this? I tried the directives:

   nchan_message_timeout 1s;
   client_max_body_size 0;
   proxy_http_version 1.1;
   proxy_request_buffering off;
   proxy_buffering off;

without success: nginx still buffers every message to a file under client_body_temp/.
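
For what it's worth, the proxy_* directives only affect proxied (proxy_pass) locations, so they would not change how an nchan publisher location reads the request body; nginx spills any body larger than client_body_buffer_size to client_body_temp. A sketch of the directives that seem relevant here, with assumed sizes for ~4 MB messages:

   # assumed values, sized for ~4 MB published messages with some headroom
   client_body_buffer_size 8m;    # keep bodies up to this size in memory instead of client_body_temp
   client_max_body_size 16m;      # reject anything unexpectedly larger

   # in the main (top-level) context, raising the per-worker fd limit may also
   # help with the "Too many open files" error:
   # worker_rlimit_nofile 65536;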

Is there a way to tell nginx to discard old messages if they are not delivered within a fraction of a second?
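
On the nchan side, per-channel retention can be kept very small so that stale messages are dropped rather than queued; a sketch using the message buffer directives (values are assumptions):

   # keep only the newest message per channel, and expire it after 1 second
   nchan_message_buffer_length 1;
   nchan_message_timeout 1s;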

Thanks

slact commented 1 year ago

The channel groups feature could be used to limit the number of messages (and the storage space) used by a group of channels.
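
For reference, a sketch of what that might look like, following the channel groups example in the nchan documentation (directive names and values are best checked against the installed nchan version):

   # enable per-group accounting so group limits are enforced
   nchan_channel_group_accounting on;

   location ~ ^/pub/(?<grp>[\w-]+)/(?<chan>[\w-]+)$ {
      nchan_publisher;
      nchan_channel_group $grp;
      nchan_channel_id $chan;
   }

   # group management endpoint; limits can be set per group,
   # e.g. POST /group/mygroup?max_messages=10
   location ~ ^/group/(?<grp>[\w-]+)$ {
      nchan_channel_group $grp;
      nchan_group_location;
      nchan_group_max_messages $arg_max_messages;
      nchan_group_max_messages_memory $arg_max_messages_mem;
   }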