gannebamm opened this issue 1 year ago
@gannebamm what is the client_max_body_size in your NGINX conf?
Good idea. I haven't changed anything in that regard in geonode.conf (https://github.com/GeoNode/geonode-docker/blob/master/docker/nginx/geonode.conf.envsubst), so it is using these values:
# max upload size
client_max_body_size 100G;
client_body_buffer_size 256K;
client_body_timeout 600s;
large_client_header_buffers 4 64k;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
uwsgi_read_timeout 600;
send_timeout 600;
This client_max_body_size 100G;
seems fine, but any of the timeouts could fire. All of them.
uwsgi_response_write_body_do() TIMEOUT !!!
is likely tied to uwsgi_read_timeout 600;
being too small.
that could be the case, yes!
In case you change it, be sure that it really changed: in my case it was somehow reset to the old value on container restart.
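A quick way to confirm what NGINX is actually running with (a sketch assuming the stock compose setup, where the service is named nginx; nginx -T dumps the effective configuration):

docker compose exec nginx nginx -T | grep -E 'client_max_body_size|uwsgi_read_timeout'

Note that geonode.conf is regenerated from geonode.conf.envsubst on container start, which would explain manual edits being reverted.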
OK, almost have it. The remaining error in geonode.log:
Wed Oct 25 17:56:22 2023 - worker 6 (pid: 344) is taking too much time to die...NO MERCY !!!
[busyness] 1s average busyness is at 0%, cheap one of 9 running workers
worker 6 killed successfully (pid: 344)
uWSGI worker 6 cheaped.
Wed Oct 25 17:56:52 2023 - worker 7 (pid: 345) is taking too much time to die...NO MERCY !!!
worker 7 killed successfully (pid: 345)
uWSGI worker 7 cheaped.
Will check this tomorrow. Time to leave :wave:
Wed Oct 25 17:56:22 2023 - worker 6 (pid: 344) is taking too much time to die...NO MERCY !!!
I guess harakiri is still too small. In other words, the process did not finish in time and is getting killed with... no mercy :)
We had to tweak the same values for a client with a huge download: uwsgi_read_timeout inside nginx.conf and harakiri inside uwsgi.ini.
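For reference, a minimal sketch of both changes (the values are illustrative, not a recommendation; harakiri is in seconds):

# nginx.conf (geonode.conf in geonode-docker)
uwsgi_read_timeout 3600;

; uwsgi.ini
harakiri = 3600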
Hi, I am facing the same problem when downloading datasets larger than 1 GB from the frontend: the download is capped at 1 GB. I already tweaked the values mentioned above, and I also raised GeoServer's WPS parameters sufficiently. At the moment these are my configurations:
uwsgi.ini:
harakiri = 600 ; also tried 1200
geonode.conf:
client_max_body_size 100G;
client_body_buffer_size 256K;
client_body_timeout 600s;
large_client_header_buffers 4 64k;
proxy_connect_timeout 600; # also tried 1200 for all of the following
proxy_send_timeout 600;
proxy_read_timeout 600;
uwsgi_read_timeout 600;
send_timeout 600;
This is the error I get:
nginx4mygeonode | 2024/07/23 07:21:52 [error] 18#18: *133059 readv() failed (104: Connection reset by peer) while reading upstream, client: 999.999.999.999, server: mygeonode.de, request: "GET /datasets/geonode:Lot2_Sidescan_g/dataset_download HTTP/1.1", upstream: "http://172.18.0.6:8000/datasets/geonode:Lot2_Sidescan_g/dataset_download", host: "mygeonode.de", referrer: "https://mygeonode.de/"
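One way to narrow down which hop truncates the response is to compare byte counts through NGINX and directly against the uWSGI upstream (a sketch; the URL comes from the log above, and it assumes curl is available inside the django container):

# through NGINX
curl -sS -o /dev/null -w 'size=%{size_download} code=%{http_code}\n' \
    'https://mygeonode.de/datasets/geonode:Lot2_Sidescan_g/dataset_download'
# bypassing NGINX, straight to uWSGI on port 8000
docker compose exec django curl -sS -o /dev/null -w 'size=%{size_download} code=%{http_code}\n' \
    'http://localhost:8000/datasets/geonode:Lot2_Sidescan_g/dataset_download'

If the direct request also stops at 1 GB, the limit is in uWSGI or GeoNode itself rather than in NGINX.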
We need to upload massive (50 GB) documents to one of our GeoNode instances.
Expected Behavior
After changing the UPLOAD_SIZE_LIMITS via the Django admin (https://docs.geonode.org/en/master/admin/upload-size-limits/index.html#upload-size-limits) and raising the harakiri value in uwsgi.ini (https://github.com/GeoNode/geonode/blob/master/uwsgi.ini#L22), you should be able to upload huge files.
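The effective limits can be double-checked from the Django shell (a sketch; the UploadSizeLimit model location and its slug/max_size fields are assumed from GeoNode 4.x, so adjust for your version):

# inside the django container; prints each configured slug with its byte limit
docker compose exec django python manage.py shell -c \
    "from geonode.upload.models import UploadSizeLimit; print(list(UploadSizeLimit.objects.values_list('slug', 'max_size')))"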
Actual Behavior
The upload triggers an error stating that the upload was not successful and that we should check the file's integrity. Nonetheless, a document object is created. After downloading the file from the frontend, we see it is capped at 1 GB and will not unzip.
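To confirm where the file is cut off, comparing the stored document with the downloaded copy is a quick check (a sketch; the container path and file name are hypothetical and depend on your volume layout):

# size of the document as stored by GeoNode (path hypothetical)
docker compose exec django ls -l /mnt/volumes/statics/uploaded/documents/huge_upload.zip
# size and integrity of what the browser received
ls -l ~/Downloads/huge_upload.zip
unzip -t ~/Downloads/huge_upload.zip   # a truncated archive fails this test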
Steps to Reproduce the Problem
Specifications
Did I miss something?
Some error messages I spotted in the geonode.log: