SparkeyinVA closed this issue 4 months ago.
I was able to find the NGINX error log; it was at /var/log/nginx/error.log. (I added line breaks to make the log easier to read.)
The first three entries were made before I changed nginx.conf:
2024/01/31 08:29:55 [error] 804#804: *86 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/? HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?"
2024/01/31 08:32:22 [error] 804#804: *86 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/?assembly__exact=0 HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?assembly__exact=0", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?assembly__exact=0"
2024/01/31 08:35:11 [error] 804#804: *86 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/?assembly__exact=0 HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?assembly__exact=0", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?assembly__exact=0"
I made the following changes to nginx.conf:
http {
...
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
...
}
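For reference, a typical way to apply such a change on a systemd-based install (as with the one-line installer) is to test the configuration and then reload nginx; the commands below are a general example rather than the exact steps taken here.
sudo nginx -t                  # check the edited configuration for syntax errors
sudo systemctl reload nginx    # reload nginx; a reload is what produces the "signal process started" notice shown below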
Next, the log shows the restart after the changes were made:
2024/01/31 09:01:30 [notice] 12460#12460: signal process started
The rest of the log shows me trying again after increasing the timeouts:
2024/01/31 09:02:51 [error] 12462#12462: *168 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/? HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?"
2024/01/31 09:06:51 [error] 12461#12461: *192 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "GET /api/part/?cascade=1&category=null&category_detail=true&export=csv HTTP/1.1", upstream: "http://127.0.0.1:6000/api/part/?cascade=1&category=null&category_detail=true&export=csv", host: "192.168.1.200", referrer: "http://192.168.1.200/part/"
2024/01/31 09:14:31 [error] 12461#12461: *235 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/? HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?"
2024/01/31 09:16:08 [error] 12461#12461: *235 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/? HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?"
2024/01/31 09:55:53 [error] 12461#12461: *275 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/? HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?"
2024/01/31 11:41:48 [error] 12461#12461: *286 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/? HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?"
2024/01/31 11:44:27 [error] 12461#12461: *291 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.124, server: , request: "POST /admin/part/part/export/? HTTP/1.1", upstream: "http://127.0.0.1:6000/admin/part/part/export/?", host: "192.168.1.200", referrer: "http://192.168.1.200/admin/part/part/export/?"
I edited the above log, as I had not copied the full log; now it is complete.
Things I have noticed (I am no expert, and these may be correct for this installation):
All the log entries refer to a client at 192.168.1.124, and I am not sure what that is for. The system is at 192.168.1.200, and everything is on one computer.
But since I can get a small subset (100 items) to download, this may be correct.
@SparkeyinVA thanks for reporting this - this is in line with a few other reports regarding timeout errors on large data exports. To truly address this, we need to refactor a bunch of the code - we have identified the issues, but it needs some significant dev time to really do anything about it.
Briefly, what we need to do is:
A) Optimize the database queries when exporting data to file - it is currently much slower than exporting via the API. We use a third-party library for exporting to file, which I think we can work around and do a much better job.
B) Offload file creation to the background worker thread. This will skirt around the connection timeout, which is much shorter than the background worker timeout (a rough sketch of this idea follows below).
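For illustration only, here is a very rough sketch of the background-worker idea in (B). It is not InvenTree's actual code: the view, the task path 'myapp.tasks.generate_part_export' and the use of django-q's async_task are assumptions, based only on InvenTree running a django-q background worker.
# Hypothetical sketch only - not the InvenTree implementation.
# views.py: instead of building the CSV inside the request/response cycle
# (where nginx cuts the connection), enqueue the export on the background
# worker and return to the client immediately.
from django.http import JsonResponse
from django_q.tasks import async_task

def export_parts(request):
    # The worker's timeout is much longer than the proxy's, so a slow
    # export no longer surfaces as a 502 at nginx.
    task_id = async_task('myapp.tasks.generate_part_export', request.user.pk)
    return JsonResponse({'status': 'export queued', 'task': task_id})

# tasks.py: runs in the worker process; builds the file and notifies the
# user (e.g. via a download link) once it is ready.
def generate_part_export(user_pk):
    # Placeholder: query the parts, write the CSV, store it, notify user_pk.
    pass
The point is simply that the HTTP response returns well within the proxy timeout, while the slow export runs where only the (much longer) worker timeout applies.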
Perhaps you (or others wanting to see a quick resolution to this) would be willing to sponsor this development? I have created a new issue to track this concept.
@SchrodingersGat Thank you for the feedback and all you do with InvenTree. I currently sponsor InvenTree on Patreon. Is there a specific way to sponsor this project?
This issue seems stale. Please react to show this is still important.
This will be addressed in https://github.com/inventree/InvenTree/pull/6911
Any discussion around the new implementation (or support / funding / etc.) should be directed over there.
Please verify that this bug has NOT been raised before.
Describe the bug*
This is the same/similar bug that I reported in #5571.
I use the one-line installer, as I have outlined in #6177. I am now at v0.13.3 and still have an issue where I can no longer download the parts list, either from the main parts list page or from the Admin section. Both result in a timeout and a 502 error.
I have a parts database that currently has 777 parts. Every few months, I export a few reports and merge them with our Sage 50 accounting system.
Several months ago, I was able to export the parts list (then 652 parts) just fine. Now I only get a 502 Bad Gateway when I try to export my parts database.
Other reports can be downloaded with no problem. For example, the Supplier Parts, which has 837 items, downloads fine.
I have tried increasing the timeouts of the NGINX server as we discussed in #5571. I followed the steps outlined on the ubiq website (https://ubiq.co/tech-blog/increase-request-timeout-nginx/) and used 300 as shown in their example.
The file I edited was /etc/nginx/nginx.conf, under the http { ... } section.
Here is what I added (see the snippet below); I confirmed the configuration file was correct. The last time, I was able to get around the issue by using the supplied filters and downloading the parts list in two sections; this is no longer working.
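For completeness, the directives in question are the same three proxy timeout settings shown in the nginx.conf snippet earlier in this thread:
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;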
The Demo site has 421 parts and the download works as expected.
Any direction on how to fix this issue would be helpful.
Steps to Reproduce
This also occurs when I try to download the parts database from the regular site (URL/Part/) by clicking 'Download table data'.
Expected behaviour
I expected the file to download even if it is large.
Deployment Method
Version Information
Version Information:
InvenTree-Version: 0.13.3
Django Version: 3.2.23
Commit Hash: e81349e7
Commit Date: None
Commit Branch: stable
Database: sqlite3
Debug-Mode: False
Deployed using Docker: False
Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35
Installer: PKG
Target: ubuntu:20.04
Active plugins: [{'name': 'InvenTreeBarcode', 'slug': 'inventreebarcode', 'version': '2.0.0'}, {'name': 'InvenTreeCoreNotificationsPlugin', 'slug': 'inventreecorenotificationsplugin', 'version': '1.0.0'}, {'name': 'InvenTreeCurrencyExchange', 'slug': 'inventreecurrencyexchange', 'version': '1.0.0'}, {'name': 'InvenTreeLabel', 'slug': 'inventreelabel', 'version': '1.0.0'}, {'name': 'InvenTreeLabelSheet', 'slug': 'inventreelabelsheet', 'version': '1.0.0'}, {'name': 'DigiKeyPlugin', 'slug': 'digikeyplugin', 'version': '1.0.0'}, {'name': 'LCSCPlugin', 'slug': 'lcscplugin', 'version': '1.0.0'}, {'name': 'MouserPlugin', 'slug': 'mouserplugin', 'version': '1.0.0'}, {'name': 'TMEPlugin', 'slug': 'tmeplugin', 'version': '1.0.0'}, {'name': 'KiCadLibraryPlugin', 'slug': 'kicad-library-plugin', 'version': '1.3.12'}]
Please verify if you can reproduce this bug on the demo site.
Relevant log output