Closed jow- closed 5 years ago
@jow- the problem seems to be that nginx doesn't permit POST to static pages
Can I ask why we use cgi-upload/download/backup instead of a page from the LuCI web UI that executes the command (the way we get data for leases)?
These are not static pages but executable CGI programs.
> Can I ask why we use cgi-upload/download/backup instead of a page from the LuCI web UI that executes the command (the way we get data for leases)?
I am transitioning LuCI away from server side Lua code for upload and download handling.
@Ansuel not able to help fix this specifically, but if you need any testing of an interim solution, would be happy to.
There seem to be 2 problems... one is easy to solve with
`cgi-safe = /usr/libexec/cgi-io`
(it was related to cgi-io not being in the /www dir).
For some reason cgi-io still crashes and nginx reports a 502 error.
The real problem is that we don't get a real error from nginx, uwsgi or cgi-io (each of them logs things differently). Need more help to fix this.
Also I think cgi-io was broken from the start...
Generate archive also fails with `405 Not Allowed`.
@lucize: Try to use `location /cgi-bin` instead of `location /cgi-bin/luci` in LuCI's nginx configuration file `/etc/nginx/luci_uwsgi.conf` and restart nginx with `service nginx stop; service nginx start`. I found that problem with the default configuration in `/var/log/nginx/*.log`.

After that, nginx should report 502 Bad Gateway and log:

`[...] upstream prematurely closed connection while reading response header from upstream, client: [CL.IE.NT.IP], server: localhost, request: "POST /cgi-bin/cgi-backup HTTP/1.1", upstream: "uwsgi://unix:////var/run/uwsgi.sock:", [...]`
If you `logread`, you will see the first problem @Ansuel mentions:

`[...] daemon.info uwsgi: CGI security error: /usr/libexec/cgi-io is not under /www`

You can solve that by appending `cgi-safe = /usr/libexec/cgi-io` to LuCI's uwsgi configuration file `/etc/uwsgi.conf` and reloading it with `service uwsgi restart`. For debugging you can also comment out `disable-logging = true` there. Then you see the other problem in `logread`:

`[...] daemon.info uwsgi: [CL.IE.NT.IP] POST /cgi-bin/cgi-backup => generated 0 bytes in 14 msecs.`

So uwsgi will return 500 Internal Server Error to nginx, which does not change its messages.
The problem is that uwsgi follows the symbolic links `cgi-backup`, `cgi-download` and `cgi-upload`. They all point to the same file `/usr/libexec/cgi-io` (so we had to enable it with cgi-safe, too). But cgi-io wants to be called via the symbolic links: it strstr-compares `cgi-{backup,download,upload}` against the command name `argv[0]` to select its behavior. As it is called as `cgi-io`, it does `return -1;` without any output (generated 0 bytes).
Is there a way to prevent uwsgi from following the links, or should we change cgi-io?
For testing I changed cgi-io to always call `return main_backup(argc, argv);`. The backup then downloads, but takes too long (sometimes?).
I tried different things, and `logread` gets nothing,

`[...] daemon.info uwsgi: invalid CGI response !!!`

or

`[...] daemon.info uwsgi: CGI timeout !!!`

and nginx reports 504 Gateway Time-out.
I got rid of that by appending `cgi-safe = cgi-bin/cgi-backup` to the file `/etc/uwsgi.conf`. Altogether, we would need to append:

```
cgi-safe = /usr/libexec/cgi-io
cgi-safe = cgi-bin/cgi-backup
cgi-safe = cgi-bin/cgi-download
cgi-safe = cgi-bin/cgi-upload
```
I think we should just increase the timeout for cgi
Sorry, I was wrong: it is not `cgi-safe = cgi-bin/cgi-backup` that solves the issue, but restarting uwsgi. In detail, in `logread` we have:

`[...] daemon.info uwsgi: CGI timeout !!! [...] daemon.info uwsgi: [CL.IE.NT.IP] POST /cgi-bin/cgi-backup => generated 21219 bytes in 60932 msecs`

After `service uwsgi restart` it works:

`[...] daemon.info uwsgi: [CL.IE.NT.IP] POST /cgi-bin/cgi-backup => generated 21219 bytes in 892 msecs`
The timeout is not because there is too much data, but because the

`splice(fds[0], NULL, 1, NULL, 4096, SPLICE_F_MORE);`

does not return in the end. If I add `fprintf(file, "writed %d.\r\n", len);` in the `while(len > 0)` loop, I get:

```
writed 4096.
writed 4096.
writed 4096.
writed 4096.
writed 4096.
writed 739.
```

After the restart I get an additional `writed -1.`
So, it works apart from the error 4 Interrupted system call. I get the same error with LuCI on uhttpd. @jow-: Is this the right behavior?
Ok, I understand the problem... I added syslog support to the script to check the actual argv[0] the script is called with, and to my surprise...

```
Tue Oct 8 00:38:21 2019 local1.notice exampleprog[12482]: Program started by User 0
Tue Oct 8 00:38:21 2019 local1.notice exampleprog[12482]: /usr/libexec/cgi-io
Tue Oct 8 00:38:21 2019 daemon.err nginx: 2019/10/08 00:38:21 [error] 12421#0: *2 upstream prematurely closed connection while reading response header from upstream, client: 192.168.3.12, server: localhost, request: "POST /cgi-bin/cgi-backup HTTP/2.0", upstream: "uwsgi://unix:////var/run/uwsgi.sock:", host: "192.168.3.1", referrer: "https://192.168.3.1/cgi-bin/luci/admin/system/flash"
Tue Oct 8 00:38:21 2019 daemon.info uwsgi: 192.168.3.12 POST /cgi-bin/cgi-backup => generated 0 bytes in 18 msecs
```

The script expects argv[0] to be cgi-backup, cgi-download or cgi-upload, not the resolved path of the binary... For this specific reason cgi-io doesn't run anything and terminates.

@peter-stadler your solution works because the script runs the backup function anyway... and in my case it does also work (but I don't have the problem you have... I think I have an increased cgi timeout...)

This can be solved in 2 ways: find a way to make uwsgi call the CGI the right way (some reference to static-map could be useful), or change cgi-io to use args instead of the name.

I don't think @jow- will approve the second solution so...

If anyone has any idea about this... it would be very useful...

To prove that this is the actual problem... just copy cgi-io into the /www/cgi-io dir, rename it to cgi-backup and watch the magic...
Decided to make a patch for uwsgi directly... the problem is there and easier to fix.
I proposed the patch upstream... For now we can use that...
It looks like backup is fixed...
Upload is still WIP... @peter-stadler can you take a look?
The patch fixes the problem with `argv[0]` for me. But backing up (and uploading) still hangs most of the time: the last call to splice does not return. If it returns, then with the value -1 (the errno would be 4 Interrupted system call).
Does somebody else see this behavior? If I use LuCI on uhttpd, the splice in main_backup returns -1 with errno=4, too (but works). I used the following patch to see it (in `/var/cgi-io.log`):
```diff
diff --git a/net/cgi-io/src/main.c b/net/cgi-io/src/main.c
index ca1575842..22b60cb96 100644
--- a/net/cgi-io/src/main.c
+++ b/net/cgi-io/src/main.c
@@ -782,9 +782,14 @@ main_backup(int argc, char **argv)
 	fflush(stdout);

+	static FILE * file;
+	file = fopen("/var/cgi-io.log", "w");
+	setlinebuf(file);
 	do {
 		len = splice(fds[0], NULL, 1, NULL, 4096, SPLICE_F_MORE);
+		fprintf(file, "writed %d.\r\n", len);
 	} while (len > 0);
+	fclose(file);

 	waitpid(pid, &status, 0);
```
Some update on this... upload actually works... in /tmp, firmware.bin actually exists and gets created, but it doesn't go further...
With some tests I noticed cgi-io gets stuck at the splice() line for some reason... because of this the program never stops and runs into the cgi timeout...
I did use your patch for making uwsgi call cgi-io in the expected way through the symlinks :-) There are no other commits, right?
My patch would just allow logging the return values of the `splice(…)` calls in the backup function. It does not fix anything.
I experience that cgi-io gets stuck at the corresponding `splice(…)` line in both the `main_backup(…)` AND the `main_upload(…)` function. Both times, after completely transferring all bytes, the last call to `splice(…)` does not return, at least mostly.
In some rare cases (first call after the start?) and when uwsgi is killed, the `splice(…)` lines return -1 in the end (this is logged by my patch). This is always the case if using uhttpd instead of uwsgi. But -1 indicates an error (I took a look at the `errno`, it is 4=Interrupted system call). Should splice not return 0 at last?
I noticed the real problem is with splice trying to read 4096 bytes in the last read, so this could really be just a bug in cgi-io... Currently debugging it.
For example, by modifying the loop to terminate on len != 4096, the cgi-io program doesn't lock up.
Then I think it is intended and the error means that there is just no more data. In that case we have to find out why it is not returning when called in the uwsgi setting.
The len != 4096 check would be no solution: there could be exactly a multiple of 4096 bytes to read, or the other side could just be too slow ;-) Then it would never stop, or stop too early…
Well actually... no: if we read 4096 bytes at a time, then we will never read more than that... I will md5sum the backup generated from the GUI and from the command directly and check that this fix doesn't break anything.

Yes, ok... I compared the 2 md5s and they match... another bug out... Will now check cgi-upload and cgi-download and propose a patch to jow.

But what if, for example, 3*4096 bytes are sent on the other end? The first three splice(…) calls return 4096 each, the fourth will not return (it would be -1, wouldn't it?), and we have the same problem in very rare cases. But then it will be very hard to see the problem.

From what I understand, splice returns the bytes read (so they are already transferred); if splice returns less than 4096 we are at the end of the file and we must stop. That's what man (the documentation) says, and it explains why splice blocks the entire function (as we try to read nonexistent data).
I just tested it: if we use `while(len==4096)`, the problem reappears when `sysupgrade --create-backup - | wc -c` is a multiple of 4096 :-/
How did you test?
Quite silly: I created hostnames until the configuration was a multiple; it would have been easier to temporarily replace sysupgrade with something that returns 4096*5 bytes or so ;-)
I will paste the change I made to backup... (I defined READ_BLOCK as 4096, so you can just replace READ_BLOCK with 4096):

```c
	printf("Status: 200 OK\r\n");
	printf("Content-Type: application/x-targz\r\n");
	printf("Content-Disposition: attachment; "
	       "filename=\"backup-%s-%s.tar.gz\"\r\n\r\n", hostname, datestr);
	fflush(stdout);

	splice(fds[0], NULL, 1, NULL, READ_BLOCK, SPLICE_F_MORE);
	while (splice(fds[0], NULL, 1, NULL, READ_BLOCK, SPLICE_F_MORE) == READ_BLOCK);

	waitpid(pid, &status, 0);

	close(fds[0]);
	close(fds[1]);
```
@peter-stadler this is the patch (append .patch to the commit):
https://github.com/Ansuel/packages/commit/f825107e91ee9331166c32383c88ea2022dc10ff
I found the same problem in main_upload... We need to test this with uhttpd...
Ok, tried with uhttpd and it doesn't seem to cause any problems... compared md5sums and they match...
Try to replace `/sbin/sysupgrade` temporarily with the following script:

```sh
#!/bin/sh
dd if=/dev/urandom bs=4096 count=3
```
@peter-stadler any idea how to fix this? ahahah...
Not right now; what I want to know is why cgi-io behaves differently under uwsgi and uhttpd regarding the splice...
Could it be that uwsgi waits for the end of the execution while uhttpd doesn't?
Under uhttpd the last splice actually returns (with -1).
-1 is an error btw...
Anyway, the problem is that splice waits for more data...
Got backup working by disabling threads in uwsgi, try:

```sh
sed -i 's/^.*threads\s*=.*//' /etc/uwsgi.conf && service uwsgi restart
```

This is working with the upstream cgi-io and the uwsgi that does not follow symlinks.
Looks like a workaround to me :(
Btw: I think it is intentional that splice is called until it returns -1; the error indicates that there is no more data to read. Is this right, @jow-? But maybe cgi-io should check the errno and report an error if the -1 is not because of the end of the data. Else we end up with an incomplete backup, wouldn't we?
> Looks like a workaround to me :(

Yes, it is a workaround. But I think it is not so problematic: we still have concurrency through the worker processes. If we want to enable threads, we have to investigate how it interferes with the splice(…) through a pipe between forks…
The problem could be related to threads... so one way to solve this is to run an additional uwsgi instance, only for the cgi-io things, that runs with only one thread.
This way we won't make the LuCI uwsgi process slower... Will implement this, so the only change needed should be the uwsgi patch with the additional option.
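Such a dedicated instance could look roughly like the following sketch. This is only my reading of the idea; the socket path and the exact option set are assumptions, not the actual PR:

```
[uwsgi]
# hypothetical second instance serving only the cgi-io helpers
plugin = cgi
socket = /var/run/uwsgi-cgi-io.sock
cgi = /www/cgi-bin
cgi-safe = /usr/libexec/cgi-io
# a single worker with a single thread avoids the splice() hang
workers = 1
threads = 1
```

The LuCI instance would keep its own socket and thread settings, and nginx would route only /cgi-bin POSTs to this one.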
This sounds good :-)
Before splitting that off, we should try to make restoring a backup work, too: it hangs after 100% of the data is transferred. If you `killall cgi-upload` at this point, the restore continues and would work...
Second part of the patch ... main_upload
@peter-stadler this https://github.com/openwrt/packages/pull/10191 should fix every problem (with threads 1 and workers 1).
To use the new implementation, these 2 other PRs are needed: https://github.com/openwrt/packages/pull/10173 https://github.com/openwrt/packages/pull/10193
On my side everything works.
Looks quite good to me :-)
The root cause of the hanging splice should finally have been addressed with https://github.com/openwrt/packages/pull/10814.
Thank you for the information :-)
Sorry to bump a closed issue, but I am still seeing (what I think is) this problem with:
OpenWrt 19.07.0, r10860-a3ffeb413b
Flash operations > Backup > Generate archive causes the same symptom of failing with `405 Not Allowed`. I tried going down the path that @peter-stadler laid out above, modifying the nginx and uwsgi configs. But no joy.
EDIT: I can get past the 405, but still hit the 502 error with:

```
==> /var/log/nginx/access.log <==
192.168.1.144 - - [31/Jan/2020:21:09:03 +0000] "POST /cgi-bin/cgi-backup HTTP/1.1" 502 559 "https://192.168.1.1/cgi-bin/luci/admin/system/flash" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36"

==> /var/log/nginx/error.log <==
2020/01/31 21:09:03 [error] 1710#0: *3 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.144, server: localhost, request: "POST /cgi-bin/cgi-backup HTTP/1.1", upstream: "uwsgi://unix:////var/run/uwsgi.sock:", host: "192.168.1.1", referrer: "https://192.168.1.1/cgi-bin/luci/admin/system/flash"
```
Is generate archive expected to work out of the box now with LuCI + Nginx after the PR from this issue?
All configs are basically default.
Here are my currently installed packages:
``` base-files - 204.2-r10860-a3ffeb413b busybox - 1.30.1-5 cgi-io - 16 diffutils - 3.7-2 dnsmasq - 2.80-15 dropbear - 2019.78-2 firewall - 2019-11-22-8174814a-1 fstools - 2020-01-05-823faa0f-1 fwtool - 2 getrandom - 2019-06-16-4df34a4d-3 ip6tables - 1.8.3-1 iptables - 1.8.3-1 jansson - 2.12-1 jshn - 2020-01-20-43a103ff-1 jsonfilter - 2018-02-04-c7e938d6-1 kernel - 4.14.162-1-2e88863ccdd594fb8e842df3c25842ee kmod-gpio-button-hotplug - 4.14.162-3 kmod-ip6tables - 4.14.162-1 kmod-ipt-conntrack - 4.14.162-1 kmod-ipt-core - 4.14.162-1 kmod-ipt-nat - 4.14.162-1 kmod-ipt-offload - 4.14.162-1 kmod-leds-gpio - 4.14.162-1 kmod-lib-crc-ccitt - 4.14.162-1 kmod-nf-conntrack - 4.14.162-1 kmod-nf-conntrack6 - 4.14.162-1 kmod-nf-flow - 4.14.162-1 kmod-nf-ipt - 4.14.162-1 kmod-nf-ipt6 - 4.14.162-1 kmod-nf-nat - 4.14.162-1 kmod-nf-reject - 4.14.162-1 kmod-nf-reject6 - 4.14.162-1 kmod-ppp - 4.14.162-1 kmod-pppoe - 4.14.162-1 kmod-pppox - 4.14.162-1 kmod-slhc - 4.14.162-1 libblobmsg-json - 2020-01-20-43a103ff-1 libc - 1.1.24-2 libcap - 2.27-1 libgcc1 - 7.5.0-2 libip4tc2 - 1.8.3-1 libip6tc2 - 1.8.3-1 libiwinfo-lua - 2019-10-16-07315b6f-1 libiwinfo20181126 - 2019-10-16-07315b6f-1 libjson-c2 - 0.12.1-3 libjson-script - 2020-01-20-43a103ff-1 liblua5.1.5 - 5.1.5-3 liblucihttp-lua - 2019-07-05-a34a17d5-1 liblucihttp0 - 2019-07-05-a34a17d5-1 libnl-tiny - 0.1-5 libopenssl-conf - 1.1.1d-2 libopenssl1.1 - 1.1.1d-2 libpcre - 8.43-1 libpthread - 1.1.24-2 librt - 1.1.24-2 libubox20191228 - 2020-01-20-43a103ff-1 libubus-lua - 2019-12-27-041c9d1c-1 libubus20191227 - 2019-12-27-041c9d1c-1 libuci20130104 - 2019-09-01-415f9e48-3 libuclient20160123 - 2019-05-30-3b3e368d-1 libustream-openssl20150806 - 2019-11-05-c9b66682-2 libuuid1 - 2.34-1 libxtables12 - 1.8.3-1 logd - 2019-06-16-4df34a4d-3 lua - 5.1.5-3 luci - git-20.030.27183-66213ef-1 luci-app-firewall - git-20.030.27183-66213ef-1 luci-app-opkg - git-20.030.27183-66213ef-1 luci-base - git-20.030.27183-66213ef-1 luci-lib-ip - 
git-20.030.27183-66213ef-1 luci-lib-jsonc - git-20.030.27183-66213ef-1 luci-lib-nixio - git-20.030.27183-66213ef-1 luci-mod-admin-full - git-20.030.27183-66213ef-1 luci-mod-network - git-20.030.27183-66213ef-1 luci-mod-status - git-20.030.27183-66213ef-1 luci-mod-system - git-20.030.27183-66213ef-1 luci-proto-ipv6 - git-20.030.27183-66213ef-1 luci-proto-ppp - git-20.030.27183-66213ef-1 luci-ssl-nginx - git-20.030.27183-66213ef-1 luci-ssl-openssl - git-20.030.27183-66213ef-1 luci-theme-bootstrap - git-20.030.27183-66213ef-1 mtd - 24 netifd - 2019-08-05-5e02f944-1 nginx-mod-luci-ssl - 1.16.1-1 nginx-ssl - 1.16.1-1 odhcp6c - 2019-01-11-e199804b-16 odhcpd-ipv6only - 2019-12-16-e53fec89-3 openssl-util - 1.1.1d-2 openwrt-keyring - 2019-07-25-8080ef34-1 opkg - 2020-01-25-c09fe209-1 ppp - 2.4.7.git-2019-05-25-2 ppp-mod-pppoe - 2.4.7.git-2019-05-25-2 procd - 2020-01-24-31e4b2df-1 rpcd - 2019-11-10-77ad0de0-1 rpcd-mod-file - 2019-11-10-77ad0de0-1 rpcd-mod-iwinfo - 2019-11-10-77ad0de0-1 rpcd-mod-luci - 20191114 rpcd-mod-rrdns - 20170710 swconfig - 12 ubi-utils - 2.1.1-1 ubox - 2019-06-16-4df34a4d-3 ubus - 2019-12-27-041c9d1c-1 ubusd - 2019-12-27-041c9d1c-1 uci - 2019-09-01-415f9e48-3 uclient-fetch - 2019-05-30-3b3e368d-1 uhttpd - 2019-12-22-5f9ae573-1 urandom-seed - 1.0-1 urngd - 2020-01-21-c7f7b6b6-1 usign - 2019-08-06-5a52b379-1 uwsgi-cgi - 2.0.18-2 uwsgi-cgi-luci-support - 2.0.18-2 zlib - 1.2.11-3 ```
Decided to post here before opening something new since it seemed like a close match to what I'm experiencing... Thanks in advance!
post your nginx config and also luci_uwsgi config
Only thing different from default nginx.conf is I started to mess with using a self-signed cert.
nginx.conf
```
user root;
worker_processes 4;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 0;
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 1G;
    large_client_header_buffers 2 1k;
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 1;
    gzip_proxied any;
    root /www;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;
        server_name localhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        #ssl_prefer_server_ciphers on;
        #ssl_ciphers "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:DHE+AESGCM:DHE:!RSA!aNULL:!eNULL:!LOW:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!CAMELLIA:!SEED";
        ssl_session_tickets off;
        #ssl_certificate /etc/nginx/nginx.cer;
        #ssl_certificate_key /etc/nginx/nginx.key;
        ssl_certificate /etc/nginx/openwrt-luci.crt;
        ssl_certificate_key /etc/nginx/openwrt-luci.key;

        location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
            expires 365d;
        }

        include luci_uwsgi.conf;
    }

    include /etc/nginx/conf.d/*.conf; #this dir is currently empty
}
```
luci_uwsgi.conf
```
location /cgi-bin {
    index index.html;
    uwsgi_param QUERY_STRING $query_string;
    uwsgi_param REQUEST_METHOD $request_method;
    uwsgi_param CONTENT_TYPE $content_type;
    uwsgi_param CONTENT_LENGTH $content_length if_not_empty;
    uwsgi_param REQUEST_URI $request_uri;
    uwsgi_param PATH_INFO $document_uri;
    uwsgi_param SERVER_PROTOCOL $server_protocol;
    uwsgi_param REMOTE_ADDR $remote_addr;
    uwsgi_param REMOTE_PORT $remote_port;
    uwsgi_param SERVER_ADDR $server_addr;
    uwsgi_param SERVER_PORT $server_port;
    uwsgi_param SERVER_NAME $server_name;
    uwsgi_modifier1 9;
    uwsgi_pass unix:////var/run/uwsgi.sock;
}

location /luci-static {
}
```
Remove luci_uwsgi.conf and reinstall the package, or reflash the image (if custom compiled).
Maintainer: @Ansuel
Environment: -
Description: The `uwsgi-cgi-luci-support` package is unable to execute CGI helpers in `/www/cgi-bin/`, breaking firmware upload and backup download on LuCI master. Please refer to https://github.com/openwrt/luci/issues/3140 for a reproducer.