Closed: anthonyrisinger closed this issue 11 years ago
UPDATE: i should note... after disabling ugreen and async it works in prefork mode (but it seems to fully buffer the response before streaming it to the client?)... still need to get async rocking though... definitely a bug there.
UPDATE: also, i'm now using the ${hex[...]} method for PATH_INFO as suggested in the "caching cookbook" and it's working great, though it would still be nice to support automatic dirs of some kind (eg, in my case the cache would then be directly usable as a repository by any static webserver, not just uWSGI).
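For anyone following along, the ${hex[...]} routing var is just the hex encoding of the raw value, so the cache filenames can be reproduced outside uWSGI. A minimal sketch (my own helper name, not a uWSGI API; uWSGI emits uppercase hex digits, as the request logs below show):

```python
import binascii

def cache_key(path_info):
    # Mirror uWSGI's ${hex[PATH_INFO]}: hex-encode the raw request path.
    # The resulting string is what tofile/isfile use as the cache filename.
    return binascii.hexlify(path_info.encode()).decode().upper()

# cache_key("/core/x86_64/core.db")
# matches the "2F636F72652F..." names seen in the logs
```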
Can you check whether multithreading works fine? (please try with high values, like threads = 80, and with tiny stack sizes like thread-stack-size = 512)
It is definitely a bug in the async scheduler. Regarding automatic dirs, i think it could be done (nginx does it too); the only "problem" is doing it in a secure way. Finally, all content that needs to be "transformed" is fully buffered in memory before being sent. I plan to add some kind of "flush" instruction to "program" when data gets sent to the client (so caching to disk can be done after the body has been sent to the client).
sure, i can try the thread stuff later tonight.
the concrete/immediate issue i'm having with full buffering is simply client timeouts... in my case pacman (the Arch package manager), which will sever the connection if no data has been received for 10 seconds, ergo it "fails" for about 2/3 of all packages simply because they take longer than that to download on the proxy.
...could i use the fastrouters at all to achieve this "transparent proxy repo" goal?
Just committed the "flush" instruction. Pay attention because it is pretty "magic":
Suppose a simple app:
import time

def application(e, s):
    s('200 OK', [('Content-Type', 'text/html')])
    for i in range(1, 10):
        time.sleep(1)
        yield "ciao"
curl -N -D /dev/stdout http://localhost:9090/
will print a chunk every second
Now we add a transformation (the toupper):
./uwsgi --plugin transformation_toupper --http-socket :9090 -w output --route-run toupper:
the body will be buffered and curl will receive the whole body after 9 seconds.
Now we apply "flush" before toupper
./uwsgi --plugin transformation_toupper --http-socket :9090 -w output --route-run flush: --route-run toupper:
we will get the output one chunk per second, but the toupper transformation happens after the flush, so its output is never sent to the client (internally the transformation is still applied, so all fine for caching).
Now, here comes the flush magic:
./uwsgi --plugin transformation_toupper --http-socket :9090 -w output --route-run flush: --route-run toupper: --route-run flush:
after the toupper we flush again, and curl will return that buffer too ;)
Again:
./uwsgi --plugin transformation_toupper --http-socket :9090 -w output --route-run flush: --route-run toupper: --route-run flush: --route-run flush: --route-run flush:
we flush the toupper buffer 2 more times and so on.
To avoid damaging your brain, i suggest this simple formula: flush sends the current response buffer in the transformation chain to the client. The only special case is when flush is the first rule in the transformation chain: that means "send the plugin output as soon as possible, but continue honouring the transformation api".
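The rule above can be captured in a toy model (my own sketch of the behaviour described in this thread, not uWSGI internals): a leading flush streams the raw chunks, every other flush emits whatever is buffered at that point, and if no flush appears anywhere the whole buffer goes out at the end of the chain.

```python
def run_chain(chunks, rules):
    """Toy model of the transformation chain + flush semantics."""
    sent = []
    stream = bool(rules) and rules[0] == "flush"  # leading flush: stream raw output
    buf = ""
    for chunk in chunks:
        if stream:
            sent.append(chunk)   # send plugin output as soon as possible...
        buf += chunk             # ...but keep buffering for the chain
    for rule in (rules[1:] if stream else rules):
        if rule == "flush":
            sent.append(buf)     # flush: emit the current response buffer
        elif rule == "toupper":
            buf = buf.upper()    # transform the buffered body
    if "flush" not in rules:
        sent.append(buf)         # no flush anywhere: whole buffer sent at close
    return sent
```

With chunks of "ciao", ["toupper"] yields one big uppercased body at the end, ["flush", "toupper"] streams the raw chunks and the uppercased body never reaches the client, and ["flush", "toupper", "flush"] streams the chunks and then emits the uppercased buffer too, matching the curl runs above.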
heh, not all that crazy :) that's pretty much exactly what i expected... but the interface feels clunky to me... why always a top-level flush instruction? since everything seems to be heading toward key-val lists (and xforms are probably well suited for them), why not instead make flush a common key for anything that wants to be an xform? basically the same effect, except:
./uwsgi --plugin transformation_toupper --http-socket :9090 -w output --route-run flush:-1 --route-run toupper:flush=4KiB --route-run gzip:flush=1MiB --route-run cachestore:name=gz,key=uno,flush=0 --route-run flush:8KiB
...in this case:

- flush: sets the [active] default and emits fully unbuffered (-1) to...
- toupper: emits 4KiB chunks to...
- gzip: emits 1MiB chunks to...
- cachestore: spoils the party by using fully buffered! yikes!
- flush: sets the new [active] default to 8KiB, but it's mostly useless (one possible use case is a complex xform chain where most instructions can handle streaming [better perf/latency], but a couple cannot)

...this makes for a much nicer/simpler API IMO, and opens up the possibility of customizable "flush boundaries" using a "flush driver" of sorts:

- flush=xmpp: smart buffering of streaming XML on message boundaries, so the next xform (oh say, XSLT) always has a well-formed doc to operate on
- flush=300ms: flush at 300ms intervals
- flush=myexpr: --route-define-flusher name=myexpr,expr=matheval(...)
...does that make sense?
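To make the flush=SIZE idea concrete (purely hypothetical, this key does not exist in uWSGI), a transformation carrying it would simply re-chunk its output on a byte boundary instead of fully buffering:

```python
def rechunk(chunks, flush_size):
    """Hypothetical flush=SIZE behaviour: emit output in fixed-size
    pieces instead of one fully buffered blob. flush_size <= 0 means
    fully unbuffered, i.e. pass chunks through untouched."""
    if flush_size <= 0:
        yield from chunks
        return
    buf = ""
    for chunk in chunks:
        buf += chunk
        while len(buf) >= flush_size:
            yield buf[:flush_size]   # flush a full-size piece
            buf = buf[flush_size:]
    if buf:
        yield buf                    # final partial piece at end of body
```

So toupper:flush=4KiB would wrap its output in something like rechunk(..., 4096), while cachestore with flush=0 would keep buffering the whole body.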
Back to the SEGFAULT: can you try the latest code? i have refactored async runqueue management. Regarding more advanced flush, i need to investigate; i have a customer interested in BOSH support (as you have reported xmpp, i suppose you could be interested too), so control over buffering could be really useful.
if I may troll a bit: with all those routing features it looks like "uWSGI config coder" skills will be popping up on linkedin in the not so distant future, maybe even facebook will migrate from PHP to uWSGI ini files (or rather xml, since it's such a big corporation)
;)
apologies for the delay, i got pretty swamped as of late... will try to be concise, lmk if i miss anything important.
judging from the on-wire HTTP reqs (no longer duplicated/etc), my guess is you did indeed solve the core SEGFAULT i reported... bravo :)
alas, other issues are surfacing (if you want new reports so you can close this one, lmk)... here is the config i'm using:
[any@sh dir]$ cat parched.ini
[uwsgi]
http-socket = :8090
processes = 1
master = true
async = 256
ugreen = true
enable-threads = true
mime-file = /etc/mime.types
route-if = isfile:%dcache/${hex[PATH_INFO]} static:%dcache/${hex[PATH_INFO]}
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = tofile:filename=%dcache/${hex[PATH_INFO]}
route = ^/([^/]+)/([^/]+)/([^/]+)$ http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
route-run = break:404 Not Found
...couple things going on:
PATTERN (404s expected)...
REQ1: 200 => 404 => 500 => 500
REQ2: 200 => 404 => 200 => 404 => 200 => 404
[proxycache] NEW: /core/x86_64/core.db => 2F636F72652F7838365F36342F636F72652E6462
[pid: 30374|app: -1|req: -1/1] 127.0.0.1 () {24 vars in 323 bytes} [Tue Apr 23 23:44:24 2013] GET /core/x86_64/core.db => generated 107993 bytes in 196 msecs via route() (HTTP/1.1 200) 8 headers in 252 bytes (76 switches on core 255)
[proxycache] NEW: /core/x86_64/core.db.sig => 2F636F72652F7838365F36342F636F72652E64622E736967
[pid: 30374|app: -1|req: -1/2] 127.0.0.1 () {24 vars in 331 bytes} [Tue Apr 23 23:44:24 2013] GET /core/x86_64/core.db.sig => generated 345 bytes in 72 msecs via route() (HTTP/1.1 404) 5 headers in 153 bytes (3 switches on core 255)
[proxycache] NEW: /extra/x86_64/extra.db => 2F65787472612F7838365F36342F65787472612E6462
[pid: 30374|app: -1|req: -1/3] 127.0.0.1 () {24 vars in 327 bytes} [Tue Apr 23 23:44:24 2013] GET /extra/x86_64/extra.db => generated 0 bytes in 863 msecs via route() (HTTP/1.1 500) 8 headers in 253 bytes (1070 switches on core 255)
[proxycache] NEW: /community/x86_64/community.db => 2F636F6D6D756E6974792F7838365F36342F636F6D6D756E6974792E6462
[pid: 30374|app: -1|req: -1/4] 127.0.0.1 () {24 vars in 343 bytes} [Tue Apr 23 23:44:25 2013] GET /community/x86_64/community.db => generated 0 bytes in 984 msecs via route() (HTTP/1.1 500) 8 headers in 253 bytes (1359 switches on core 255)
[pid: 30374|app: -1|req: -1/5] 127.0.0.1 () {24 vars in 323 bytes} [Tue Apr 23 23:44:51 2013] GET /core/x86_64/core.db => generated 107993 bytes in 0 msecs via sendfile() (HTTP/1.1 200) 2 headers in 89 bytes (1 switches on core 255)
[proxycache] NEW: /core/x86_64/core.db.sig => 2F636F72652F7838365F36342F636F72652E64622E736967
[pid: 30374|app: -1|req: -1/6] 127.0.0.1 () {24 vars in 331 bytes} [Tue Apr 23 23:44:51 2013] GET /core/x86_64/core.db.sig => generated 345 bytes in 92 msecs via route() (HTTP/1.1 404) 5 headers in 153 bytes (3 switches on core 255)
[pid: 30374|app: -1|req: -1/7] 127.0.0.1 () {24 vars in 327 bytes} [Tue Apr 23 23:44:51 2013] GET /extra/x86_64/extra.db => generated 1555043 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 2 headers in 90 bytes (5 switches on core 255)
[proxycache] NEW: /extra/x86_64/extra.db.sig => 2F65787472612F7838365F36342F65787472612E64622E736967
[pid: 30374|app: -1|req: -1/8] 127.0.0.1 () {24 vars in 335 bytes} [Tue Apr 23 23:44:51 2013] GET /extra/x86_64/extra.db.sig => generated 345 bytes in 73 msecs via route() (HTTP/1.1 404) 5 headers in 153 bytes (3 switches on core 255)
[pid: 30374|app: -1|req: -1/9] 127.0.0.1 () {24 vars in 343 bytes} [Tue Apr 23 23:44:51 2013] GET /community/x86_64/community.db => generated 1987797 bytes in 1 msecs via sendfile() (HTTP/1.1 200) 2 headers in 90 bytes (5 switches on core 255)
[proxycache] NEW: /community/x86_64/community.db.sig => 2F636F6D6D756E6974792F7838365F36342F636F6D6D756E6974792E64622E736967
[pid: 30374|app: -1|req: -1/10] 127.0.0.1 () {24 vars in 351 bytes} [Tue Apr 23 23:44:51 2013] GET /community/x86_64/community.db.sig => generated 345 bytes in 74 msecs via route() (HTTP/1.1 404) 5 headers in 153 bytes (3 switches on core 255)
%dcache did not exist (i forgot to mkdir cache):
[proxycache] NEW: /extra/x86_64/extra.db => 2F65787472612F7838365F36342F65787472612E6462
open("/home/anthony/devel/local/parched/demo/cache/2F65787472612F7838365F36342F65787472612E6462"): No such file or directory [plugins/transformation_tofile/tofile.c line 31]
[proxycache] NEW: /extra/x86_64/extra.db => 2F65787472612F7838365F36342F65787472612E6462
!!! uWSGI process 30324 got Segmentation Fault !!!
here are some HTTP transactions via:
[any@sh dir]$ tcpflow -FX -C -i lo 'host localhost and port 8090' | sed -nr '/^(HTTP|GET|[[:alpha:]][-[:alnum:]_]{3,}:)/{s,^(HTTP|GET),\n\0,;p}'
[any@sh dir]$ tcpflow -FX -C -i lan0 'host mirror.rit.edu and port 80' | sed -nr '/^(HTTP|GET|[[:alpha:]][-[:alnum:]_]{3,}:)/{s,^(HTTP|GET),\n\0,;p}'
FIRST (cold cache, error 500s [in logs anyway])
====================(client)
:: Synchronizing package databases...
core 105.5 KiB 0.00B/s 00:00 [######################] 100%
error: failed retrieving file 'extra.db' from localhost:8090 : transfer closed with 905443 bytes remaining to read
error: failed to update extra (download library error)
error: failed retrieving file 'community.db' from localhost:8090 : transfer closed with 1338197 bytes remaining to read
error: failed to update community (download library error)
:: Starting full system upgrade...
there is nothing to do
====================(client=>proxy)
GET /core/x86_64/core.db HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Accept-Ranges: bytes
ETag: "3027122285"
Last-Modified: Tue, 23 Apr 2013 18:54:33 GMT
Content-Length: 107993
Connection: close
Date: Wed, 24 Apr 2013 05:46:04 GMT
Server: lighttpd/1.4.31
GET /core/x86_64/core.db.sig HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 404 Not Found
Content-Type: text/html
Content-Length: 345
Connection: close
Date: Wed, 24 Apr 2013 05:46:04 GMT
Server: lighttpd/1.4.31
GET /extra/x86_64/extra.db HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Accept-Ranges: bytes
ETag: "3033182160"
Last-Modified: Tue, 23 Apr 2013 21:35:31 GMT
Content-Length: 1555043
Connection: close
Date: Wed, 24 Apr 2013 05:46:04 GMT
Server: lighttpd/1.4.31
GET /community/x86_64/community.db HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Accept-Ranges: bytes
ETag: "3871829817"
Last-Modified: Tue, 23 Apr 2013 22:38:20 GMT
Content-Length: 1987797
Connection: close
Date: Wed, 24 Apr 2013 05:46:06 GMT
Server: lighttpd/1.4.31
====================(proxy=>upstream)
GET /archlinux/core/os/x86_64/core.db HTTP/1.0
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Accept: */*
Host: mirror.rit.edu
Connection: close
X-Forwarded-For: 127.0.0.1
HTTP/1.0 200 OK
Content-Type: application/octet-stream
Accept-Ranges: bytes
ETag: "3027122285"
Last-Modified: Tue, 23 Apr 2013 18:54:33 GMT
Content-Length: 107993
Connection: close
Date: Wed, 24 Apr 2013 05:46:04 GMT
Server: lighttpd/1.4.31
GET /archlinux/core/os/x86_64/core.db.sig HTTP/1.0
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Accept: */*
Host: mirror.rit.edu
Connection: close
X-Forwarded-For: 127.0.0.1
HTTP/1.0 404 Not Found
Content-Type: text/html
Content-Length: 345
Connection: close
Date: Wed, 24 Apr 2013 05:46:04 GMT
Server: lighttpd/1.4.31
GET /archlinux/extra/os/x86_64/extra.db HTTP/1.0
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Accept: */*
Host: mirror.rit.edu
Connection: close
X-Forwarded-For: 127.0.0.1
HTTP/1.0 200 OK
Content-Type: application/octet-stream
Accept-Ranges: bytes
ETag: "3033182160"
Last-Modified: Tue, 23 Apr 2013 21:35:31 GMT
Content-Length: 1555043
Connection: close
Date: Wed, 24 Apr 2013 05:46:04 GMT
Server: lighttpd/1.4.31
GET /archlinux/community/os/x86_64/community.db HTTP/1.0
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Accept: */*
Host: mirror.rit.edu
Connection: close
X-Forwarded-For: 127.0.0.1
HTTP/1.0 200 OK
Content-Type: application/octet-stream
Accept-Ranges: bytes
ETag: "3871829817"
Last-Modified: Tue, 23 Apr 2013 22:38:20 GMT
Content-Length: 1987797
Connection: close
Date: Wed, 24 Apr 2013 05:46:06 GMT
Server: lighttpd/1.4.31
SECOND (warm cache, no issues [tofile: never invoked, sendfile only])
====================(client)
:: Synchronizing package databases...
core 105.5 KiB 0.00B/s 00:00 [######################] 100%
extra 1518.6 KiB 297M/s 00:00 [######################] 100%
community 1941.2 KiB 379M/s 00:00 [######################] 100%
:: Starting full system upgrade...
there is nothing to do
====================(client=>proxy)
GET /core/x86_64/core.db HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 200 OK
Content-Length: 107993
Last-Modified: Wed, 24 Apr 2013 05:46:04 GMT
GET /core/x86_64/core.db.sig HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 404 Not Found
Content-Type: text/html
Content-Length: 345
Connection: close
Date: Wed, 24 Apr 2013 05:51:38 GMT
Server: lighttpd/1.4.31
GET /extra/x86_64/extra.db HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 200 OK
Content-Length: 1555043
Last-Modified: Wed, 24 Apr 2013 05:46:06 GMT
GET /extra/x86_64/extra.db.sig HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 404 Not Found
Content-Type: text/html
Content-Length: 345
Connection: close
Date: Wed, 24 Apr 2013 05:51:38 GMT
Server: lighttpd/1.4.31
GET /community/x86_64/community.db HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 200 OK
Content-Length: 1987797
Last-Modified: Wed, 24 Apr 2013 05:46:07 GMT
GET /community/x86_64/community.db.sig HTTP/1.1
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Host: localhost:8090
Accept: */*
HTTP/1.1 404 Not Found
Content-Type: text/html
Content-Length: 345
Connection: close
Date: Wed, 24 Apr 2013 05:51:38 GMT
Server: lighttpd/1.4.31
====================(proxy=>upstream)
GET /archlinux/core/os/x86_64/core.db.sig HTTP/1.0
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Accept: */*
Host: mirror.rit.edu
Connection: close
X-Forwarded-For: 127.0.0.1
HTTP/1.0 404 Not Found
Content-Type: text/html
Content-Length: 345
Connection: close
Date: Wed, 24 Apr 2013 05:51:38 GMT
Server: lighttpd/1.4.31
GET /archlinux/extra/os/x86_64/extra.db.sig HTTP/1.0
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Accept: */*
Host: mirror.rit.edu
Connection: close
X-Forwarded-For: 127.0.0.1
HTTP/1.0 404 Not Found
Content-Type: text/html
Content-Length: 345
Connection: close
Date: Wed, 24 Apr 2013 05:51:38 GMT
Server: lighttpd/1.4.31
GET /archlinux/community/os/x86_64/community.db.sig HTTP/1.0
User-Agent: pacman/4.1.0 (Linux x86_64) libalpm/8.0.0
Accept: */*
Host: mirror.rit.edu
Connection: close
X-Forwarded-For: 127.0.0.1
HTTP/1.0 404 Not Found
Content-Type: text/html
Content-Length: 345
Connection: close
Date: Wed, 24 Apr 2013 05:51:38 GMT
Server: lighttpd/1.4.31
BAH!
so i wrote a thing and thought it was "commented"... but it was just a preview, i never pressed comment... annoying.
the 5 second summary: i dropped a bunch of flush: directives in the config, pacman worked, the cache worked (immediate force reinstall to confirm), but uWSGI segfaulted 3 times in the process. so clearly still a prob, but cool nonetheless :)
full log follows...
[uWSGI] getting INI configuration from parched.ini
;uWSGI instance configuration
[uwsgi]
ini = parched.ini
http-socket = :8090
processes = 1
master = true
async = 256
ugreen = true
enable-threads = true
mime-file = /etc/mime.types
route-if = isfile:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = flush:
route-run = tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = flush:
route = ^/([^/]+)/([^/]+)/([^/]+)$ http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
route-run = flush:
route-run = break:404 Not Found
route-run = flush:
show-config = true
;end of configuration
*** Starting uWSGI 1.9.8 (64bit) on [Wed Apr 24 02:30:42 2013] ***
compiled with version: 4.8.0 20130411 (prerelease) on 23 April 2013 23:33:55
os: Linux-3.8.8-1-ARCH #1 SMP PREEMPT Wed Apr 17 10:57:28 CEST 2013
nodename: tower.xtfx.me
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/anthony/devel/local/parched
detected binary path: /home/anthony/devel/upstream/uwsgi/uwsgi
*** dumping internal routing table ***
[rule: 0] subject: /home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} func: isfile action: static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 1] action: log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
[rule: 2] action: flush:
[rule: 3] action: tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 4] action: flush:
[rule: 5] subject: path_info regexp: ^/([^/]+)/([^/]+)/([^/]+)$ action: http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
[rule: 6] action: flush:
[rule: 7] action: break:404 Not Found
[rule: 8] action: flush:
*** end of the internal routing table ***
your processes number limit is 31089
your memory page size is 4096 bytes
detected max file descriptor number: 1024
building mime-types dictionary from file /etc/mime.types...1017 entry found
- async cores set to 256 - fd table size: 1024
lock engine: pthread robust mutexes
initializing 256 uGreen threads with stack size of 262144 (256 KB)
uwsgi socket 0 bound to TCP address :8090 fd 3
Python version: 2.7.4 (default, Apr 6 2013, 19:20:36) [GCC 4.8.0]
Python main interpreter initialized at 0x1dcfcc0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 5359616 bytes (5234 KB) for 256 cores
*** Operational MODE: async ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 2923)
spawned uWSGI worker 1 (pid: 2925, cores: 256)
[proxycache] NEW: /core/x86_64/core.db => 2F636F72652F7838365F36342F636F72652E6462
[pid: 2925|app: -1|req: -1/1] 127.0.0.1 () {24 vars in 323 bytes} [Wed Apr 24 02:30:56 2013] GET /core/x86_64/core.db => generated 215986 bytes in 355 msecs via route() (HTTP/1.1 200) 8 headers in 252 bytes (75 switches on core 255)
[proxycache] NEW: /core/x86_64/core.db.sig => 2F636F72652F7838365F36342F636F72652E64622E736967
[pid: 2925|app: -1|req: -1/2] 127.0.0.1 () {24 vars in 331 bytes} [Wed Apr 24 02:30:57 2013] GET /core/x86_64/core.db.sig => generated 690 bytes in 97 msecs via route() (HTTP/1.1 404) 5 headers in 153 bytes (3 switches on core 255)
[proxycache] NEW: /extra/x86_64/extra.db => 2F65787472612F7838365F36342F65787472612E6462
!!! uWSGI process 2925 got Segmentation Fault !!!
*** backtrace of 2925 ***
uwsgi(uwsgi_backtrace+0x31) [0x462c74]
uwsgi(uwsgi_segfault+0x2e) [0x462d34]
/usr/lib/libc.so.6(+0x35240) [0x7fb0dd1af240]
/usr/lib/libc.so.6(+0x8a4cb) [0x7fb0dd2044cb]
uwsgi(uwsgi_buffer_append+0x164) [0x44d01d]
uwsgi(uwsgi_response_write_body_do+0xe4) [0x4503a2]
uwsgi() [0x470084]
uwsgi(uwsgi_apply_transformations+0x50) [0x4607bc]
uwsgi(uwsgi_close_request+0x4b) [0x41cca0]
uwsgi(async_loop+0x747) [0x440860]
uwsgi(uwsgi_ignition+0x1dd) [0x467d3d]
uwsgi(uwsgi_start+0x1cb5) [0x467b56]
uwsgi(main+0x2c5d) [0x465e9a]
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0x7fb0dd19ba15]
uwsgi() [0x41b529]
*** end of backtrace ***
DAMN ! worker 1 (pid: 2925) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 2929)
[proxycache] NEW: /extra/x86_64/extra.db.sig => 2F65787472612F7838365F36342F65787472612E64622E736967
[pid: 2929|app: -1|req: -1/3] 127.0.0.1 () {48 vars in 662 bytes} [Wed Apr 24 02:31:03 2013] GET /extra/x86_64/extra.db.sig => generated 690 bytes in 86 msecs via route() (HTTP/1.1 404) 5 headers in 153 bytes (3 switches on core 255)
[proxycache] NEW: /community/x86_64/community.db => 2F636F6D6D756E6974792F7838365F36342F636F6D6D756E6974792E6462
!!! uWSGI process 2929 got Segmentation Fault !!!
*** backtrace of 2929 ***
uwsgi(uwsgi_backtrace+0x31) [0x462c74]
uwsgi(uwsgi_segfault+0x2e) [0x462d34]
/usr/lib/libc.so.6(+0x35240) [0x7fb0dd1af240]
/usr/lib/libc.so.6(+0x8a4cb) [0x7fb0dd2044cb]
uwsgi(uwsgi_buffer_append+0x164) [0x44d01d]
uwsgi(uwsgi_response_write_body_do+0xe4) [0x4503a2]
uwsgi() [0x470084]
uwsgi(uwsgi_apply_transformations+0x50) [0x4607bc]
uwsgi(uwsgi_close_request+0x4b) [0x41cca0]
uwsgi(async_loop+0x747) [0x440860]
uwsgi(uwsgi_ignition+0x1dd) [0x467d3d]
uwsgi(uwsgi_start+0x1cb5) [0x467b56]
uwsgi(main+0x2c5d) [0x465e9a]
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0x7fb0dd19ba15]
uwsgi() [0x41b529]
*** end of backtrace ***
DAMN ! worker 1 (pid: 2929) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 2930)
[proxycache] NEW: /community/x86_64/community.db.sig => 2F636F6D6D756E6974792F7838365F36342F636F6D6D756E6974792E64622E736967
[pid: 2930|app: -1|req: -1/4] 127.0.0.1 () {48 vars in 694 bytes} [Wed Apr 24 02:31:07 2013] GET /community/x86_64/community.db.sig => generated 690 bytes in 73 msecs via route() (HTTP/1.1 404) 5 headers in 153 bytes (3 switches on core 255)
[proxycache] NEW: /extra/x86_64/xdg-user-dirs-0.14-3-x86_64.pkg.tar.xz => 2F65787472612F7838365F36342F7864672D757365722D646972732D302E31342D332D7838365F36342E706B672E7461722E787A
[pid: 2930|app: -1|req: -1/5] 127.0.0.1 () {24 vars in 387 bytes} [Wed Apr 24 02:31:11 2013] GET /extra/x86_64/xdg-user-dirs-0.14-3-x86_64.pkg.tar.xz => generated 51408 bytes in 121 msecs via route() (HTTP/1.1 200) 8 headers in 251 bytes (18 switches on core 255)
[proxycache] NEW: /extra/x86_64/mousetweaks-3.8.0-2-x86_64.pkg.tar.xz => 2F65787472612F7838365F36342F6D6F757365747765616B732D332E382E302D322D7838365F36342E706B672E7461722E787A
[pid: 2930|app: -1|req: -1/6] 127.0.0.1 () {24 vars in 385 bytes} [Wed Apr 24 02:31:11 2013] GET /extra/x86_64/mousetweaks-3.8.0-2-x86_64.pkg.tar.xz => generated 164048 bytes in 229 msecs via route() (HTTP/1.1 200) 8 headers in 251 bytes (57 switches on core 255)
[proxycache] NEW: /extra/x86_64/libmusicbrainz5-5.0.1-1-x86_64.pkg.tar.xz => 2F65787472612F7838365F36342F6C69626D75736963627261696E7A352D352E302E312D312D7838365F36342E706B672E7461722E787A
!!! uWSGI process 2930 got Segmentation Fault !!!
*** backtrace of 2930 ***
uwsgi(uwsgi_backtrace+0x31) [0x462c74]
uwsgi(uwsgi_segfault+0x2e) [0x462d34]
/usr/lib/libc.so.6(+0x35240) [0x7fb0dd1af240]
/usr/lib/libc.so.6(+0x8a653) [0x7fb0dd204653]
uwsgi(uwsgi_buffer_append+0x164) [0x44d01d]
uwsgi(uwsgi_response_write_body_do+0xe4) [0x4503a2]
uwsgi() [0x470084]
uwsgi(uwsgi_apply_transformations+0x50) [0x4607bc]
uwsgi(uwsgi_close_request+0x4b) [0x41cca0]
uwsgi(async_loop+0x747) [0x440860]
uwsgi(uwsgi_ignition+0x1dd) [0x467d3d]
uwsgi(uwsgi_start+0x1cb5) [0x467b56]
uwsgi(main+0x2c5d) [0x465e9a]
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0x7fb0dd19ba15]
uwsgi() [0x41b529]
*** end of backtrace ***
DAMN ! worker 1 (pid: 2930) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 2931)
[proxycache] NEW: /extra/x86_64/sushi-3.8.1-1-x86_64.pkg.tar.xz => 2F65787472612F7838365F36342F73757368692D332E382E312D312D7838365F36342E706B672E7461722E787A
[pid: 2931|app: -1|req: -1/7] 127.0.0.1 () {48 vars in 766 bytes} [Wed Apr 24 02:31:12 2013] GET /extra/x86_64/sushi-3.8.1-1-x86_64.pkg.tar.xz => generated 132576 bytes in 193 msecs via route() (HTTP/1.1 200) 8 headers in 251 bytes (47 switches on core 255)
[proxycache] NEW: /extra/x86_64/xdg-user-dirs-gtk-0.10-1-x86_64.pkg.tar.xz => 2F65787472612F7838365F36342F7864672D757365722D646972732D67746B2D302E31302D312D7838365F36342E706B672E7461722E787A
[pid: 2931|app: -1|req: -1/8] 127.0.0.1 () {24 vars in 395 bytes} [Wed Apr 24 02:31:12 2013] GET /extra/x86_64/xdg-user-dirs-gtk-0.10-1-x86_64.pkg.tar.xz => generated 74232 bytes in 123 msecs via route() (HTTP/1.1 200) 8 headers in 251 bytes (27 switches on core 255)
definitely another bug, i will check it in the next few hours; in the meantime, can you check if using multiprocess and multithread changes something?
...ok so i lost all my comment again (thanks github)... ill be writing offline then c/p from now on...
SUMMARY: 1 proc seems to work (except the first time, once [mentioned below]), but threads cause probs pretty quick :( ...
2 PROCESSES + 1 THREADS [OK]
(note... even though it worked here, the first time it failed with a 404 [but cached still]... could not repro afterwards)
[any@sh ~]$ uwsgi --ini parched.ini --processes 2 --threads 1 --show-config
[uWSGI] getting INI configuration from parched.ini
;uWSGI instance configuration
[uwsgi]
ini = parched.ini
http-socket = :8090
processes = 1
master = true
async = 256
ugreen = true
enable-threads = true
mime-file = /etc/mime.types
route-if = isfile:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = flush:
route-run = tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = flush:
route = ^/([^/]+)/([^/]+)/([^/]+)$ http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
route-run = flush:
route-run = break:404 Not Found
route-run = flush:
processes = 2
threads = 1
show-config = true
;end of configuration
*** Starting uWSGI 1.9.8 (64bit) on [Wed Apr 24 22:40:27 2013] ***
compiled with version: 4.8.0 20130411 (prerelease) on 23 April 2013 23:33:55
os: Linux-3.8.8-1-ARCH #1 SMP PREEMPT Wed Apr 17 10:57:28 CEST 2013
nodename: tower.xtfx.me
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/anthony/devel/local/parched
detected binary path: /home/anthony/devel/upstream/uwsgi/uwsgi
*** dumping internal routing table ***
[rule: 0] subject: /home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} func: isfile action: static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 1] action: log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
[rule: 2] action: flush:
[rule: 3] action: tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 4] action: flush:
[rule: 5] subject: path_info regexp: ^/([^/]+)/([^/]+)/([^/]+)$ action: http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
[rule: 6] action: flush:
[rule: 7] action: break:404 Not Found
[rule: 8] action: flush:
*** end of the internal routing table ***
your processes number limit is 31089
your memory page size is 4096 bytes
detected max file descriptor number: 1024
building mime-types dictionary from file /etc/mime.types...1017 entry found
- async cores set to 256 - fd table size: 1024
lock engine: pthread robust mutexes
initializing 256 uGreen threads with stack size of 262144 (256 KB)
uwsgi socket 0 bound to TCP address :8090 fd 3
Python version: 2.7.4 (default, Apr 6 2013, 19:20:36) [GCC 4.8.0]
Python main interpreter initialized at 0x1bcbda0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 8039424 bytes (7851 KB) for 512 cores
*** Operational MODE: preforking+async ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 4699)
spawned uWSGI worker 1 (pid: 4701, cores: 256)
spawned uWSGI worker 2 (pid: 4702, cores: 256)
[proxycache] NEW: /core/x86_64/core.db => 2F636F72652F7838365F36342F636F72652E6462
[pid: 4701|app: -1|req: -1/1] 127.0.0.1 () {24 vars in 293 bytes} [Wed Apr 24 22:40:30 2013] GET /core/x86_64/core.db => generated 215986 bytes in 223 msecs via route() (HTTP/1.1 200) 8 headers in 252 bytes (75 switches on core 255)
1 PROCESSES + 2 THREADS [SIGSEGV+LOOP]
(client never disconnects! it doesn't see the failing process)
[any@sh ~]$ uwsgi --ini parched.ini --processes 1 --threads 2 --show-config
[uWSGI] getting INI configuration from parched.ini
;uWSGI instance configuration
[uwsgi]
ini = parched.ini
http-socket = :8090
processes = 1
master = true
async = 256
ugreen = true
enable-threads = true
mime-file = /etc/mime.types
route-if = isfile:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = flush:
route-run = tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = flush:
route = ^/([^/]+)/([^/]+)/([^/]+)$ http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
route-run = flush:
route-run = break:404 Not Found
route-run = flush:
processes = 1
threads = 2
show-config = true
;end of configuration
*** Starting uWSGI 1.9.8 (64bit) on [Wed Apr 24 22:42:18 2013] ***
compiled with version: 4.8.0 20130411 (prerelease) on 23 April 2013 23:33:55
os: Linux-3.8.8-1-ARCH #1 SMP PREEMPT Wed Apr 17 10:57:28 CEST 2013
nodename: tower.xtfx.me
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/anthony/devel/local/parched
detected binary path: /home/anthony/devel/upstream/uwsgi/uwsgi
*** dumping internal routing table ***
[rule: 0] subject: /home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} func: isfile action: static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 1] action: log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
[rule: 2] action: flush:
[rule: 3] action: tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 4] action: flush:
[rule: 5] subject: path_info regexp: ^/([^/]+)/([^/]+)/([^/]+)$ action: http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
[rule: 6] action: flush:
[rule: 7] action: break:404 Not Found
[rule: 8] action: flush:
*** end of the internal routing table ***
your processes number limit is 31089
your memory page size is 4096 bytes
detected max file descriptor number: 1024
building mime-types dictionary from file /etc/mime.types...1017 entry found
- async cores set to 256 - fd table size: 1024
lock engine: pthread robust mutexes
initializing 256 uGreen threads with stack size of 262144 (256 KB)
uwsgi socket 0 bound to TCP address :8090 fd 3
Python version: 2.7.4 (default, Apr 6 2013, 19:20:36) [GCC 4.8.0]
Python main interpreter initialized at 0xd98da0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 165824 bytes (161 KB) for 2 cores
*** Operational MODE: threaded ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 4787)
spawned uWSGI worker 1 (pid: 4789, cores: 2)
!!! uWSGI process 4789 got Segmentation Fault !!!
*** backtrace of 4789 ***
uwsgi(uwsgi_backtrace+0x31) [0x462c74]
uwsgi(uwsgi_segfault+0x2e) [0x462d34]
/usr/lib/libc.so.6(+0x35240) [0x7f644b576240]
uwsgi(wsgi_req_setup+0x1d) [0x41d9cf]
uwsgi(async_loop+0x257) [0x440370]
uwsgi(uwsgi_ignition+0x1dd) [0x467d3d]
uwsgi(uwsgi_start+0x1cb5) [0x467b56]
uwsgi(main+0x2c5d) [0x465e9a]
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0x7f644b562a15]
uwsgi() [0x41b529]
*** end of backtrace ***
DAMN ! worker 1 (pid: 4789) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 4792)
!!! uWSGI process 4792 got Segmentation Fault !!!
*** backtrace of 4792 ***
uwsgi(uwsgi_backtrace+0x31) [0x462c74]
uwsgi(uwsgi_segfault+0x2e) [0x462d34]
/usr/lib/libc.so.6(+0x35240) [0x7f644b576240]
uwsgi(wsgi_req_setup+0x1d) [0x41d9cf]
uwsgi(async_loop+0x257) [0x440370]
uwsgi(uwsgi_ignition+0x1dd) [0x467d3d]
uwsgi(uwsgi_start+0x1cb5) [0x467b56]
uwsgi(main+0x2c5d) [0x465e9a]
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0x7f644b562a15]
uwsgi() [0x41b529]
*** end of backtrace ***
DAMN ! worker 1 (pid: 4792) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 4793)
!!! uWSGI process 4793 got Segmentation Fault !!!
*** backtrace of 4793 ***
uwsgi(uwsgi_backtrace+0x31) [0x462c74]
uwsgi(uwsgi_segfault+0x2e) [0x462d34]
/usr/lib/libc.so.6(+0x35240) [0x7f644b576240]
uwsgi(wsgi_req_setup+0x1d) [0x41d9cf]
uwsgi(async_loop+0x257) [0x440370]
uwsgi(uwsgi_ignition+0x1dd) [0x467d3d]
uwsgi(uwsgi_start+0x1cb5) [0x467b56]
uwsgi(main+0x2c5d) [0x465e9a]
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0x7f644b562a15]
uwsgi() [0x41b529]
*** end of backtrace ***
DAMN ! worker 1 (pid: 4793) died :( trying respawn ...
worker respawning too fast !!! i have to sleep a bit (2 seconds)...
2 PROCESSES + 2 THREADS [SIGSEGV]
[any@sh ~]$ uwsgi --ini parched.ini --processes 2 --threads 2 --show-config
[uWSGI] getting INI configuration from parched.ini
;uWSGI instance configuration
[uwsgi]
ini = parched.ini
http-socket = :8090
processes = 1
master = true
async = 256
ugreen = true
enable-threads = true
mime-file = /etc/mime.types
route-if = isfile:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = flush:
route-run = tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = flush:
route = ^/([^/]+)/([^/]+)/([^/]+)$ http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
route-run = flush:
route-run = break:404 Not Found
route-run = flush:
processes = 2
threads = 2
show-config = true
;end of configuration
*** Starting uWSGI 1.9.8 (64bit) on [Wed Apr 24 22:42:49 2013] ***
compiled with version: 4.8.0 20130411 (prerelease) on 23 April 2013 23:33:55
os: Linux-3.8.8-1-ARCH #1 SMP PREEMPT Wed Apr 17 10:57:28 CEST 2013
nodename: tower.xtfx.me
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/anthony/devel/local/parched
detected binary path: /home/anthony/devel/upstream/uwsgi/uwsgi
*** dumping internal routing table ***
[rule: 0] subject: /home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} func: isfile action: static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 1] action: log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
[rule: 2] action: flush:
[rule: 3] action: tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 4] action: flush:
[rule: 5] subject: path_info regexp: ^/([^/]+)/([^/]+)/([^/]+)$ action: http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
[rule: 6] action: flush:
[rule: 7] action: break:404 Not Found
[rule: 8] action: flush:
*** end of the internal routing table ***
your processes number limit is 31089
your memory page size is 4096 bytes
detected max file descriptor number: 1024
building mime-types dictionary from file /etc/mime.types...1017 entry found
- async cores set to 256 - fd table size: 1024
lock engine: pthread robust mutexes
initializing 256 uGreen threads with stack size of 262144 (256 KB)
uwsgi socket 0 bound to TCP address :8090 fd 3
Python version: 2.7.4 (default, Apr 6 2013, 19:20:36) [GCC 4.8.0]
Python main interpreter initialized at 0x2636da0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 248736 bytes (242 KB) for 4 cores
*** Operational MODE: preforking+threaded ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 4821)
spawned uWSGI worker 1 (pid: 4823, cores: 2)
spawned uWSGI worker 2 (pid: 4824, cores: 2)
!!! uWSGI process 4824 got Segmentation Fault !!!
[proxycache] NEW: /core/x86_64/core.db => 2F636F72652F7838365F36342F636F72652E6462
!!! uWSGI process 4823 got Segmentation Fault !!!
*** backtrace of 4824 ***
uwsgi(uwsgi_backtrace+0x31) [0x462c74]
uwsgi(uwsgi_segfault+0x2e) [0x462d34]
/usr/lib/libc.so.6(+0x35240) [0x7f12e663c240]
uwsgi(wsgi_req_setup+0x1d) [0x41d9cf]
uwsgi(async_loop+0x257) [0x440370]
uwsgi(uwsgi_ignition+0x1dd) [0x467d3d]
uwsgi(uwsgi_start+0x1cb5) [0x467b56]
uwsgi(main+0x2c5d) [0x465e9a]
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0x7f12e6628a15]
uwsgi() [0x41b529]
*** end of backtrace ***
*** backtrace of 4823 ***
uwsgi(uwsgi_backtrace+0x31) [0x462c74]
uwsgi(uwsgi_segfault+0x2e) [0x462d34]
/usr/lib/libc.so.6(+0x35240) [0x7f12e663c240]
uwsgi(async_add_fd_write+0x1e) [0x43ff60]
uwsgi() [0x44007a]
uwsgi(uwsgi_proxy_nb+0x78) [0x444ec8]
uwsgi() [0x4a5901]
uwsgi(uwsgi_apply_routes_do+0x1e7) [0x46f687]
uwsgi(uwsgi_apply_routes+0x76) [0x46f7d4]
uwsgi(u_green_request+0x18) [0x4a2448]
/usr/lib/libc.so.6(+0x45ff0) [0x7f12e664cff0]
*** end of backtrace ***
DAMN ! worker 1 (pid: 4823) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 4836)
DAMN ! worker 2 (pid: 4824) died :( trying respawn ...
Respawned uWSGI worker 2 (new pid: 4837)
when you test with multithreading you have to remove async and ugreen because they take precedence (though the thread initialization still continues). There is something seriously wrong with plain async mode + routing (gevent and coroae work normally).
just as a reference:
[uwsgi]
http-socket = :8090
processes = 1
master = true
mime-file = /etc/mime.types
gevent = 10
route-if = isfile:%dcache/${hex[PATH_INFO]} static:%dcache/${hex[PATH_INFO]}
route-run = flush:
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = tofile:filename=%dcache/${hex[PATH_INFO]}
route = ^/([^/]+)/([^/]+)/([^/]+)$ http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
works as well
[uwsgi]
http-socket = :8090
processes = 1
master = true
mime-file = /etc/mime.types
threads = 10
route-if = isfile:%dcache/${hex[PATH_INFO]} static:%dcache/${hex[PATH_INFO]}
route-run = flush:
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = tofile:filename=%dcache/${hex[PATH_INFO]}
route = ^/([^/]+)/([^/]+)/([^/]+)$ http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
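As a side note on the ${hex[PATH_INFO]} keys used in these configs: the routing variable is just the uppercase hex encoding of the raw request path, so cache filenames can be computed (or decoded back to paths) outside uWSGI. A minimal Python sketch of the mapping (the helper names are mine), consistent with the [proxycache] log lines in the transcripts:

```python
def path_to_cache_key(path_info: str) -> str:
    """Encode a request path the way ${hex[PATH_INFO]} does."""
    return path_info.encode("utf-8").hex().upper()

def cache_key_to_path(key: str) -> str:
    """Recover the original PATH_INFO from a cache filename."""
    return bytes.fromhex(key).decode("utf-8")

# Matches the [proxycache] log line in the transcript:
#   /core/x86_64/core.db => 2F636F72652F7838365F36342F636F72652E6462
print(path_to_cache_key("/core/x86_64/core.db"))
```

This is what makes the cache directory usable by any static webserver once decoded back to real paths.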
Ok, today i invested hours on improving async modes. Now all should work with uGreen, gevent, stackless, greenlet and ruby fibers. Now i would like to add setjmp/longjmp as a fallback (where ugreen is not available, like OpenBSD or embedded systems)
nice :) thanks for the info about the precedence of async... makes sense.
everything seems to be working well now! i tried the 3 combos of threads/procs, and pure async, with and without flushing, and there were no faults.
however... while removing all the flush directives i had in mine to match the single one you referenced, i accidentally put it AFTER tofile:, and that was causing segfaults (leading me to believe the immediate problem was not fixed for ~30 min):
(NOTE: same traceback as here: https://github.com/unbit/uwsgi/issues/243#issuecomment-16911917)
[uWSGI] getting INI configuration from parched.ini
;uWSGI instance configuration
[uwsgi]
ini = parched.ini
http-socket = :8090
master = true
async = 256
ugreen = true
mime-file = /etc/mime.types
route-if = isfile:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
route-run = flush:
route = ^/([^/]+)/([^/]+)/([^/]+)$ http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
route-run = break:404 Not Found
show-config = true
;end of configuration
*** Starting uWSGI 1.9.8 (64bit) on [Fri Apr 26 00:37:07 2013] ***
compiled with version: 4.8.0 20130411 (prerelease) on 25 April 2013 23:41:43
os: Linux-3.8.8-1-ARCH #1 SMP PREEMPT Wed Apr 17 10:57:28 CEST 2013
nodename: tower.xtfx.me
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/anthony/devel/local/parched
detected binary path: /home/anthony/devel/upstream/uwsgi/uwsgi
*** dumping internal routing table ***
[rule: 0] subject: /home/anthony/devel/local/parched/cache/${hex[PATH_INFO]} func: isfile action: static:/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 1] action: log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
[rule: 2] action: tofile:filename=/home/anthony/devel/local/parched/cache/${hex[PATH_INFO]}
[rule: 3] action: flush:
[rule: 4] subject: path_info regexp: ^/([^/]+)/([^/]+)/([^/]+)$ action: http:129.21.171.98:80,mirror.rit.edu,/archlinux/$1/os/$2/$3
[rule: 5] action: break:404 Not Found
*** end of the internal routing table ***
your processes number limit is 31089
your memory page size is 4096 bytes
detected max file descriptor number: 1024
building mime-types dictionary from file /etc/mime.types...1017 entry found
- async cores set to 256 - fd table size: 1024
lock engine: pthread robust mutexes
initializing 256 uGreen threads with stack size of 262144 (256 KB)
uwsgi socket 0 bound to TCP address :8090 fd 3
Python version: 2.7.4 (default, Apr 6 2013, 19:20:36) [GCC 4.8.0]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x8068a0
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 5359616 bytes (5234 KB) for 256 cores
*** Operational MODE: async ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 11665)
spawned uWSGI worker 1 (pid: 11667, cores: 256)
[proxycache] NEW: /community/x86_64/community.db => 2F636F6D6D756E6974792F7838365F36342F636F6D6D756E6974792E6462
!!! uWSGI process 11667 got Segmentation Fault !!!
*** backtrace of 11667 ***
uwsgi(uwsgi_backtrace+0x31) [0x462b7c]
uwsgi(uwsgi_segfault+0x2e) [0x462c3c]
/usr/lib/libc.so.6(+0x35240) [0x7fe5c0f7e240]
/usr/lib/libc.so.6(+0x8a4cb) [0x7fe5c0fd34cb]
uwsgi(uwsgi_buffer_append+0x164) [0x44cf29]
uwsgi(uwsgi_response_write_body_do+0xe4) [0x4502ae]
uwsgi() [0x46feb8]
uwsgi(uwsgi_apply_transformations+0x50) [0x4606c4]
uwsgi(uwsgi_close_request+0x4b) [0x41cc80]
uwsgi(async_schedule_to_req_green+0x9a) [0x440247]
/usr/lib/libc.so.6(+0x45ff0) [0x7fe5c0f8eff0]
*** end of backtrace ***
DAMN ! worker 1 (pid: 11667) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 11700)
... if the file is too big it always faults (core.db never faulted, community.db did every time)... whether or not the actual downloads were synchronous or parallel didn't seem to matter.
once i move the flush: statement to the position you have it, everything works fine.
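To summarize the two orderings side by side (paths abbreviated with %d as in the reference configs; this is just a sketch of the difference, not a full config):

```ini
; segfaulted: flush placed AFTER the tofile transformation
; route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
; route-run = tofile:filename=%dcache/${hex[PATH_INFO]}
; route-run = flush:

; works: flush BEFORE tofile, as in the reference config
route-run = flush:
route-run = log:[proxycache] NEW: ${PATH_INFO} => ${hex[PATH_INFO]}
route-run = tofile:filename=%dcache/${hex[PATH_INFO]}
```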
yeah i think the async/ugreen is pretty solid now... i just sent ~200MiB worth of system updates through it without corruption, faults, or any other badness, and the process doesn't seem to be using abnormal resources... so, save for the problem described above re: out-of-place flushes (which is a wholly different issue tbh) i think this bug is dead....
...if you'd rather open a new report with the flush problem lmk, but me thinks this bug => closed.
can you retry the latest code? even the flush problem should be fixed
yep, LGTM... sorry for the wait, got wrapped up in _____. just pulled down + installed latest kernel via proxy, works great as always :)
thanks much Roberto!
FIXED
REPRO
...simply build HEAD with default profile, and run this config:
NOTE! i added this patch, probably unrelated, but mentioned just in case (essentially an mkdir -p impl i copied from SO... a dirmode=0755 kv would be nice ;)
RESULT
...the first request seems to always work, followed by a legitimate 404, followed by a segfault. aside from that, judging by the duplicated headers/etc, uWSGI seems to be accidentally running multiple async cores on a single request...
tcpflow: PACMAN => PROXY (extra whitespaces removed!)
tcpflow: PROXY => UPSTREAM (extra whitespaces removed!)
LOG/BACKTRACE
CLIENT OUTPUT
NOTES
uwsgi.async_runqueue_last ... i setup a watchpoint on that address, simply watch uwsgi.async_runqueue_last, and normally when uwsgi.async_runqueue == NULL, uwsgi.async_runqueue_last is ALSO NULL... but in the segfault's case, only the latter was NULL, which triggers segfaults at core/async.c:85: uwsgi.async_runqueue_last->next = uar;
check-static is not firing for some reason (see the [proxycache] NEW: [...] log lines)
mkdir -p patch
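The watchpoint observation above (tail pointer NULL while the head is not) is the classic failure mode of a singly-linked queue whose head and tail pointers are updated independently. A minimal model of the invariant in Python (names and structure are illustrative, not uWSGI's actual code):

```python
# Minimal model of a singly-linked runqueue with separate head/tail
# pointers, loosely modeled on uwsgi.async_runqueue and
# uwsgi.async_runqueue_last (names here are illustrative only).

class Item:
    __slots__ = ("next",)
    def __init__(self):
        self.next = None

class RunQueue:
    def __init__(self):
        self.head = None  # like uwsgi.async_runqueue
        self.last = None  # like uwsgi.async_runqueue_last

    def push(self, item):
        item.next = None
        if self.head is None:
            self.head = self.last = item  # empty queue: set BOTH pointers
        else:
            # core/async.c:85 is the C equivalent of the next line; it
            # crashes when head/last have drifted out of sync and last
            # is NULL even though head is not.
            self.last.next = item
            self.last = item

    def pop(self):
        item = self.head
        if item is not None:
            self.head = item.next
            if self.head is None:
                self.last = None  # keep "head is None <=> last is None"
        return item

q = RunQueue()
a, b = Item(), Item()
q.push(a); q.push(b)
assert q.pop() is a and q.pop() is b
assert q.head is None and q.last is None  # invariant restored after drain
```

The defensive points are the two branches: setting both pointers on the empty-queue push, and resetting the tail when a pop drains the queue, so the "only the latter was NULL" state can never arise.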