keplerproject / wsapi

WSAPI is an API that abstracts the web server from Lua web applications.
http://keplerproject.github.io/wsapi

SIGSEGV? #29

Closed: petsagouris closed this issue 10 years ago

petsagouris commented 10 years ago

I am running Orbit on Lua 5.2 through spawn-fcgi with nginx, and I frequently get the error below, after which I have to restart the FastCGI process:

2013/12/01 21:35:14 [error] 1039#0: *853 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /index.lua/say/wow?greeting=Nice%20one HTTP/1.1", upstream: "fastcgi://unix:/etc/nginx/wsapi.sock:", host: "localhost:8080"
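
For context, the app behind that request is an ordinary Orbit/WSAPI application. The details shouldn't matter; it boils down to the standard WSAPI shape (a simplified sketch, not my exact code):

-- simplified sketch of the kind of app wsapi.fcgi is running here;
-- the names and the body text are illustrative, not my real code
local function run(wsapi_env)
  local headers = { ["Content-Type"] = "text/html" }
  local function body()
    coroutine.yield("<html><body>hello from " ..
                    (wsapi_env.PATH_INFO or "/") .. "</body></html>")
  end
  return 200, headers, coroutine.wrap(body)
end

return { run = run }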

Searching around, I realized I could watch what happens with strace, and this is what I got:

Process 14405 attached - interrupt to quit
accept(0, {sa_family=AF_FILE, NULL}, [2]) = 3
select(4, [3], NULL, NULL, {2, 0})      = 1 (in [3], left {1, 999998})
read(3, "\1\1\0\1\0\10\0\0\0\1\0\0\0\0\0\0\1\4\0\1\3B\6\0\t\10PATH_I"..., 8192) = 880
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
Process 14405 detached

Note that this is the Lua process, not the spawn-fcgi one (I couldn't find that one in htop). Here is the command line ([...] = abbreviated paths):

www-data 16275  0.0  0.0  22940  1716 pts/0    S+   21:44   0:00 /usr/local/bin/lua -e package.path="[...]"..package.path; package.cpath="[...]"..package.cpath -e local k,l,_=pcall(require,"luarocks.loader") _=k and l.add_context("wsapi-fcgi","1.6-1") /usr/local/lib/luarocks/rocks/wsapi-fcgi/1.6-1/bin/wsapi.fcgi -d /path/to/code/wsapitest/

Is this really something that happens with lfcgi?

petsagouris commented 10 years ago

I have a gdb backtrace:

(gdb) backtrace
#0  0x00007f1f20c46f5c in getenv () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007f1f203d80c3 in lfcgi_getenv () from /usr/local/lib/lua/5.2/lfcgi.so
#2  0x000000000040853d in luaD_precall ()
#3  0x0000000000412614 in luaV_execute ()
#4  0x0000000000408749 in luaD_call ()
#5  0x000000000041127a in callTM.isra.0 ()
#6  0x000000000041159b in luaV_gettable ()
#7  0x000000000041287f in luaV_execute ()
#8  0x0000000000408749 in luaD_call ()
#9  0x0000000000407d88 in luaD_rawrunprotected ()
#10 0x0000000000408983 in luaD_pcall ()
#11 0x0000000000406623 in lua_pcallk ()
#12 0x000000000041943f in luaB_xpcall ()
#13 0x000000000040853d in luaD_precall ()
#14 0x0000000000412537 in luaV_execute ()
#15 0x0000000000408749 in luaD_call ()
#16 0x0000000000406551 in lua_callk ()
#17 0x0000000000420c0f in ll_require ()
#18 0x000000000040853d in luaD_precall ()
#19 0x0000000000412614 in luaV_execute ()
#20 0x0000000000408749 in luaD_call ()
#21 0x0000000000407d88 in luaD_rawrunprotected ()
#22 0x0000000000408983 in luaD_pcall ()
#23 0x0000000000406623 in lua_pcallk ()
#24 0x00007f1f205dd57a in dostring () from /usr/local/lib/lua/5.2/rings.so
#25 0x000000000040853d in luaD_precall ()
#26 0x0000000000412614 in luaV_execute ()
#27 0x0000000000408749 in luaD_call ()
#28 0x0000000000407d88 in luaD_rawrunprotected ()
#29 0x0000000000408983 in luaD_pcall ()
#30 0x0000000000406623 in lua_pcallk ()
#31 0x000000000041943f in luaB_xpcall ()
#32 0x000000000040853d in luaD_precall ()
#33 0x0000000000412537 in luaV_execute ()
#34 0x0000000000408749 in luaD_call ()
#35 0x0000000000407d88 in luaD_rawrunprotected ()
#36 0x0000000000408983 in luaD_pcall ()
#37 0x0000000000406623 in lua_pcallk ()
#38 0x0000000000404240 in docall ()
#39 0x0000000000404d94 in pmain ()
#40 0x000000000040853d in luaD_precall ()
#41 0x000000000040873d in luaD_call ()
#42 0x0000000000407d88 in luaD_rawrunprotected ()
#43 0x0000000000408983 in luaD_pcall ()
#44 0x0000000000406623 in lua_pcallk ()
#45 0x0000000000403fea in main ()
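
So the crash is inside libc's getenv(), called from lfcgi_getenv (frame #1). On the Lua side that should be lfcgi's getenv binding, which (as far as I understand the wsapi internals) the FastCGI launcher uses to read request variables instead of the real os.getenv. Roughly this kind of call, just to show where it blows up (a sketch, not the actual wsapi source):

-- sketch of the Lua-side call whose C counterpart is frame #1 (lfcgi_getenv);
-- the exact way wsapi wires this up may differ
local lfcgi = require "lfcgi"
local path_info = lfcgi.getenv("PATH_INFO")
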
user-none commented 10 years ago

This looks to be the same as issue #10, which was fixed by pull #16. Are you using a release or building from git? That pull request hasn't made it into a release yet.

petsagouris commented 10 years ago

Nope, not from git; it was the 1.6-1 rock that is current on LuaRocks. Fetching the current HEAD, building a new lfcgi.so, and replacing the one used on my system made the problem go away. Now it runs solid.
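
For the record, the workaround was roughly this (the install path is the one from the backtrace above; the exact build step depends on the checkout):

git clone https://github.com/keplerproject/wsapi.git
# build lfcgi.so from the checkout (e.g. via luarocks make with the rockspec
# in the repo, or by compiling lfcgi.c against the FastCGI and Lua 5.2
# headers), then drop it over the installed module:
cp lfcgi.so /usr/local/lib/lua/5.2/lfcgi.so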