Closed: SkeLLLa closed this issue 4 years ago.
I can trigger the same issue. I will look into this, but WorkerThreads are definitely recommended over Cluster. With WorkerThreads there's a clear idea of what Node.js does, while with Cluster I have no idea what Node.js does and there could be bigger issues.
I get all kinds of similar issues just by starting a basic worker threads example and letting it close down (no server started or anything). This should be fixed and pushed harder as a main feature.
@hst-m In your example you should free the uWS instance inside shutdown before killing the worker (Cluster is a bunch of processes, afaik).
In SkeLLLa's example it can help too, but as soon as any worker gets a request we can't free uWS in the worker that handled it for some time.
After uWS.us_listen_socket_close(socket) the thread still handles requests (IMO it shouldn't, but I'm too dumb to dig into the C++).
Sample log from a dumb test script while reloading localhost:3000 (the same happens under wrk load for all threads):
Thread dumb ID: log msg
703999: /
703999: /favicon.ico
703999: socket close
703999: /
703999: /favicon.ico
...
703999: /
703999: /favicon.ico
Playing around with uWebSockets.js 18.13.0 and timeouts between uWS.us_listen_socket_close(socket), uWS.free(), and thread termination, I was able to get:
corrupted double-linked list
uv loop at [0x...] has open handles:...
Segmentation fault
Bus error
In some cases it worked fine when the delay between the last handled request and uWS.free() was long enough (I guess some timeout in native land closes the sockets).
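The close-then-delayed-free ordering described above can be sketched as follows. Note that uWS is a hypothetical stub here so the sequence runs without the native addon (real code would use require('uWebSockets.js')), and GRACE_MS is a guessed value, not something the library documents:

```javascript
// Sketch of the workaround: after uWS.us_listen_socket_close() the thread
// may still handle in-flight requests, so uWS.free() is delayed until the
// listen socket has been idle for a grace period.
// uWS is a stub; real code would use require('uWebSockets.js').
const uWS = {
  us_listen_socket_close: (socket) => { socket.closed = true; },
  free: () => { uWS.freed = true; },
  freed: false,
};

const GRACE_MS = 100; // assumption: long enough for native-land timeouts
let lastRequestAt = Date.now() - 2 * GRACE_MS; // pretend the socket has been idle

function shutdown(listenSocket, onDone) {
  uWS.us_listen_socket_close(listenSocket); // stop accepting new connections
  const tryFree = () => {
    if (Date.now() - lastRequestAt >= GRACE_MS) {
      uWS.free();                           // only now release native resources
      onDone();
    } else {
      setTimeout(tryFree, GRACE_MS);        // still draining: check again later
    }
  };
  tryFree();
}

const socket = {};
shutdown(socket, () => console.log('freed:', uWS.freed, 'closed:', socket.closed));
```

This is only a sketch of the timing workaround the comment describes, not a guaranteed-safe shutdown; as noted above, it sometimes still crashed.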
Dumb test script
Navigate to or bombard localhost:3000, or just wait ~21 seconds while it shuts down.
Math.random() > 1 ? 'cleanup' : 'apptest'
switches between fast auto shutdown (works) and per-thread socket closing on a request to localhost:3000 with a timeout shutdown (works under the condition described in the previous paragraph); the random choice was used to mix both types of threads while looking for a workaround.
Hope it helps.
@lostrepo I don't know what uWS.free does, but it looks like it gets called automatically on process exit by uws.js, so there's no need to add it for Cluster:
https://github.com/uNetworking/uWebSockets.js/blob/master/src/uws.js#L28
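For reference, the mechanism uws.js relies on at that line is an ordinary process 'exit' hook. A minimal stdlib-only demonstration (the hook body here is just a stand-in for the library's native cleanup):

```javascript
// Minimal stdlib-only demonstration: a process.on('exit') handler runs when
// the *process* exits, which is why Cluster workers (full processes) get the
// native cleanup for free, while worker threads (which share the process)
// may not.
process.on('exit', () => {
  // uws.js performs its native cleanup in a hook like this one
  console.log('process exit hook ran');
});
console.log('main code finished');
```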
It looks like this issue is caused by worker threads not calling uWS.free(), because the process.on('exit') event only gets fired for processes? Adding a uWS.free() call in the worker before process.exit(0) fixes the problem, but @lostrepo is saying there is a timing issue around when to call it.
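The worker-side fix being described, sketched with uWS and process.exit stubbed so the ordering can be checked without the native addon (exitThread is a hypothetical stand-in for process.exit):

```javascript
// Sketch of the worker-side fix: since uws.js's process 'exit' hook only
// runs for the process, each worker thread calls uWS.free() itself before
// exiting. Both calls are stubbed here so the ordering is observable.
const steps = [];
const uWS = { free: () => steps.push('uWS.free()') };             // stub
const exitThread = (code) => steps.push(`process.exit(${code})`); // stub

function shutdownWorker() {
  uWS.free();     // release the native resources this thread allocated...
  exitThread(0);  // ...and only then end the thread
}

shutdownWorker();
console.log(steps.join(' -> ')); // uWS.free() -> process.exit(0)
```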
@hst-m In your case timing doesn't matter, since you create a uWS instance for each forked process (Cluster stuff).
In the case of worker_threads I don't understand what the flow is.
It happens if we receive connections while closing the listen sockets (they still get and handle new connections after the socket is closed). If we call uWS.free() some timeout after the last connection to the listen socket, it sometimes works and sometimes doesn't.
In order to support Worker threads, addons need to clean up any resources they may have allocated when such a thread exits. This can be achieved through the usage of the AddEnvironmentCleanupHook() function
It looks like the addon needs this one instead of using process.on('exit').
I have tested the above: if you remove uWS.free and process.on('exit') and instead call that cleanup inside an AddEnvironmentCleanupHook, it shuts down properly.
So I will make this change in the next release and hopefully it will work better for everyone.
Can everyone who had this issue try the latest binaries:
npm install uNetworking/uWebSockets.js#binaries
Hi, thanks for the awesome lib.
I've tried playing around with the worker threads example combined with graceful shutdown, and found out that it crashes the process on exit.
Config:
Here's code that's mostly taken from examples dir.
It launches without any errors, all threads listening. But when you send SIGINT, for example, Node crashes with the following core dump.
PS: this also reproduces even if you don't listen to anything in the workers; just requiring uws causes a crash when process.exit is called.
Is it expected or am I missing something?