Brooooooklyn / snappy

Fastest Snappy compression library in Node.js
MIT License

Segmentation fault when workers exit #83

Open fmmoret opened 2 years ago

fmmoret commented 2 years ago

I get segfaults on Node 16 and Node 18 when I use this package in workers.

I'm on Linux (gnu, x64).


Replication steps:

Just run node segfault.js with:

segfault.js

var import_worker_threads = require("worker_threads");

if (import_worker_threads.isMainThread) {
  const worker = new import_worker_threads.Worker("./segfault.js");
} else {
  var import_snappy = require("snappy");
}

If the parent thread has also imported it, it seems to work fine:

segfault.js with parent import:

var import_worker_threads = require("worker_threads");
var import_snappy = require("snappy");

if (import_worker_threads.isMainThread) {
  const worker = new import_worker_threads.Worker("./segfault.js");
}
fmmoret commented 2 years ago

opened parent issue too: https://github.com/napi-rs/package-template/issues/308

Brooooooklyn commented 2 years ago

It's a known issue upstream: https://github.com/rust-lang/rust/issues/91979

My suggestion is not to use worker_threads with NAPI-RS packages, because the async functions exposed by snappy and other NAPI-RS packages can already take advantage of multiple cores.

In snappy for example:

Promise.all([
  compress("hello"),
  compress("hello"),
  compress("hello"),
  compress("hello"), 
])

the code above will execute on four threads.

hanlakewind commented 2 years ago

@Brooooooklyn So you mean that compress() and uncompress() functions are already multi-threaded without worker threads? If so I feel like this information should be put in the README

Brooooooklyn commented 2 years ago

> So you mean that compress() and uncompress() functions are already multi-threaded without worker threads? If so I feel like this information should be put in the README

@hanlakewind Yes, they are already multi-threaded, I'll update README.

fmmoret commented 2 years ago

Got it. I am just using a pool of worker threads for a job queue and happen to be using snappy for some of the jobs -- it caught me by surprise that my queue would crash if I ever had a job NOT doing snappy compression.

Forcing it to stay loaded by requiring it in the parent process works for now.