Open tarquas opened 5 years ago
What is your use-case for needing worker threads, out of curiosity? Is your workload CPU intensive?
We currently have no plan for adding this functionality to headless-gl, but if we get more requests for this feature, and when the node API stabilizes, we'll revisit it.
You're more than welcome to fork this project, add the functionality yourself, and submit a PR!
https://stackoverflow.com/questions/28486891/uncaught-error-module-did-not-self-register was an interesting read. Specifically https://stackoverflow.com/a/28859693/1324039. It may be as simple as rebuilding via:
npm rebuild
or rm -r node_modules
then npm install
Can you try this and see if that works?
Hello! Thanks for the replies.
What is your use-case for needing worker threads, out of curiosity?
I'm developing a web service for my scientific research on time-series autoregressive forecasting. I use gpu.js to offload some heavy operations, such as matrix inversion and matrix multiplication, to the GPU (the height of the matrix is the size of the historical data being analyzed!). Since the computation is synchronous, it's designed to run in a separate thread, powered by a worker pool, with results delivered via messaging (.postMessage()).
You're more than welcome to fork this project, add the functionality yourself, and submit a PR!
Thanks. I was thinking about it and I can do it if I get some spare time coupled with inspiration.
For now I rewrote the worker pool to use cluster, and that was successful. In the future I plan to get back to threads, as they're more efficient.
npm rebuild
or rm -r node_modules
then npm install
That didn't work: the native addon is written with nan, which doesn't provide proper context-awareness support (related issue comment). NAN_MODULE_WORKER_ENABLED
didn't solve it. As I understand it, adding the support mainly consists of isolating the globals.
Any updates on this? I'm also trying to create a gl context in worker_threads; using cluster and messaging is too slow for 30 fps.
@felicemarra initial work had begun in one of the PRs, but has been held up by a few blockers, and the substantial amount of work to refactor the library to be "thread safe". Because of this, it's unlikely this will be completed any time soon.
Further, there's no guarantee that using worker threads will in fact bring the dependable, performant rendering you're expecting. It'd definitely be faster than cluster + messaging, though.
The headless-gl module is super useful in jest and vitest unit tests. One motivation for supporting worker threads is that vitest enables threads by default.
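As a possible workaround (an assumption on my part, not something confirmed in this thread), recent Vitest versions let you switch the test pool from worker threads to forked child processes, so a native addon without worker support can still be loaded:

```javascript
// vitest.config.js — hypothetical workaround: run tests in child
// processes ('forks') rather than the default worker-thread pool, so
// native addons without worker support can still be required.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    pool: 'forks',
  },
});
```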
Please add support for Worker Threads to the native addon.
Currently, trying to use it inside a Worker produces an error:
Related docs: https://github.com/nodejs/node/blob/master/doc/api/addons.md#worker-support IMHO this is vital, since the processing is synchronous and is most likely to be run inside a separate thread.
Thanks