To solve some of the issues listed above, a low-level class that handles the lifetime of the napi_async_work is needed. Currently, AsyncWorker creates and deletes the napi_async_work, but the design of AsyncWorker causes its lifetime to be shared across multiple threads in an unsafe manner.
My proposal is to create a new AsyncWork class that handles the creation and deletion of napi_async_work and wraps the other related async_work functions (queue and cancel). Instead of using virtual functions and having consumers subclass, the new AsyncWork is designed to be self-contained.
It's expected that most consumers won't use AsyncWork directly. A separate, higher-level type (TBD) inside of node-addon-api will act as the main interface for consumers and provide support for JavaScript callback functions, Promises, and/or C++ lambdas.
class AsyncWork {
public:
  typedef void (*AsyncWorkExecuteCallback)(void* data);
  typedef void (*AsyncWorkCompleteCallback)(Napi::Env env, void* data);

  explicit AsyncWork(Napi::Env env,
                     AsyncWorkExecuteCallback execute,
                     AsyncWorkCompleteCallback complete,
                     void* data = nullptr);
  ~AsyncWork();

  // Async work can be moved but cannot be copied.
  AsyncWork(AsyncWork&& other);
  AsyncWork& operator =(AsyncWork&& other);
  AsyncWork(const AsyncWork&) = delete;
  AsyncWork& operator =(AsyncWork&) = delete;

  operator napi_async_work() const;

  Napi::Env Env() const;

  void Queue();
  void Cancel();

private:
  static void OnExecute(napi_env env, void* data);
  static void OnComplete(napi_env env, napi_status status, void* data);

  napi_env _env;
  napi_async_work _work;
  AsyncWorkExecuteCallback _execute;
  AsyncWorkCompleteCallback _complete;
  void* _data;
};
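For illustration, a hypothetical call site for the proposed class might look like the sketch below. WorkData and RunWork are made-up names, and the ownership comments are assumptions; the higher-level type mentioned above would normally own the AsyncWork instance.

```c++
// Hypothetical usage sketch of the proposed AsyncWork class (illustrative only).
struct WorkData {
  int input;
  int result;
};

void RunWork(const Napi::CallbackInfo& info) {
  auto* data = new WorkData{info[0].As<Napi::Number>().Int32Value(), 0};

  auto* work = new AsyncWork(
      info.Env(),
      [](void* raw) {                    // execute: runs on a worker thread
        auto* d = static_cast<WorkData*>(raw);
        d->result = d->input * 2;
      },
      [](Napi::Env env, void* raw) {     // complete: runs on the main event loop
        auto* d = static_cast<WorkData*>(raw);
        // ...convert d->result to a JS value, resolve a deferred, etc.
        delete d;
      },
      data);

  work->Queue();
  // The owner is responsible for deleting `work` once the complete callback has run.
}
```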
@nodejs/n-api please review and comment.
@ebickle, sort of related, I illegally (I guess) tried to use Env() within Execute(), and dumped core. How should I do that? For example I want to create a new Buffer object, and call a js function with it.
@ebickle you could simply allocate memory inside Execute(), pass it to the Complete() callback and then create an external buffer in the Complete() callback.
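A minimal sketch of that suggestion, assuming a plain Napi::AsyncWorker subclass (BufferWorker and the buffer size are made-up for illustration):

```c++
// Allocate and fill the data in Execute(), then build a Buffer and call JS in OnOK().
class BufferWorker : public Napi::AsyncWorker {
 public:
  BufferWorker(Napi::Function& callback) : Napi::AsyncWorker(callback) {}
  ~BufferWorker() override { delete[] data_; }

  void Execute() override {
    // Worker thread: no N-API/JS calls here, just plain C++.
    size_ = 1024;
    data_ = new uint8_t[size_];
    // ... fill data_ ...
  }

  void OnOK() override {
    // Main thread: safe to create JS values and invoke the callback.
    Napi::HandleScope scope(Env());
    auto buffer = Napi::Buffer<uint8_t>::New(
        Env(), data_, size_,
        [](Napi::Env, uint8_t* p) { delete[] p; });  // finalizer frees the data
    Callback().Call({Env().Null(), buffer});
    data_ = nullptr;  // ownership transferred to the Buffer's finalizer
  }

 private:
  uint8_t* data_ = nullptr;
  size_t size_ = 0;
};
```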
@chad3814, you should not be interacting with JavaScript (either through node-addon-api calls or otherwise) in Execute() since it does not run on the main event loop. We are improving the documentation to clarify that.
@ebickle sorry for not having gotten to this yet. Our current focus is completing the docs for the existing classes in node-addon-api and then we can engage on future improvements.
Just to clarify, is using Promises with an AsyncWorker valid? I'm not sure exactly how to do this. It's okay to me if I have to resolve the Promise in the callback, but I'd rather not have to write a JS wrapper outside of C++ to wrap the value in a Promise.
Here's what I'm trying right now:
auto deferred = Napi::Reference::New(Napi::Promise::Deferred::New(info.Env()), 1);
Napi::Function cb = Napi::Function::New(
  info.Env(),
  [deferred = Napi::Reference::New(deferred.Value(), 1)] (const Napi::CallbackInfo& info) {
    deferred->Resolve(Napi::String::New(info.Env(), "hello!"));
    deferred.Unref();
  },
  "onStartComplete"
);
(new StartWorker(cb))->Queue();
It is very much unclear to me whether I'm taking the right strategy with references, Promises, and AsyncWorker, so any advice is very much appreciated. I'd like to modify the Promise/AsyncWorker docs to reflect this use case, which I speculate is a fairly common one.
Edit: It looks like I can just copy the Napi::Promise::Deferred, so I never needed to make a reference. Everything else worked properly.
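For reference, a minimal sketch of that approach, assuming StartWorker stores the Deferred by value (the constructor and members shown here are illustrative, not the original implementation):

```c++
class StartWorker : public Napi::AsyncWorker {
 public:
  StartWorker(Napi::Function& cb, Napi::Promise::Deferred deferred)
      : Napi::AsyncWorker(cb), deferred_(deferred) {}  // Deferred copied by value

  void Execute() override { /* background work */ }

  void OnOK() override {
    deferred_.Resolve(Napi::String::New(Env(), "hello!"));
  }

  void OnError(const Napi::Error& e) override {
    deferred_.Reject(e.Value());
  }

 private:
  Napi::Promise::Deferred deferred_;
};

// Usage: queue the worker and hand the Promise back to JS.
// auto deferred = Napi::Promise::Deferred::New(info.Env());
// (new StartWorker(cb, deferred))->Queue();
// return deferred.Promise();
```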
@mhdawson so I ended up dropping n-api/node-addon-api and just doing everything in straight v8/node. This is unfortunate. I'd like to be able to use Napi::AsyncWorker like this gist: https://gist.github.com/chad3814/50d75d4f13054dc47de8c897f303580e
@chad3814 I'm not sure I understand why you could not have had the content of WorkAsyncComplete() in OnOK(), and the content of WorkAsync() in Execute()?
Thanks @mhdawson, I'm not entirely sure how to do the emit() using node-addon-api. How do I get an Env, etc.?
I'm running into a memory leak due to this complication (though the underlying issue is probably my lack of understanding of object lifecycle in V8 as well as some other stuff).
I'm trying to do a loop similar to setInterval in C++. I believe the proper way to do this is:
// There's a bit of an issue in this code alone. Assume there are no linker errors
// and anywhere there's a stupid compiler issue I probably just forgot some boilerplate
void queueLoop(Napi::Env env);

class Worker : public Napi::AsyncWorker {
public:
  Worker(Napi::Function& cb) : Napi::AsyncWorker(cb) {}

  void Execute() override {
    // Body
  }

  void OnOK() override {
    queueLoop(this->Env());
  }

  void OnError(const Napi::Error& e) override {
    queueLoop(this->Env());
  }
};

void queueLoop(Napi::Env env) {
  Napi::HandleScope scope(env);

  // I don't actually need a callback for business logic
  Napi::Function cb = Napi::Function::New(
    env, [] (const Napi::CallbackInfo& info) {}, "onLoopComplete"
  );

  (new Worker(cb))->Queue();
}
Now, this is actually working properly in terms of looping and such. :tada:
However, while the Worker gets destroyed on every loop (the destructor is called), the callback function cb just isn't. After a minute or two of very fast looping (Execute takes a variable amount of time, but that's irrelevant), the heap ends up at about a gigabyte with hundreds of thousands of copies of this callback not getting garbage collected.
- A HandleScope might solve the issue, but otherwise I just don't know.
- If AsyncWorker didn't require a callback, I wouldn't be running into this issue.
- I tried replacing the queueLoop calls in OnOK and OnError with just this->Queue(), but of course this resulted in a segfault.
edit: I have a solution for my particular need, but I don't think it addresses the multiple core issues that I'm perceiving. My solution is as follows:
std::optional<Napi::FunctionReference> emptyCallback;

// ...

void queueLoop(Napi::Env env) {
  Napi::HandleScope scope(env);

  if (!emptyCallback) {
    Napi::Function cb = Napi::Function::New(
      env, [] (const Napi::CallbackInfo& info) {}, "onLoopComplete"
    );
    emptyCallback = Napi::Persistent(cb);
    emptyCallback->SuppressDestruct();
  }

  (new Worker(emptyCallback->Value()))->Queue();
}
@rivertam does it actually run out of memory, or does the heap just grow to that size and then keep running, staying consistently at that value?
Trying to understand if there is a leak (in which case the heap should keep growing) or just that the heap grows to a larger-than-expected size.
@mhdawson Not sure what you mean. There was 100% a leak. Any more than one instance of that callback existing at a time (pending garbage collection) is a leak in my mind. In my case, once the Worker is destroyed, the callback should have been marked as safe to GC. We were looking at nearly a million instances of the function after about 50 seconds despite no two Workers ever existing at the same time.
After some time (between 1 and 2 minutes), the garbage collection process would crash the program with a heap allocation error. I'm not at my work computer right now, but the error said something along the lines of the heap running out of memory.
@rivertam Ok, that answers my question as to whether you actually ran out of memory or just had a large heap. Which version of Node.js were you using and is the code that showed the problem somewhere we can easily install/run?
node version: 10.2.1
It's not and I don't have the time to make a minimal repro repo, unfortunately, but the relevant code was most certainly the first code in my most recent comment. If you feel as though you absolutely need a minimal repo to reproduce, remind me some time next week and I'll try to make it, but I think it's very clear what I was doing based on the first code sample.
@rivertam I'm fairly certain that V8 never garbage-collects JavaScript functions which run native code. Thus, there will always be a leak if you create such functions in a loop.
I can think of two solutions:
- create the callback once and reuse it for every Worker (e.g. store it in an optional outside the Worker, as you did)
- do the looping in JavaScript
I'll take the action to add some additional documentation to the N-API/node-addon-api docs to highlight that those functions won't be collected.
@hashseed can you confirm that V8 does not garbage collect JavaScript functions which run native code?
@rivertam https://gist.github.com/gabrielschulhof/43abe2c9686a1c0b19fcf4784fa796ae has an example that illustrates the lack of GC-ing, even without N-API.
Instances created from function templates are cached in an instantiation cache. The assumption is that we have a fixed number of bindings, so this will not cause a leak.
I think what you want to do is to use v8::Function::New instead of creating a v8::FunctionTemplate and instantiating it. Function bindings created this way do not get cached in the instantiation cache.
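Roughly, in raw V8 terms, the distinction looks like the sketch below (illustrative only; NativeCallback is a stand-in for any binding callback):

```c++
#include <v8.h>

static void NativeCallback(const v8::FunctionCallbackInfo<v8::Value>& args) {}

void MakeFunctions(v8::Isolate* isolate, v8::Local<v8::Context> context) {
  // FunctionTemplate path: instances go through the instantiation cache, so the
  // resulting function is effectively kept alive.
  v8::Local<v8::FunctionTemplate> tpl =
      v8::FunctionTemplate::New(isolate, NativeCallback);
  v8::Local<v8::Function> cached = tpl->GetFunction(context).ToLocalChecked();

  // v8::Function::New path: the binding is not cached, so the function can be
  // garbage-collected once it becomes unreachable.
  v8::Local<v8::Function> collectable =
      v8::Function::New(context, NativeCallback).ToLocalChecked();

  (void)cached;
  (void)collectable;
}
```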
v8::Function::New is indeed the way to go.
This also means that we need to make N-API use v8::Function::New whenever it dynamically creates a function.
@mhdawson I just submitted https://github.com/nodejs/node/pull/21688 which switches N-API to using v8::Function::New(), and thus to properly gc-ing functions, in all cases except napi_define_class(), because there we need to use signatures to ensure that the prototype methods are called with the proper context which, in turn, requires that we use a function template.
Once that lands, we'll have to once more sync up the implementation of N-API provided by node-addon-api with the implementation provided by Node.js.
@gabrielschulhof Thanks for the advice. The static version that I'm working with right now seems like the optimal thing in all cases, though less ergonomic.
It would be nice if N-API could expose something along the lines of setInterval and setTimeout so you wouldn't have to manage this yourself. I was a little surprised I had to do a custom recursion. I ended up writing probably 50+ lines just to emulate this behavior, probably suboptimally.
@gabrielschulhof I'm pretty sure, but just to be 100% sure: nodejs/node#21688 just makes things better (in terms of memory use) and we only need to backport it to get the better memory use in the older versions as well, right?
@gabrielschulhof did you also check if we need changes in node-addon-api to ensure that Functions are collected when possible when using node-addon-api?
@mhdawson we definitely need changes to node-addon-api to make it possible to remove the data it associates with callbacks.
Whenever we change the function signature of the binding, we're essentially adding another layer of indirection.
This is unavoidable, unless we manage to shuffle the data type declarations such that the compiler recognizes a function with the new signature as being equivalent to a function with the old signature. I doubt this is possible.
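As a rough illustration of the indirection being discussed (this is not node-addon-api's actual internals): to expose the sugared signature, a heap-allocated record pairs the user's C++ callback with a C-style trampoline that N-API can call, and that record is what currently never gets freed.

```c++
#include <napi.h>
#include <functional>

struct CallbackData {
  std::function<Napi::Value(const Napi::CallbackInfo&)> userCallback;
};

napi_value Trampoline(napi_env env, napi_callback_info raw_info) {
  Napi::CallbackInfo info(env, raw_info);
  auto* data = static_cast<CallbackData*>(info.Data());
  return data->userCallback(info);  // CallbackData itself is never freed unless
                                    // the function object is somehow finalized.
}
```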
So, the only choice that does not involve changes to N-API and which allows us to destroy the data associated with a function created by node-addon-api is to napi_wrap() each napi_value representing a function we've created using node-addon-api, and provide a finalizer. This approach has the limitation whereby a function wrapped using node-addon-api cannot be unwrapped with plain napi_unwrap(), because the resulting data is not merely the data that was given by the addon maintainer, but the data added by node-addon-api when the function was created followed by the pointer to the data the addon maintainer provided.
Another undesirable(?) effect of this approach is that it would take node-addon-api beyond mere syntactic sugar by locking the user into using it for both wrapping and unwrapping.
The truth is though that node-addon-api is more than syntactic sugar even today, because it allocates this dynamic data which then needs to be freed.
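A minimal sketch of that napi_wrap() idea in plain N-API calls (illustrative only; WrapFunctionData and the layout of the callback data are hypothetical):

```c++
#include <node_api.h>
#include <cstdlib>

// Finalizer invoked when the JS function is collected; frees the per-function
// data that was allocated when the function was created.
static void FinalizeCallbackData(napi_env env, void* data, void* hint) {
  std::free(data);
}

// Tie callback_data's lifetime to the function object itself instead of leaking
// it for the lifetime of the process.
static napi_status WrapFunctionData(napi_env env, napi_value fn, void* callback_data) {
  return napi_wrap(env, fn, callback_data, FinalizeCallbackData, nullptr, nullptr);
}
```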
Unless we can find some template syntax magic that explains to the compiler that
Napi::Value SomeFunction(const Napi::CallbackInfo& info) {
// Closing brace intentionally left out
is to be converted to
napi_value SomeFunction(napi_env env, napi_callback_info cb_info) {
Napi::CallbackInfo info(env, cb_info);
// Closing brace intentionally left out
the only clean choice seems to be to abandon the sugar for function, accessor, and class static callback signatures.
Class constructor and prototype methods are safe so far, but, technically, other JS engines may garbage-collect entire classes when they go out of scope.
@gabrielschulhof lets discuss this in the next N-API team meeting.
@ebickle I agree in almost all cases. I'm also having problems making a clean impl with AsyncWorker. Here are some comments on the "resolution" part.
Removing the Env() function from the AsyncWorker type and modifying OnOK() and OnError() to take a Napi::Env as a parameter instead.
Totally agree, env should be passed only if we're allowed to use it.
Changing void Queue() to static void Queue(AsyncWorker* worker); to force it to be a pointer.
Many APIs have classes that need to be heap allocated and I don't know whether it's possible to check that it lives on the heap. The provided solution would maybe just lead to AsyncWorkerSubclass worker; AsyncWorker::Queue(&worker); and also not work. I think there are at least two better ways:
- a factory method, AsyncWorker* AsyncWorker::Create(...) { return new AsyncWorker(...); }, so instances are always heap allocated
- keeping the self-deleting behavior (an instance allocated with new but never queued would leak, which might be expected); if cancellation is required, then the addon implementer has to manage lifetime, e.g. with smart pointers.
Removing the self-deleting behavior in favor of the caller managing lifetime. The idea would be to have a higher level class/construct that addon implementers would use, while the AsyncWorker would mainly be used internally for management of the napi_async_work resource.
As it's more of an implementation detail when memory can be freed, I think it should be managed by AsyncWorker itself.
Remove the callback function from the type and separate out any callback and/or promise behavior to subclasses.
I think multithreading in C++ with Node.js is the killer feature and I want to use it with as little boilerplate as possible, so I'm not sure if subclassing is a good pattern. I would expect a compact functional approach and I'm going to provide an impl later (I hope to have time :smile: )
Exploring options of bridging C++'s async types (std::future, std::promise, etc.) with AsyncWorker. Probably not feasible, but newer versions of C++ are adding support for things like Future.then().
That brings me to another question: I've read that we're using the libuv threadpool with a few (4 or 8) computation threads (used e.g. in the Node.js crypto lib), but what about bringing together old-fashioned "IO done wrong, idling threads" APIs with the Node world? I would expect to have a fixed-size threadpool for computation and a dynamic pool for IO threads, but the same interface (something like Rx schedulers). Currently I have no idea how to distinguish between computationally hard work and low-prio IO work. Worst case would be blocking the threadpool because all threads are doing IO.
@ebickle @mhdawson I pushed a first implementation of Async with tests, without subclassing, see: https://github.com/wildoak/node-addon-api/commit/b874cb4a4f25a704e3b06dead9c6a2c50835931d
I would expect it to work like this. What do you think?
@DaAitch You've taken it far further than I ever could, very impressive.
I like it; modern C++ and easy to use.
@ebickle thank you for the motivation
There's a potential memory leak in napi-inl.h at lines 3323 and 3326 if either of those exceptions is thrown. In those cases the async_holder_wrapper_t won't be deleted.
Correct, thank you! Fixed.
A bit of a nitpick, but consider having the overload at line 3292 call the full function instead of another overload. In other words, add the Object::New(env) there as well to avoid a chain of function calls. This avoids std::forward happening multiple times.
I think there won't be an advantage in changing this. It's "perfect forwarding", so with rvalues everything will boil down to a "move" into holder.
Can you explain how cancellation works? Line 3328-3330 in particular, where it returns holder.
In the test, I return a Promise and put a member function cancel on it. This is a bit hacky for the tests. As long as there is no concept in JS for cancelling promises, implementers have to implement their own wrapper, like CancellablePromise {cancel: Function, promise: Promise}.
How it works:
- Async returns a std::function<void()> which is the cancellation function (thread safe)
- no napi_cancel_async_work, which would only have an impact before queuing, but we immediately queue
- calling the cancel function sets cancel to true
- the AsyncExecute function gets a std::function<bool()> cancelled param, which returns true if the cancel function was invoked

I already use the Async fork in my new project and you can do it like this:
// heavyProcessing and CancellablePromise are assumed to be defined elsewhere in the addon.
Napi::Value CreateData(const Napi::CallbackInfo& info) {
  Napi::EscapableHandleScope scope(info.Env());

  auto&& item_count = info[0].ToNumber().Int32Value();
  auto&& deferred = Napi::Promise::Deferred::New(info.Env());

  auto&& cancel = Napi::Async(info.Env(),
    [item_count](auto&& cancelled) -> int* {
      std::unique_ptr<int[]> ptr(new int[item_count]);
      int* data = ptr.get();
      for (int i = 0; i < item_count; i++) {
        if (cancelled()) {
          return nullptr;
        }
        data[i] = heavyProcessing(i);
      }
      return ptr.release();
    },
    [deferred](Napi::Env env, int* data) {
      Napi::HandleScope scope(env);
      if (!data) { // cancelled
        deferred.Reject(env.Undefined());
        return;
      }
      auto&& data_external = Napi::External<int>::New(env, data, [](Napi::Env env, int* data) {
        delete [] data;
      });
      deferred.Resolve(data_external);
    }
  );

  struct cancel_holder_t {
    std::function<void()> cancel;
  };

  auto&& cancel_external = Napi::External<cancel_holder_t>::New(info.Env(), new cancel_holder_t{std::move(cancel)},
    [](Napi::Env env, cancel_holder_t* holder) {
      delete holder;
    }
  );

  return scope.Escape(CancellablePromise::constructor.New({deferred.Promise(), cancel_external}));
}
// in JS
const dataCP = obj.createData(1000); // CP = cancellable promise
const tid = setTimeout(() => {
dataCP.cancel();
}, 5000);
dataCP.promise.then(data => {
clearTimeout(tid);
// use your data
});
I've noticed a strange behaviour in the tests: run node test/async.js and put a setTimeout(() => promise.cancel(), 50) instead of cancelling synchronously, and it also fails, because the noexcept tests with SyncSleep are blocking.
I think it's not a good idea to run all tests synchronously. Better to make everything async and only start the next test when the last one is finished.
Any potential compatibility concerns if this were to be merged? Usual GCC/Visual Studio/CMake suspects.
Just using C++11 lambdas, and I added 3 std dependencies. I would say there should be no compatibility problems:
- #include <memory> for smart pointers
- #include <atomic> for thread-safety
- #include <thread> only for testing (sleep)

The question is whether the new experimental API for Asynchronous Thread-safe Function Calls would better implement Async? Because I still think that napi_async is only good for computation but still not an option for old libraries blocking threads.
As an FYI, this will be a focus for the N-API team in the next iteration.
@ebickle @DaAitch sorry for the late notice but we are planning to discuss this in the N-API team meeting at 1:30 EST tomorrow. Would either of you be interested/able to attend?
@ebickle @DaAitch sorry for the noise but I should have said on Thursday
And I just realized that we are pushing out the meeting 1/2 hour on Thursday so it will be 2 EST.
@mhdawson I'm available and interested in attending.
@ebickle zoom link from the calendar entry: https://zoom.us/j/363665824. Looking forward to seeing you at 2 EST tomorrow.
@ebickle thanks for coming to the meeting today. Very helpful and helped us (at least me) better understand and think about how to start working on the issues.
@mhdawson sorry couldn't attend. Did you record it?
@DaAitch sorry I realized after it ended that I forgot to start the recording :(. I'm away for a few weeks but @gabrielschulhof might be able to catch you up, otherwise, I can do that when I get back.
I was cruising the web for solutions mixing AsyncWorker and promises to no avail. So I made my own very hackish subclass and I thought I'd put it here for reference:
using namespace Napi;

typedef Promise::Deferred Deferred;

class PromiseWorker: public AsyncWorker {
private:
    static Value noop( CallbackInfo const & info ) {
        return info.Env().Undefined();
    }

    static Reference< Function > fake_callback;

    static Reference< Function > const & get_fake_callback( Napi::Env const & env ) {
        if ( fake_callback.IsEmpty() ) {
            fake_callback = Reference< Function >::New( Function::New( env, noop ), 1 );
        }
        return fake_callback;
    }

    Deferred deferred;

public:
    PromiseWorker( Deferred const & d ): AsyncWorker( get_fake_callback( d.Env() ).Value() ), deferred( d ) {}

    virtual void Resolve( Deferred const & deferred ) = 0;

    void OnOK() {
        Resolve( deferred );
    }

    void OnError( Error const & error ) {
        deferred.Reject( error.Value() );
    }
};

Reference< Function > PromiseWorker::fake_callback;
The idea is to feed an unused callback to AsyncWorker. OnError rejects with the given exception while OnOK simply calls a new virtual method called Resolve to which the Deferred is passed.
So you implement Execute as usual, then implement Resolve where you build the resolve Value and feed it to Deferred::Resolve. The idea of passing the Deferred to Resolve is to avoid scope issues and make it possible to reject the Deferred in case some application-level condition arises. Could be just as easy to provide a PromiseWorker::Deferred method that would return the deferred so that one could just implement the logic in OnOK directly.
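For illustration, a hypothetical usage of the PromiseWorker above (continuing its using/typedef declarations; SleepWorker, Sleep, and the millisecond argument are made up):

```c++
#include <chrono>
#include <thread>

class SleepWorker : public PromiseWorker {
public:
    SleepWorker( Deferred const & d, int ms ): PromiseWorker( d ), ms( ms ) {}

    void Execute() override {
        std::this_thread::sleep_for( std::chrono::milliseconds( ms ) );
    }

    void Resolve( Deferred const & deferred ) override {
        deferred.Resolve( Number::New( Env(), ms ) );
    }

private:
    int ms;
};

Value Sleep( const CallbackInfo & info ) {
    auto deferred = Deferred::New( info.Env() );
    (new SleepWorker( deferred, info[ 0 ].ToNumber().Int32Value() ))->Queue();
    return deferred.Promise();   // can be awaited directly from JS
}
```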
I'm just getting back to coding in C++ so the code may reflect that. And I started using NAPI like a week ago and I'm still learning, very obviously.
Anyway, hope this can help.
* Discussion in latest N-API meeting - Unsafe Self-Destructing Behavior
* To be consistent with nan we should add
  virtual void Destroy() {
    delete this;
  }
* This allows the same heap to be used for destroy as for alloc, and the doc points people in the right direction. It's also SemVer minor.
* Use a ref count: set it to 1 on construction, increment it with SuppressDestruct. In the new Destroy method, decrement and only destroy if it reaches 0; call this from where we clean up after OnWorkComplete. Assert if the class is destroyed and the count is not 0.
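A rough sketch of the ref-count variant from those notes (member names, the protected destructor, and OnWorkCompleteCleanup are illustrative, not an agreed design):

```c++
#include <cassert>

class AsyncWorker {
public:
  void SuppressDestruct() { ++_refCount; }     // caller takes shared ownership

  virtual void Destroy() {                     // decrement; free when it hits 0
    if (--_refCount == 0) delete this;
  }

protected:
  // Protected so external code must go through Destroy().
  virtual ~AsyncWorker() { assert(_refCount == 0); }

  // Called internally where we clean up after OnWorkComplete, replacing the
  // current unconditional `delete this`.
  void OnWorkCompleteCleanup() { Destroy(); }

private:
  int _refCount = 1;                           // starts at 1 on construction
};
```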
I was cruising the web for solutions mixing AsyncWorker and promises to no avail. So I made my own very hackish subclass and I thought I'd put it here for reference:
Here is my adaptation of your class, just some minor refactoring and namespace cleanup:
#include <napi.h>

class PromiseWorker : public Napi::AsyncWorker {
public:
    PromiseWorker(Napi::Promise::Deferred const &d) : AsyncWorker(get_fake_callback(d.Env()).Value()), deferred(d) {}

    virtual void Resolve(Napi::Promise::Deferred const &deferred) = 0;

    void OnOK() override {
        Resolve(deferred);
    }

    void OnError(Napi::Error const &error) override {
        deferred.Reject(error.Value());
    }

private:
    static Napi::Value noop(Napi::CallbackInfo const &info) {
        return info.Env().Undefined();
    }

    // static so it can safely be called in the constructor's initializer list
    static Napi::Reference<Napi::Function> const &get_fake_callback(Napi::Env const &env) {
        static Napi::Reference<Napi::Function> fake_callback
            = Napi::Reference<Napi::Function>::New(Napi::Function::New(env, noop), 1);
        return fake_callback;
    }

    Napi::Promise::Deferred deferred;
};
* Discussion in latest N-API meeting - Unsafe Self-Destructing Behavior
* To be consistent with nan we should add
  ```c++
  virtual void Destroy() {
    delete this;
  }
  ```
* This allows the same heap to be used for destroy as alloc and the doc points people in the right direction. It's also SemVer minor.
* Use Ref count, set to 1 on construction, increment with suppress destruct. In new Destroy method decrement and only destroy if 0, call this from where we clean up after OnWorkComplete. Assert if class is destroyed and it is not 0.
@mhdawson I believe there's a problem with this. The Destroy() method cannot perform any logic other than delete this, because it's virtual. So, if somebody overrides it, it will no longer perform that logic, and if somebody chains up, it will unconditionally run delete this, which is what we're trying to avoid. So, if we implement SuppressDestruct() as the reffing of a counter, we must supply a corresponding Unref() so that the user may call Unref() to undo the effect of SuppressDestruct() before calling Destroy(), otherwise the reference counter will be 1 during destruction.
Hard-coded Javascript Function Callbacks, solution discussed:
- Static member in AsyncWorker that defines a Function that is empty
- OnOK and OnError check if the function is empty and, if it is, don't try to call it
- Create new constructors that don't take the parameter and call the others using the empty function
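One way this could look (a rough sketch only, not the final API; it folds the static-empty-Function detail into an empty FunctionReference member):

```c++
class AsyncWorker {
public:
    explicit AsyncWorker(Napi::Env env)                 // new callback-less constructor
        : _env(env) {}                                  // _callback stays empty

    explicit AsyncWorker(const Napi::Function& callback)
        : _env(callback.Env()), _callback(Napi::Persistent(callback)) {}

protected:
    virtual void OnOK() {
        if (!_callback.IsEmpty()) {                     // only invoke a real callback
            _callback.Call({ _env.Null() });
        }
    }

private:
    Napi::Env _env;
    Napi::FunctionReference _callback;
};
```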
Hey @gabrielschulhof / @mhdawson, I've been working on this implementation. I think there may be an issue here. If we remove the Function callback from all constructors, that would leave an empty constructor, and we have no reference to a valid napi_env. This means we cannot create a "default" async_resource object and async_resource_name string.
If we remove the Function, we would at least need to have an Env parameter in the constructor. Thoughts?
My first thought was that we should add the Env as a parameter. Any downsides to that which you can think of? I assume when you say "remove Function callback from all constructors" you mean for the new ones we are adding since we won't be removing the existing ones right?
Hi @mhdawson,
Yes, I do mean the ones we are adding newly. I don't see any downside... I guess the reason it wasn't there before is because we could get it from the required Function callback, so it makes sense that if the Function is gone, we'd need an Env.
Furthermore, I think the receiver Object can also be removed from these new constructors, as its only purpose was as the this context when calling the callback -- if no callback is present, no receiver is needed, so then the three constructors are:
explicit AsyncWorker(napi_env env);

explicit AsyncWorker(napi_env env,
                     const char* resource_name);

explicit AsyncWorker(napi_env env,
                     const char* resource_name,
                     const Object& resource);
The current design of Napi::AsyncWorker has a number of defects and limitations.
Issues
Unsafe Self-Destructing Behavior
Managing object lifetimes across multiple threads is a challenging problem. In the case of AsyncWorker, the caller that instantiates a new AsyncWorker instance has no control over when the work will be completed. To work around this problem, the AsyncWorker class currently has a "self-destructing" behavior - the last step of the OnWorkComplete callback function causes the object to delete itself (delete self;). This behavior is unsafe and causes a number of problems:
- The AsyncWorker class does not know how the AsyncWorker memory was allocated. Imagine the following code as an example: void runWorkOnThread() { AsyncWorkerSubclass worker; worker.Queue(); } In this case, new was never called and the AsyncWorker (unfortunately) lives on the stack.
- The work_complete callback executes asynchronously from the caller's lifetime of the object. If the caller holds a reference to the AsyncWorker and later calls Cancel() asynchronously from the main Node.js event loop, a race condition will occur. Cancellation is always an asynchronous operation and must not fail in a dangerous manner when called after execution of the asynchronous work has already completed. Imagine this scenario:
Hard-coded Javascript Function Callbacks
The AsyncWorker currently takes a callback napi::Function and optional napi::Object receiver as required parameters. The intent behind the callback function is to provide an easy mechanism to automatically call a user-defined callback function outside of the addon, but this makes an incorrect assumption that the implementing addon will use a callback design pattern.
Consider a "legacy-free" addon that wishes to return a Promise for direct use with async and await. The addon would need to implement a Javascript function as a callback, wrap C++ code inside the Javascript Function that marshals their return value and/or errors, then have that C++ code signal a deferred.
The requirement for a "Javascript round-trip" to execute C++ code or implement Promises is a major headache.
No Support for Promises
As discussed above, there is no way to directly bridge an AsyncWorker to a Promise.
No Support for Error objects
The AsyncWorker currently only supports Javascript Error objects instantiated with a message; there is no way to return a "richer" Error object or different error type from within an AsyncWorker. In my own case, I want to return additional, important error information returned from the proprietary encryption engine I'm wrapping - there's no way to do this presently, even from within OnOK() or OnError(), which execute on the main Node.js event loop.
No Return Values
There is no mechanism to pass a return value out of the AsyncWorker, whether to the callback function or a Promise.
Unsafe Access to Napi::Env
The class contains a public reference to a napi_env retrieved from the callback function passed to the constructor. While it's ultimately up to implementers to write their code correctly, the public Env() function is a dangerous temptation - if used from within the Execute() function, multiple threads will be unsafely using the same v8 Isolate.
Resolution
I've been exploring a number of potential solutions to the problems listed above, but I haven't solved all of the issues in a single solution I can pitch to the node-addon-api community. The intent behind this issue is to create a single place to list all of the potential problems with the current type and to keep track of the various design options and discussion for resolving the defects and limitations.
Potential ideas I've been exploring include:
- Removing the Env() function from the AsyncWorker type and modifying OnOK() and OnError() to take a Napi::Env as a parameter instead.
- Changing void Queue() to static void Queue(AsyncWorker* worker); to force it to be a pointer.
- Removing the self-deleting behavior in favor of the caller managing lifetime. The idea would be to have a higher level class/construct that addon implementers would use, while the AsyncWorker would mainly be used internally for management of the napi_async_work resource.
- Removing the callback function from the type and separating out any callback and/or promise behavior to subclasses.
- Exploring options of bridging C++'s async types (std::future, std::promise, etc.) with AsyncWorker. Probably not feasible, but newer versions of C++ are adding support for things like Future.then().
- Adding a virtual napi_value OnComplete(Napi::Env env) that could return a value back from the asynchronous operation. Note this still has the problem of getting the value or error safely out of the execute function.

No slam-dunk options yet - just quite a few ugly trade-offs. The idea of a higher level type that takes a Callable (lambda/etc) is very tempting, but the only clean way to do that and still keep our sanity is to use std::invoke. That creates a dependency on C++17 - in the case of Windows, Visual Studio 2015 or so.
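To make that trade-off concrete, here is an illustrative sketch of the kind of Callable-based helper that would rely on std::invoke and therefore C++17 (QueueAsync and Holder are hypothetical names; error handling is omitted):

```c++
#include <napi.h>
#include <functional>
#include <memory>
#include <utility>

template <typename Execute, typename Complete>
void QueueAsync(Napi::Env env, Execute execute, Complete complete) {
  // Heap-allocated record that carries the two callables through the C callbacks.
  struct Holder {
    Execute execute;
    Complete complete;
    napi_async_work work = nullptr;
  };
  auto holder = std::make_unique<Holder>(Holder{std::move(execute), std::move(complete)});

  napi_value resource_name;
  napi_create_string_utf8(env, "QueueAsync", NAPI_AUTO_LENGTH, &resource_name);

  napi_create_async_work(
      env, nullptr, resource_name,
      [](napi_env, void* data) {                   // worker thread
        std::invoke(static_cast<Holder*>(data)->execute);
      },
      [](napi_env env, napi_status, void* data) {  // main thread
        std::unique_ptr<Holder> h(static_cast<Holder*>(data));
        std::invoke(h->complete, Napi::Env(env));
        napi_delete_async_work(env, h->work);
      },
      holder.get(), &holder->work);

  napi_queue_async_work(env, holder.release()->work);
}
```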