WebReflection opened 6 months ago
A related question: why does the watcher notify callback need to be called at an unsafe time before the node graph is fully dirtied?
Unless there's an important reason for this, it seems like a foot gun because a library has to be very careful not to expose any API directly via the watcher that might result in a signal read/write.
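To make the footgun concrete, here is a minimal sketch, assuming the proposed Signal.State / Signal.subtle.Watcher API is available (e.g. through a polyfill); onChange is a hypothetical library helper that naively forwards the notification to user code:

const count = new Signal.State(0);

// hypothetical helper: a library exposing the watcher notification directly
function onChange(signal, userCallback) {
  const watcher = new Signal.subtle.Watcher(() => {
    // runs synchronously, before the graph is fully dirtied; if userCallback
    // reads or writes any signal here, it does so at an unsafe time
    userCallback();
  });
  watcher.watch(signal);
}

onChange(count, () => console.log(count.get())); // the problematic read

count.set(1); // the notify callback fires synchronously during this write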
@sorvell I think this already answers it: https://github.com/tc39/proposal-signals/issues/185
In a graph where signals are shared and every participant owns the value (since any of them can change it), those that merely read the value should be updated as soon as .set(newValue) happens; otherwise any program relying on other computeds or references that indirectly trust a synchronized graph would fail.
A silly example (don't do this at home):
const items = signal(0);
let lastDetails = {};

effect(() => {
  // synchronous XHR: blocks until the response is available
  const xhr = new XMLHttpRequest();
  xhr.open('GET', `/db/?items=${items.value}`, false);
  xhr.send(null);
  lastDetails = JSON.parse(xhr.responseText);
});

addItem.addEventListener('click', () => {
  items.value++;
  // currently fails: the effect has not re-run yet, so lastDetails is stale
  render(lastDetails);
});
In this (still silly, but illustrative) example there is no way, short of trying to fix it via queueMicrotask, to guarantee that lastDetails is in sync with the changed signal. The effect has no dangerous side effects and no way to blow itself up; it's just a greedy synchronizer of some other shared state that should be instantly updated and available at its latest state whenever 3rd party code plays with the signal value.
As an implementor of various signal-like libraries I still think batched or async effects are actually better and more desirable, but in my libraries effects can run synchronously out of the box unless I explicitly opt into an async effect ... and that's what is missing here. Or better: a watcher that is incapable of doing anything and always requires a detached way to operate (see the queueMicrotask guard) looks like a bad API, or rather something no user should have to implement, since it's always going to look, or be, the same. Either the proposal states that the Watcher callback can't ever do anything synchronously and always requires a queued, detached callback, or we find a way to make my use case possible. Thanks.
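For reference, the "queueMicrotask guard" being discussed is roughly the following shape, a minimal sketch of a deferred effect() built on the proposed Watcher (similar in spirit to the proposal's own examples); with it, the click handler above reaches render(lastDetails) before the effect has re-run:

let needsEnqueue = true;

const watcher = new Signal.subtle.Watcher(() => {
  // nothing useful can happen synchronously here: just schedule a flush
  if (needsEnqueue) {
    needsEnqueue = false;
    queueMicrotask(flush);
  }
});

function flush() {
  needsEnqueue = true;
  for (const computed of watcher.getPending()) computed.get(); // re-run dirty effects
  watcher.watch(); // re-enable notifications
}

function effect(callback) {
  const computed = new Signal.Computed(callback);
  watcher.watch(computed);
  computed.get(); // run eagerly once
  return () => watcher.unwatch(computed);
}

So items.value++ in the click handler only marks the effect as dirty and schedules the flush; the very next line still sees the previous lastDetails.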
P.S. As prior art, MutationObserver doesn't ask users to guard its operations: records are already queued internally and the callback fires "at the best time". Here the API really imposes orchestrating a queue, but it doesn't allow that queue to be skipped, or known changes to be already consumed/applied before the very next line of code outside that callback is reached. I don't think this is a huge blocker, but I do think it's a questionable approach: an API that can't work synchronously, yet is triggered synchronously for whatever reason. I see this as a bug in the current spec/proposal, because there is nothing useful for that watched-queue API to do at the moment its callback is invoked. I hope this makes sense; whether it's a bad thing to do or not, I am questioning why the Watcher(callback) is invoked synchronously if there's apparently nothing synchronous I can do in there besides adding a guard that detaches the consumption of the queued dirty changes to apply.
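For comparison, the MutationObserver prior art mentioned above: records are queued internally and the callback receives the whole batch later, so the consumer never needs its own guard:

const observer = new MutationObserver((records) => {
  // delivered asynchronously (as a microtask), with every queued record at once
  console.log(records.length, 'queued mutations');
});
observer.observe(document.body, { childList: true, subtree: true });

document.body.append(document.createElement('div')); // nothing fires here
// the callback runs later, once the current synchronous code has finished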
I tried to make a JSBin of the problem with the current implementation, to see if there is a way to solve it: https://jsbin.com/rojozis/edit?html,output
I have a hunch that only a sync effect (I think it needs a different name though) can help -- but maybe there is some data management technique I don't know about. :thinking:
I've been writing an implementation mimicking Solid.js' signals on top of the proposed spec, and I've been bitten by this since Solid.js expects synchronous effects. But I've realized that as long as the reads/writes happen outside of the watcher callback it's fine; it just means that writes have to be batched and have to come from a signal you explicitly control (unless you wrap said external writes in a batch function).
Here's how I've set up the watcher: https://gist.github.com/mary-ext/2bfb49da129f40d0ba92a1a85abd9e26/61835cdbb271318615905655b307a13e799e7535#file-solid-signals-ts-L237-L248
Signal writes are wrapped in runComputation: https://gist.github.com/mary-ext/2bfb49da129f40d0ba92a1a85abd9e26/61835cdbb271318615905655b307a13e799e7535#file-solid-signals-ts-L64-L71
Writes to multiple signals can be batched via batch, which just passes them to runComputation; same deal.
The only unfortunate part is that the watcher doesn't specifically tell you which computed is now dirty, so my solution right now involves creating a watcher for every effect, which doesn't seem ideal.
At least for me, I think the tradeoff that it's only applicable to our own managed signals, or that writes have to be wrapped in a closure, is fine, and we could still prohibit doing reads/writes inside the watcher, but we could at least address the above issue of needing a dedicated watcher for each effect.
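For context, a simplified sketch of the shape described above (not the gist's actual code; dirty and effectWatchers are illustrative names, while runComputation and batch come from the gist): one watcher per effect so a notification identifies which effect is dirty, and writes funneled through runComputation so dirty effects re-run synchronously right after the write, outside any notify callback:

const dirty = new Set();          // effect computeds queued for re-run
const effectWatchers = new Map(); // computed -> its dedicated watcher

function effect(callback) {
  const computed = new Signal.Computed(callback);
  // one watcher per effect, so the notification tells us *which* effect is dirty
  const watcher = new Signal.subtle.Watcher(() => dirty.add(computed)); // bookkeeping only
  watcher.watch(computed);
  effectWatchers.set(computed, watcher);
  computed.get(); // run eagerly once
}

function runComputation(write) {
  write(); // the actual signal write(s); notify callbacks only queue bookkeeping
  for (const computed of dirty) {
    computed.get();                       // re-run the effect, synchronously
    effectWatchers.get(computed).watch(); // re-arm its watcher
  }
  dirty.clear();
}

const batch = (fn) => runComputation(fn); // multiple writes, same deal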
EDIT: It might be doable to have just one watcher by checking .getPending() right at the end, but my problem with this is that updating dirty effects would then be done separately from creating new effects. (Not sure if that'll be a problem or not, yet.)
No idea how it'd go for externally managed state though; .getPending() won't return the signals we've already processed during the synchronous batch, but that also just means we fired off a microtask for nothing if everything comes from state we manage.
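Concretely, the "fired off a microtask for nothing" case would look something like this sketch (illustrative only, not the gist): a single shared watcher whose notification schedules a microtask as a fallback for externally managed writes, while batch flushes synchronously, so the scheduled microtask then finds nothing pending:

let scheduled = false;

const watcher = new Signal.subtle.Watcher(() => {
  // fallback for writes we don't control: defer to a microtask
  if (!scheduled) {
    scheduled = true;
    queueMicrotask(flush);
  }
});

function flush() {
  scheduled = false;
  for (const computed of watcher.getPending()) computed.get();
  watcher.watch();
}

function batch(write) {
  write();
  flush(); // synchronous flush; a microtask scheduled above may then find nothing pending
}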
This comment is a whole mess, I'll try to clean it up later.
I am not sure this is a duplicate of https://github.com/tc39/proposal-signals/issues/136, but it looks to me like there's no way to have blocking effects whenever that's desired.
If I have just this it throws an error:
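(The exact snippet isn't shown here; what follows is only a sketch of the kind of code being described: an attempt at a greedy, synchronous effect whose watcher re-runs it straight from the notify callback, driven by a for loop of writes.)

const counter = new Signal.State(0);

const log = new Signal.Computed(() => console.log(counter.get()));

const watcher = new Signal.subtle.Watcher(() => {
  // greedy/blocking intent: re-run the effect right away...
  log.get(); // ...but this is a signal read inside the notify callback
  watcher.watch();
});

watcher.watch(log);
log.get(); // initial run

for (let i = 1; i <= 3; i++) counter.set(i); // the for loop that errors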
As soon as I detach that for loop via a microtask, everything is fine ... even the example suggests the implementation is about batched effects, but what if I want greedy effects instead? Is that fully discouraged, or is there something else I'm not seeing? FWIW, with setters one can do anything desired synchronously, and that's also the case with Proxies intercepting set operations; here it looks like if no watcher is used the effect will never be triggered again, but whatever else I do either breaks or becomes yet another batch instead of an instant effect ... effects also run ASAP when defined, and I'd love a way to let them run ASAP when anything changes too, not always batched ... does it make sense? Thanks!