solidjs / solid

A declarative, efficient, and flexible JavaScript library for building user interfaces.
https://solidjs.com
MIT License

Redefining `batch`, and the concept of a new batched effect #879

Closed trusktr closed 2 years ago

trusktr commented 2 years ago

Describe the bug

import { createSignal, batch } from "solid-js";

const [count, setCount] = createSignal(0);

batch(() => {
  console.log('set:', setCount(123));
  console.log('get:', count()); // logs 0 instead of 123
});

Your Example Website or App

https://playground.solidjs.com/?hash=-937748346&version=1.3.9

Steps to Reproduce the Bug or Issue

See link

Expected behavior

Reading and writing signals should be atomic operations.

Screenshots or Videos

No response

Platform

n/a

Additional context

I spent a weekend debugging an issue I thought was in LUME, because I never expected that reading a signal after setting it would return an old value. Essentially, the write is not atomic.

The issue in my case also wasn't obvious because the read was far removed (several methods deep) from where the write happened.

The reason I wanted to use batch was to group the write of a signal with some method calls after it, so that reactivity would be triggered only after the write and the subsequent method calls.
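For illustration, the pattern looked roughly like this (the signal and method names here are hypothetical, not LUME's actual API):

```js
import { createSignal, batch } from "solid-js";

const [position, setPosition] = createSignal([0, 0, 0]);

// Hypothetical follow-up methods that read position() internally:
const updateBoundingBox = () => console.log("bbox for", position());
const notifyObservers = () => console.log("notified");

batch(() => {
  setPosition([1, 2, 3]);
  updateBoundingBox(); // surprise: position() still returns [0, 0, 0] in here
  notifyObservers();
}); // effects depending on position flush once, after the batch
```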

trusktr commented 2 years ago

To add some reasoning as to why it would be ideal for writes to be atomic, consider how it would be if ECMAScript had a dependency-tracking reactivity feature, including a batch syntax:

signal count@ = 0

batch {
  count@ = 123
  console.log(count@) // 0, confusion
}

effect {
  console.log(count@)
}

Or if ES ever gains decorators for variable declarations, and we implement reactivity as accessors, consider how the modified semantics of the existing language feature would cause confusion:

@signal
let count = 0

batch(() => {
  count = 123
  console.log(count) // 0, expected 123
})

In my particular case, I have signal-backed accessors on objects, and this was confusing:

obj.count = 0

batch(() => {
  obj.count = 123
  console.log(obj.count) // 0
})

edemaine commented 2 years ago

FWIW, the docs (at least now) correctly reflect this:

Unless you're in a batch, effect, or transition, signals update immediately when you set them. — https://www.solidjs.com/docs/latest/api#createsignal

I think I've heard Ryan talk about changing this behavior, by revealing the "current" signal value even when it hasn't propagated into derived memos etc.

But at some level your proposal is at odds with batch, which aims to delay updates so they are all done together. With batch, there will always be some inconsistency. For example:

import { createSignal, createMemo, batch } from "solid-js";

const [signal, setSignal] = createSignal(0);
const double = createMemo(() => 2 * signal());

batch(() => {
  setSignal(1);
  console.log(signal()); // currently 0; you're proposing 1
  console.log(double()); // must be 0
});

For batch to be able to do its batching, we can't expect the derived memo double to have updated yet in the second console.log. So you could argue that the current behavior is "better", because it satisfies the invariant that double() === 2 * signal().

On the other hand, I agree that the current behavior makes it hard to write code (especially library code), because you generally don't know whether you're in a batch. But just fixing signals that are set directly won't fix the problem in general; memos won't have updated. Though it's maybe more intuitive that memos "take time" to update...

Another idea I have: in dev mode, we could issue a warning if the user reads signal() when it has queued changes that haven't landed yet. This would have made your debugging life easier, without changing actual behavior. I think it's good to encourage mostly writing within batch...
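A userland sketch of what such a warning could look like (createWarningSignal is an illustrative name, not a proposed Solid API; function-style updaters are omitted for brevity):

```js
import { createSignal } from "solid-js";

// Wrap a signal so that reading a value older than the latest write
// logs a dev-time warning, without changing any behavior.
function createWarningSignal(initial) {
  const [get, set] = createSignal(initial);
  let latest = initial;

  const warningSet = (next) => {
    latest = next; // track the most recent write ourselves
    return set(next);
  };

  const warningGet = () => {
    const value = get();
    if (value !== latest)
      console.warn("reading from signal that has pending changes:", value, "->", latest);
    return value;
  };

  return [warningGet, warningSet];
}
```

Reading inside a batch after writing would then log the warning while leaving the returned value unchanged.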

ryansolid commented 2 years ago

Yeah, this one is very much intentional and is important to keep consistency, i.e. be glitch-free. It also comes into play with async consistency. I have wondered if there are other options, but the repercussions would be widespread and impactful, and the current behavior gives very important guarantees.

edemaine commented 2 years ago

Do you think the dev-mode warning ("reading from signal that has pending changes") would be useful, or are there legit uses that might be annoying?

ryansolid commented 2 years ago

I'm ok with that I think. Hmm... In Marko 6 right now we throw an error in this scenario. In React they just assume it's always in the past, and in React 18 they batch everywhere now so it's consistent. Hmm.. Yeah maybe this could promote better behavior. The one place it gets weird I suppose is if an Effect sets a value and then before it gets resolved other effects read from it. My concern is that this warning ends up just being a red herring.

edemaine commented 2 years ago

I'm not sure I understand how that can happen, but I believe you that it can. :-) Perhaps the warning could be restricted to writing and reading within the same computation, which seems likely a bug. But I'm not sure how much overhead that would incur... (only in dev mode, but potentially an annoying amount of tracking to do)

trusktr commented 2 years ago

createMemo(() => 2 * signal());

What if, when in a batch, the memo getter were simply executed just as if it were a plain function? I would assume that, being in a batch, I'm not in an effect (at least that's how I think of code design working out), so just give me (calculate) the latest value.

important to keep consistency, i.e. be glitch-free

I believe always-deferred effects (not just the first time) are the mechanism for mostly glitch-free reactivity (the "batched everywhere" idea mentioned above).

The widely adopted native framework, Qt, defers all computations (I can't find the article at the moment, but it was a breaking change in a major version release), and so does Knockout.

In any case, it is possible to make a primitive like createBatchedEffect or createDeferredEffect (we chatted about it in Discord not too long ago) where the idea can be tested, outside of Solid core.

I will use this concept in LUME for all effects exclusively and report back on how it works out. I'm already imagining the benefits:

el.rotation.x = 10 // one effect
el.rotation.y = 20 // another effect, etc.
// and I don't want end users to have to manually `batch` everything, it's not as good of a DX

one place it gets weird I suppose is if an Effect sets a value and then before it gets resolved other effects read from it.

That's totally fine. In one solution, another microtask is scheduled for the future, and the effect will run and see the new value.

Another way to solve this is to just always allow values to be read, and only worry about effect scheduling as the performance mechanism (i.e. memos always re-evaluate if needed, lazily). Successive effects in a microtask will read the latest state, which effectively also treats them like sub-tasks of the microtask they're in, compared to the other solution. The only time deferring is needed is in the original macrotask. So essentially one effect is like a batch relative to the following effects in the same microtask in which the list of effects is being run.

Or in other words: if a variable changed, and an effect that depends on it is coming up ahead, there's no need to defer; just let the effect read the current value (with lazy evaluation for memos).
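A framework-agnostic sketch of that lazy-read idea (lazyMemo and invalidate are illustrative names, not Solid APIs):

```js
// A derived value is recomputed only when read after being marked dirty,
// so reads always see current state and nothing runs eagerly.
function lazyMemo(compute) {
  let cached;
  let dirty = true;
  return {
    invalidate() { dirty = true; }, // called whenever a dependency changes
    read() {
      if (dirty) { cached = compute(); dirty = false; }
      return cached;
    },
  };
}

let count = 0;
const double = lazyMemo(() => 2 * count);

count = 5;
double.invalidate();        // a write only marks dependents dirty
console.log(double.read()); // 10, computed on demand, never stale
```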

TLDR pseudo idea:

This would be quite a big change to Solid though, especially after 1.0. For end users, it would be breaking for createEffect unless a primitive with a new name is released instead (but keeping the same name makes the library better in general). Implementation-wise, it may impact SSR, Suspense, Transition, etc.

trusktr commented 2 years ago

Here's the Discord conversation:

https://discord.com/channels/722131463138705510/780502110772658196/942598067444678727

And here's what we landed on for testing the idea implemented outside of core:

import { createComputed } from "solid-js";

function createBatchedEffect(fn) {
  let initial = true
  let deferred = false

  createComputed(() => {
    if (initial) {
      // First run: execute immediately so dependencies get tracked.
      initial = false
      fn()
      return
    }

    // Subsequent changes: defer once per change set to a microtask,
    // then re-create the batched effect with fresh dependency tracking.
    if (deferred) return
    deferred = true

    queueMicrotask(() => createBatchedEffect(fn))
  })
}

It always queues a microtask, because as far as I know, only Solid core would have the ability to iterate on effects (faster) instead of queuing new microtasks for each one (slower).
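For example, usage would look something like this sketch:

```js
import { createSignal } from "solid-js";

const [x, setX] = createSignal(0);
const [y, setY] = createSignal(0);

createBatchedEffect(() => console.log(x(), y())); // runs immediately: "0 0"

setX(1);
setY(2); // no explicit batch(): the effect re-runs once, in a microtask, logging "1 2"
```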

Maybe we would want it to be deferred on the first run too? Right now it runs immediately the first time. Will play with it.

trusktr commented 2 years ago

Here's a playground example:

https://playground.solidjs.com/?hash=-518047525&version=1.3.9

Interesting, I didn't know regular effects were "deferred" relative to a component (i.e. they fire after the JSX effects, even though the JSX effects are defined later in source order).

We can see the regular effect runs twice, once for each set; the first set shows a reactive glitch. The batched effect runs once after setting both signals, after everything else, with no glitch, as we'd expect.

ryansolid commented 2 years ago

To be clear, that isn't the definition of a glitch. Having a setter run to completion twice is consistent. Having it observable at any point that a derivation doesn't reflect its source signal is a glitch. Basically, if in this example you can at any point see the 3 logs not being identical, then it isn't glitch-free: https://playground.solidjs.com/?hash=-1355040064&version=1.3.9

Not all reactive systems are, but it is something we value.

trusktr commented 2 years ago

To be clear, that isn't the definition of a glitch

According to whom? It depends on the point of reference.

Here is one definition:

A glitch is a temporary inconsistency in the observable state.

Here's the example from Wikipedia demonstrated in Solid, which I'd consider as having an inconsistent temporary state as described by both of those sources:

https://playground.solidjs.com/?hash=-283178106&version=1.3.9

You mentioned that using createEffect that way is essentially the wrong way to use Solid, and that the solution is to use createComputed for that purpose, like so:

https://playground.solidjs.com/?hash=-140464327&version=1.3.9

I'd still say the first example counts as a glitch (be it from incorrect but accidentally-easy usage of Solid), and the reason is that Solid's current effects run for every signal modification instead of being batched.

Here's the first example using a custom batched/deferred version of createEffect for all three (instead of a combination of createComputed and the original createEffect). The usage is the most intuitive, because we just write our code, glitch-free, without having to pick one API or another, just describing effects for what they are:

https://playground.solidjs.com/?hash=-1105853944&version=1.3.9

The code here for reference:

```js
import { render } from "solid-js/web";
import { createSignal, createComputed } from "solid-js";

function Counter() {
  const [seconds, setSeconds] = createSignal(0)
  setInterval(() => setSeconds(s => s + 1), 1000)

  const [t, setT] = createSignal(0)
  const [g, setG] = createSignal(false)

  // t = seconds + 1
  // g = (t > seconds)
  createEffect(() => { setT(seconds() + 1) })
  createEffect(() => { setG(t() > seconds()) })

  createEffect(() => {
    console.log('t:', t())
    console.log('t > seconds?', g())
  })

  return (
    <div>see console</div>
  );
}

render(() => <Counter />, document.getElementById("app"));

function createEffect(fn) {
  let initial = true
  let deferred = false

  createComputed(() => {
    if (initial) {
      initial = false
      fn()
      return
    }

    if (deferred) return
    deferred = true

    queueMicrotask(() => createEffect(fn))
  })
}
```

Implementing it in core would be more efficient of course.

I believe this is a lot better than having a batch API, and it eliminates points of cognitive load from this set of points:

Now we're left with one point of cognitive load, simplified:

Maybe much rarer cases that most people may never need to care about will require createComputed or some similar synchronous effect, but I don't know of any off the top of my head, and we probably wouldn't even need it if signals always returned the last value that was set, including lazy memos.

ryansolid commented 2 years ago

You are correct that effects being batched leads to a behavior where they apply the changes in groups. To be clear, I consider that implementation not equivalent to the Wikipedia example. They are describing derivations, not effects, which are about the choice of scheduling to affect the outside world. createMemo is actually what you want.
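For the seconds/t/g example, a sketch of the memo version (illustrative, not the linked playground's exact code):

```js
import { createSignal, createMemo, createEffect } from "solid-js";

const [seconds, setSeconds] = createSignal(0);
setInterval(() => setSeconds((s) => s + 1), 1000);

// Declared derivations: the graph knows their dependencies.
const t = createMemo(() => seconds() + 1);
const g = createMemo(() => t() > seconds());

createEffect(() => console.log("t:", t(), "t > seconds?", g())); // g is always true
```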

But look at it this way. I moved the console.log in your original example: https://playground.solidjs.com/?hash=-137682011&version=1.3.9

In any case, removing batch on the effects call also makes it go away without a microtask, and since it is scheduled after, it has the same stabilization. So why did I add batching to effects, you ask? Transitions. A Transition must be batched, and if we start one during an effect queue it entangles. Mind you, I did start queuing Transitions, so it may be worth revisiting. I'll give this a look.

trusktr commented 2 years ago

To be clear, I consider that implementation not equivalent to the Wikipedia example. They are describing derivations, not effects

That article was meant to be generic, and does not really care whether we're using observables, dep-tracking effects, or even just event emitters. I think you're constraining the definition by thinking about it too specifically. In Solid.js, "derivations" can be performed via effects, which is what my examples did, so the concept in the Wikipedia example applies (just under different terms; they had to use some terminology to describe anything at all).

createMemo is actually what you want.

But that's also an effect, with some added caching and a signal. That's another way to avoid glitches.

I knew that, but I intended to show the issue only with effects and signals (terms the Wikipedia article is not using, but nonetheless one way to achieve what it describes). Even with the batched/deferred effects, memos would still be a useful alternative.

But look at it this way. I moved the console.log in your original example: https://playground.solidjs.com/?hash=-137682011&version=1.3.9

That's yet another way.

In any case, removing batch on the effects call also makes it go away without a microtask, and since it is scheduled after, it has the same stabilization

Outside of components too?


Circling back to the original but related topic: I think all signals should simply hold their last set value, irrespective of effect scheduling or batching. And I'd say that triggering signal-dependent code inside a batch, side-stepping effect scheduling, should be considered bad practice, but I think it'll be a rare case. Plus, memos should re-evaluate if they need to, lazily when called (and then even the next effect can skip evaluation, since the memo evaluated already, and only trigger dependents).

With that set of changes, at least there will be no surprises in signal values, and we have a scratch pad.

ryansolid commented 2 years ago

The problem is that signals don't have dependencies, so we can't ensure things only run once synchronously (important for being glitch-free). So, strictly, if you see a reactive statement with an equals sign, it is a createMemo. There is a huge difference between assignment and declaration here.
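To illustrate the distinction with a minimal sketch:

```js
import { createSignal, createMemo, createEffect } from "solid-js";

const [count, setCount] = createSignal(1);

// Declaration: a reactive statement with an equals sign; the graph
// knows `double` depends on `count` and can schedule it exactly once.
const double = createMemo(() => 2 * count());

// Assignment: just a signal write from inside a computation; the graph
// cannot see that `doubled` is meant to be derived from `count`.
const [doubled, setDoubled] = createSignal(2);
createEffect(() => setDoubled(2 * count()));

setCount(2); // the memo updates exactly once, ordered before effects run
```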

Effects showing something inconsistent isn't great. I had my reasons: a desire to finish change sets, as without batch it can enter the pure execution part mid-queue execution. But for consistency it's probably fine, since with computed/effects we can never guarantee not running twice anyway. Combined with other scheduling, batching also makes it easier to eliminate infinite loops... whereas this will just keep going.

In any case, signals still shouldn't show the last value in a batch, for the consistency reasons I have stated before. But at least we can avoid opting people in unintentionally. Although it is fair to point out that in this special execution zone losing consistency may be OK.

trusktr commented 2 years ago

The problem is that signals don't have dependencies, so we can't ensure things only run once synchronously

I'm not sure what you mean. Can you make an example using the version of createEffect from https://playground.solidjs.com/?hash=-1105853944&version=1.3.9 that shows the issue?

It would be really helpful if you could show working/broken examples; a picture painted through code really helps me see.

trusktr commented 2 years ago

In any case, signals still shouldn't show the last value in a batch, for the consistency reasons I have stated before

I disagree; for me personally it's the most confusing part of Solid that I've encountered. (EDIT: but I love Solid, which is why I'm here, and I'm only trying to be objectively truthful.) I spent hours debugging my project before I realized it was that. I wouldn't have intuitively guessed that to be it.

trusktr commented 2 years ago

as without batch it can enter the pure execution part mid-queue execution

I'm not sure what you mean here, but the createEffect I demo'd doesn't need batch, and makes things easy. It can run more than once occasionally, but that's totally fine (solvable in other ways), and any successive run will always be in a consistent state.

The main idea is it can improve end user experience in some cases.

trusktr commented 2 years ago

The main difference is in the output of these two examples:

  1. https://playground.solidjs.com/?hash=-2052756987&version=1.3.9 glitches (inconsistent state)
  2. https://playground.solidjs.com/?hash=1834217109&version=1.3.9 no glitches (consistent state)

The second one nicely dodges inconsistent state.

The goal with this idea is to prevent a user from seeing inconsistent state in that sort of scenario, or from receiving stale values from a reactive variable, while leaving effects entirely useful for reacting to data changes, and also eliminating the need for batch().

trusktr commented 2 years ago

I know this would be a big change to make, and possibly breaking if done properly (rather than compounding new scheduling on top of old for backwards compat), and we have other things to focus on right now. The benefits of Solid are already huge. :heart:

I often end up describing what I think might be a good idea motivated by issues I'm experiencing.

ryansolid commented 2 years ago

Most of this is motivated by correctness rather than intuitiveness because I have to safeguard other potential features. That's the main reason why I'm very careful on the batching behavior in terms of keeping the value in the past.

The current state of effects is a different thing and is intentional. While it might not seem intuitive, it works the same way clocks work in S.js. As weird as it might seem, it isn't strictly inconsistent. It runs all updates to completion before applying the next change. In fact, in S.js all computations work this way. I did change the behavior in Solid to be more like MobX because of the confusion there. But around the time I added concurrent transitions I switched it back just for effects, to keep things isolated. That is no longer necessary though, so I look forward to restoring that.

I'm not sure what you mean. Can you make an example using the version of createEffect from https://playground.solidjs.com/?hash=-1105853944&version=1.3.9 that shows the issue?

While not strictly that version, createComputed has the same characteristics. There are a lot of reactive cycle examples in Solid's tests. Here's one: https://playground.solidjs.com/?hash=-2026677841&version=1.3.9

As I said, it just over-executes slightly. It isn't typically a big deal, but it's just the sort of thing that memos' knowledge of the graph helps prevent. Which is part of why I ultimately want to get rid of createComputed and essentially encourage people to never write signals from effects.

trusktr commented 2 years ago

I hear you, but I don't think the S.js technique is the only way to implement reactivity while still having performance.

All that you just described would not be necessary with always-batched effects, and there would be no values-in-the-past. Win-win.

But the thing is, it would break just about everything. All APIs built on top of createEffect would be impacted.

I will make a concept once I get a chance.

trusktr commented 2 years ago

encourage people to never write signals from effects.

Regardless of any changes to Solid, this would be great to highly encourage in a best practices section. Always memos for derived values, and erasing the prose around createComputed without actually removing it so it isn't a breaking change yet in 1.0, and maybe even marking it @deprecated if the intent is to actually remove it later.

ryansolid commented 2 years ago

encourage people to never write signals from effects.

Regardless of any changes to Solid, this would be great to highly encourage in a best practices section. Always memos for derived values, and erasing the prose around createComputed without actually removing it so it isn't a breaking change yet in 1.0, and maybe even marking it @deprecated if the intent is to actually remove it later.

Yeah I want to at least know that this will happen. I've been exploring a lot of things, but like when I introduced it, I still feel there are cases that it is needed. When I know with more certainty I'd definitely move to deprecated. What is clear is that there are tradeoffs. Like maybe any cases I can't eliminate from createComputed do need to move to createEffect. I'm not happy about that though so I want to see what the options are.

trusktr commented 2 years ago

I made some concepts. Keep in mind these examples are essentially reactivity built on top of reactivity, so performance is off the table, they only serve to show the conceptual behavior:

Here is your first example modified slightly in order to show how it executes:

https://playground.solidjs.com/?hash=-1825045236&version=1.3.9

Note the output at the end is bcddcc

Here is createEffect re-created (using createComputed as an implementation detail for sake of demo) to use microtask batching:

https://playground.solidjs.com/?hash=802237130&version=1.3.13

Note that the final output is bcdc, eliminating two unnecessary effect runs. Also note that each effect runs in its own separate microtask.

The following demo works the same, with the same bcdc output, but now all effects run in a single microtask using a custom queue:

https://playground.solidjs.com/?hash=-358673099&version=1.3.13
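A rough sketch of such a single-microtask queue (illustrative; not the playground's exact code):

```js
// All scheduled effects run in one microtask. Re-adding an effect moves it
// to the end of the queue, so an effect whose dependencies change mid-flush
// runs after the effects that wrote to them.
const queue = new Set();
let flushing = false;

function scheduleEffect(run) {
  queue.delete(run); // if already queued, move it to the end
  queue.add(run);
  if (!flushing) {
    flushing = true;
    queueMicrotask(() => {
      for (const effect of queue) {
        queue.delete(effect);
        effect(); // may schedule more effects; Set iteration picks them up
      }
      flushing = false;
    });
  }
}
```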

Those examples are a bit small compared to real-world scenarios, in which there will be a higher number of effects, plus more dependencies used within effects. With that in mind, the following example re-creates dependency tracking for the purpose of being able to move not-yet-executed effects to the end of the queue if any effects prior to them in the queue write to any of their dependencies. In the previous examples, an effect is moved to the end of the queue only once, upon its first dependency change (not exactly what we want), whereas in the following example any dependency change moves a not-yet-run effect to the end of the queue. However, the example is not complex enough to show any difference, and the output is still bcdc:

https://playground.solidjs.com/?hash=1889548591&version=1.3.13

I need to make a more complex scenario to show how it will eliminate even more unnecessary effect runs, but I think you may already be able to picture it.

These examples didn't show how glitches are eliminated, but basically the effect runs you saw eliminated are the ones where glitches would have happened.

ryansolid commented 2 years ago

The reduction of extraneous running can happen without microtask queuing. Right now the microtasks are there intentionally, because of the desire to apply all the changes in steps. But as I mentioned, it is probably safe to remove the batching and not have that behavior. And that basically resolves everything that caused this issue to be reported in the first place. I need to verify a couple of things, but I believe there is a synchronous solution here that doesn't introduce your definition of glitches.

So to me the microtask queuing is a separate issue, and a question of whether we want to introduce async. This has an impact on things like performance even without causing any extra execution. My gut is to avoid this, simply from inconsistent observability, i.e. my definition of glitches. We've looked at more advanced approaches with Marko here as well, including breaking chained updates over frames, etc., essentially turning an infinite loop into an animation, but even then we don't initially schedule effects async.

Part of my hesitance might be coming simply from the fact that microtask-queuing initial effects has a negative performance impact on benchmarks. I've tried it. And since the goal is to not apply effects in batches but have it just free-for-all resolve, I see no benefit to introducing microtasks beyond that. It would be different if we were to apply each batch after the first later, i.e. the first batch is synchronous and the next batch is basically setImmediate. But that has today's behavior of batches. If we want consistency we have to apply the complete changeset, and if that is the world of all possible changes that happen throughout the process, well then further scheduling is off the table (by my definition of glitch-free).

trusktr commented 2 years ago

it is probably safe to remove the batching and not have that behavior. And that basically resolves everything that caused this issue to be reported in the first place.

I happened to rename it before I saw the reply.

I don't see how it is possible for effects to be batched without a microtask and without the user having to explicitly opt into batching, apart from the user having to use an API like batch, which batches relative to a particular piece of code rather than all code in the current macrotask.

Another thing is that end users by default aren't encouraged to use batch for everything, so it is an opt-in optimization that will be a missed opportunity they aren't thinking about (and a good thing about Solid is that they usually won't need to anyway!).

As far as benchmarks, I shall try to see what I can do.

My hunch is that the cost of a single microtask is only notable for small cases; js-framework-benchmark is nice, but not the most representative of all app use cases. The cost of a microtask should be negligible the bigger an app gets, especially if it has many moving parts like an animated graphical scene; similar to my theory that the cost of WebGL DOM bindings for WebAssembly will be negligible compared to the savings from running a bunch of physics and matrix math in Wasm with help from SIMD, or similar. My goal is to get some measurements in place.

As for glitches, or inconsistent state, I imagine them leading to some performance pitfalls: e.g. an effect temporarily showing a false that causes a component to be destroyed, then immediately re-created once the boolean turns back to true.

What we need is more examples of more scenarios. The ones in my previous comment were just simple starters. I'll work on making a collection of them so we can better understand the implications.

React 18 just came out, and is moving to default batching for all state changes.

ryansolid commented 2 years ago

React 18 just came out, and is moving to default batching for all state changes

To be fair, React already did this in 90% of places; they just finished the story. And again, they stay consistent by showing all values in the past.

If it isn't clear: I think batching should work the same way as today. Effects just don't necessarily need to be batched. We already schedule them into their own queue that runs synchronously at the end of changes.

trusktr commented 2 years ago

they (React) stay consistent by showing all values in the past.

Values-in-the-past are one of the absolute worst things about React, in my opinion. They are a bad developer experience. I've had to deal with them many times at work, and I just don't like them.

Coupling the render cycle to the observability of state values leads to people writing brittle code, especially if they don't come from a strong React background, as I've seen in real-world React projects.

Things should just be as intuitive as possible.

I don't believe values-in-the-past are required for good performance (proof needed).

I think batching should work the same way as today. Effects just don't necessarily need to be batched. We already schedule them into their own queue that runs synchronously at the end of changes.

That only works within Solid's framework, not outside of it. Changes can happen outside of effects, for example in a setTimeout, as the following examples will show. Microtasks are the only mechanism to batch synchronous JavaScript work in a way that is inclusive of all JavaScript features, not just Solid's.
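A minimal illustration of why a microtask batches everything (plain JS, no Solid):

```js
// A microtask runs only after ALL synchronous work in the current task,
// so writes from anywhere (events, setTimeout, plain function calls)
// coalesce into a single flush without any explicit batch() call.
const writes = [];
let pending = false;

function write(value) {
  writes.push(value);
  if (!pending) {
    pending = true;
    queueMicrotask(() => {
      pending = false;
      console.log("flushed once with:", writes.splice(0));
    });
  }
}

setTimeout(() => {
  write(1);
  write(2); // both land in the same flush: "flushed once with: [ 1, 2 ]"
}, 0);
```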


Here are a few more examples (with more to come). The first one shows an issue, in that the scene re-loads during the calculation of a variable's value every second, resulting in unexpected and poorly performing behavior (try to scroll to zoom):

https://playground.solidjs.com/?hash=1025549064&version=1.3.13

As the comment in there implies, we can solve it by using batch, but the main point I am making is that users don't use batch by default, and thus have the additional cognitive overhead of thinking about when they need to use it:

https://playground.solidjs.com/?hash=-394158961&version=1.3.13

Now here it is solved without having to think about batch: a simpler dev experience, and arguably better macro-level performance:

https://playground.solidjs.com/?hash=-54762955&version=1.3.13

In these examples, I purposefully made the problem obvious, because the scene takes some time to load, so the issue is very visually apparent.

However, in many real-world scenarios, the issue will not be visible, and will go unnoticed until the performance is bad enough. In the following example, the same problem exists as in the first example, but we can't see it:

https://playground.solidjs.com/?hash=-32468437&version=1.3.13

Actually we can see the issue if we try to select the "Page 1" text: it will undo our selection.

It is very possible for these sorts of issues to go unnoticed, especially in cases where selection or user interaction doesn't change the state of visuals, but still re-creates the DOM.


Deferred/batched effects so far,


Upcoming examples will work on showing improvements in examples that have more effects and more signals per effect; examples that are more representative of real applications.

trusktr commented 2 years ago

No new examples yet, but some fixes and better types:

https://playground.solidjs.com/?hash=-375386698&version=1.3.13

Of course, this is totally not ideal as implemented, being essentially a hack on top of Solid, and it only works with the special createSignal in that example.

ryansolid commented 2 years ago

I'm moving this into discussions because that is what this has become, and trying to follow this is a bit hard. There are 3 things being discussed, from my perspective:

  1. Should values be kept in the past while batching?
  2. Should effects run in the same microtask or should it defer to a future microtask?
  3. Should effects batch all signal writes during execution?

All of these can be talked about independently. While there is a lot of discussion about implementation, I want to answer these questions, because I think there is too much conflation between the different parts.

My take is:

  1. Definitely. Keep in mind batching is just a way of collecting values to set; once all things are set, it runs with the updated values. Derived values losing consistency with their source signals seems unacceptable.

What I'm hearing from this thread is the desire to only batch effects and update everything else immediately. That is a different meaning for batching. To me, batching is also a guard against expensive computations, and a mechanism that only batches effects would not protect us here.

  2. Same microtask is better for performance and avoids the potential of executing in between, but updates from events or setTimeouts run to completion unless you explicitly batch. Getting support for delegated event handlers would be relatively trivial, and there is an open issue on DOM Expressions, but I feel the message right now is more consistent about when you need to batch or not. Of course, always deferring is consistent as well.

  3. I don't know. It makes a ton of sense to keep things clean between "executions", but it is also arbitrary to make things an "execution", and it isn't intuitive to people; plus we force them into batches they don't expect. It does ensure predictability: you write to signals, and those depending on them will execute twice. Not batching may or may not run effects twice, depending. With pure things like computations we wouldn't care, but maybe we care here.

None of these are actually easy to answer, and they are very fundamental to the reactive system. So the discussion here is worth having. But I'd start here before worrying about implementation.