devmachiine opened 6 months ago

I can imagine some pros being
- Increased performance
- Higher than an order of magnitude?
- How much CPU does signal-related processing consume on a website?
- Easier for a few web frameworks and "Some coauthors" to use signals
(These would be benefits of signals being part of javascript; the same would be true if it were part of the DOM api instead.)
- Less code to ship
- Better debugging possibilities
I can imagine some cons being
- Not being generally useful for the vast majority of javascript programmers
- Changes and improvements take longer to implement across all browsers
- Not being used after a decade, when we discover other approaches to reactive state management
Q: Isn't it a little soon to be standardizing something related to Signals, when they just started to be the hot new thing in 2022? Shouldn't we give them more time to evolve and stabilize?

A: The current state of Signals in web frameworks is the result of more than 10 years of continuous development. As investment steps up, as it has in recent years, almost all of the web frameworks are approaching a very similar core model of Signals. This proposal is the result of a shared design exercise between a large number of current leaders in web frameworks, and it will not be pushed forward to standardization without the validation of that group of domain experts in various contexts.
I think at the very least a prerequisite for this as a language feature should be that *almost all of the web frameworks* use a shared library for their core model of Signals. Then it would be proven that there is a use-case for signals as a standard, and it would be much easier to use that shared library as an API reference for a standard implementation.

If anyone could please elaborate more on why signals should be a language feature instead of a library, this issue could serve as a reference for motivation to include it in javascript. :upside_down_face:
@devmachiine @mlanza You might be surprised to hear that there is a lot of precedent elsewhere, even including the name "signal" (almost always as an analogy to physical electrical signals, if such a physical signal isn't being processed directly). I detailed an extremely incomplete list in https://github.com/tc39/proposal-signals/issues/222#issuecomment-2119369345, in an issue where I'm suggesting a small variation of the usual low-level idiom for signal/interrupt change detection.
It's been slow to make its way to the Web, but if you squint a bit, this is only a minor variation of a model of reactivity informally used before even transistors were invented in 1926. Hardware description languages are likewise necessarily based on a similar model.
And this kind of logic paradigm is everywhere in control-oriented applications, from robotics to space satellites. And almost out of necessity.
To allow for external monitoring in physical circuits, you'll need two pins: one carrying the data itself, and one pulsed to indicate the data changed. Then, consuming circuits can detect the output's rising edge and handle it accordingly.
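Translated into software terms, that edge detection might look like the following sketch (illustrative only, not part of the proposal; the function name is made up):

```javascript
// A software analogue of hardware edge detection: the consumer remembers
// the previous level and fires only on the low-to-high transition, just
// like a circuit triggered on a rising edge.
function makeRisingEdgeDetector(onRisingEdge) {
  let previous = false
  return function sample(current) {
    if (current && !previous) onRisingEdge()
    previous = current
  }
}

const detect = makeRisingEdgeDetector(() => console.log("rising edge"))
detect(false) // no output
detect(true)  // fires: low-to-high transition
detect(true)  // no output: level merely stayed high
detect(false) // no output: falling edge is ignored here
```

Sampling `false, true, true, false, true` fires the callback exactly twice, once per rising edge, however long the level stays high in between.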
Haskell's had similar for well over a decade as well, though (as a pure functional language) it obviously did signal composition differently: https://wiki.haskell.org/Functional_Reactive_Programming
And keyboard/etc events are much easier to manage performantly in interactive OpenGL/WebGL-based stuff like simple games if you convert keyboard events to persistent boolean "is pressed" states, save mouse position updates to dedicated fields to then handle deltas next frame, and so on. In fact, this is a very common way to manage game state, and the popularity of just rendering every frame like this is also why Dear ImGui is so popular in native code-based games. For similar reasons, that library also has some traction in highly interactive, frequently-updating native apps that are still ultimately window- or box-based (like most web apps).
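For example, a minimal version of that "is pressed" pattern (names here are illustrative, not from any particular engine; the DOM wiring in the comment is how it would typically be hooked up in a browser):

```javascript
// Persistent "is pressed" key state for a per-frame game loop.
// Event handlers mutate the set; the render loop just queries it.
class InputState {
  constructor() { this.pressed = new Set() }
  handleKeyDown(code) { this.pressed.add(code) }
  handleKeyUp(code) { this.pressed.delete(code) }
  isPressed(code) { return this.pressed.has(code) }
}

// In a browser you'd wire it up roughly like this:
//   const input = new InputState()
//   window.addEventListener("keydown", e => input.handleKeyDown(e.code))
//   window.addEventListener("keyup",   e => input.handleKeyUp(e.code))
// and then read input.isPressed("ArrowLeft") once per frame in the loop,
// instead of reacting to each transient event as it arrives.
```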
If anything, the bigger question is why it took so long for front end JS to realize how to tweak this very mature signal/interrupt-based paradigm to suit their needs for more traditional web apps.
As for other questions/concerns:
Signal performance and memory usage both would be improved with a native implementation.
- Iterating a subscriber `Set` natively (engines have direct access to the `Set`'s internal data anyways): a native iteration over the `set` doesn't need to go through the ceremony of either `set.forEach` or `set.values()`. If you split between small array and large set, you could even use the same loop for both and just change the increment count and start offset (and skip on hole), a code size (and indirectly performance) optimization you can't do in userland.
- Scheduling notifications natively rather than through `queueMicrotask(onNotify)`: you don't even need to allocate resolver functions at all, just the promise and its associated promise state.

As for utility, it's not generally useful to most server developers. This is true. It's also of mixed utility to game developers. It is somewhat niche. But there's two points to consider:

- It would be far from the first niche proposal to make it in, and there's functionality even more niche than this. Atomics are very niche in the world of browsers.
- Generators saw little direct use until `async`/`await` became widely supported.

The proposal is intentionally trying to stay minimal, but still good enough to make direct use possible. Contrast this with URL routing, where on the client side, HTML spec writers only added a very barebones history API and all but required a client-side framework to fill in the rest.
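The `queueMicrotask(onNotify)` idiom mentioned above can be sketched in userland like this (a minimal illustration, not the proposal's code; `onNotify` stands in for whatever flushes pending watchers):

```javascript
// Batching change notifications with queueMicrotask: any number of
// writes in one synchronous burst produce exactly one notification.
// A native engine could do the same with even less allocation, which
// is part of the performance argument above.
function makeNotifier(onNotify) {
  let scheduled = false
  return function scheduleNotify() {
    if (scheduled) return // already queued for this burst
    scheduled = true
    queueMicrotask(() => {
      scheduled = false
      onNotify()
    })
  }
}

const notify = makeNotifier(() => console.log("flush watchers"))
notify()
notify()
notify() // still only one "flush watchers" at the end of the microtask
```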
I think in the very least a prerequisite for this as a language feature should be that almost all of the web frameworks use a shared library [...]
A single shared library isn't on its own a reason to do that. And sometimes, that library idiom isn't even the right way to go.
Sometimes, it is truly one library, and the library has the best semantics: `async`/`await`'s semantics came essentially from the `co` module from npm, and almost nothing else came close to it in popularity. Its semantics were chosen as it was the simplest and soundest, though other mechanisms were considered. (The syntax choice was taken from C# due to similarity.) But this is the exception.
Sometimes, it's a few libraries dueling it out, like Moment and date-fns. The very heavy (stage 3) `Temporal` proposal was created to ultimately subsume those with a much less error-prone framework for dates and times that's clearly somewhat influenced by the `Intl` APIs. This is still not the most common case, though.
Sometimes, it's numerous libraries offering the same exact utility, like `Object.entries` and `Object.fromEntries` both being previously implemented in Lodash, Underscore, jQuery, Ramda, among so many others I gave up years ago even trying to keep track of the "popular" ones with such helpers. In fact, both ES5's `Object.keys` and all the `Array` prototype methods added from ES5 to today were added while citing this same kind of extremely broad library and helper precedent. CoffeeScript of old even gave syntax for part of that - here's each of the main object methods (roughly) implemented in it:
```coffee
Object.keys = (o) ->
  (k for own k, v of o)

Object.values = (o) ->
  (v for own k, v of o)

Object.entries = (o) ->
  ([k, v] for own k, v of o)

Object.fromEntries = (entries) ->
  o = {}
  for [k, v] in entries
    o[k] = v
  o
```
Speaking of CoffeeScript, that's even inspired additions of its own. And there's been many cases of that and/or other JS dialects inspiring additions.
- For private fields, since `private` is a reserved word, a sigil was used instead.
- JS uses `a ?? b` and `a ??= b` where CoffeeScript uses `a ? b` and `a ?= b`, and JS uses `?.` for calls and bracketed accesses as well (to avoid ambiguity with ternary expressions). No `a?` shorthand was added for `a == null`, though - that was explicitly rejected.

There's also other cases (not just `Temporal`) where existing precedent was more or less thrown away for a clean-sheet re-design. For one glaring example, iterables tossed most existing precedent. Anything resembling symbol-named methods is used by nobody else. Nobody had `.throw()` or `.resume()` iterator methods. `.next()` returns a special data structure like nobody else. (Most use "has next" and "get next and advance", Python uses a special exception to stop, and many others stop on `null`.) Library precedent centered around `.forEach` and lazy sequences, which was initially postponed with some members at the time rejecting it (this has obviously since changed). JS generators are full stackless coroutines able to have both yield and return values, but so were Python's about a decade prior, so that doesn't explain away the modeling difference.
It's not that I don't think signals are a staple of development. They most certainly are, at least for me. I just think you might get some pushback on what makes sense for the primitives.
For context, I myself coming in was hesitant to even support the idea of signals until I saw this repo and dug deeper into the model to understand what was truly going on.
And yes, there's been some pushback. In fact, I myself have been pushing back on two major components of the current design:
- The reason the `Signal.subtle` namespace exists is pretty flimsy, and I feel it should be flattened out instead. (TL;DR: the `crypto.subtle` analogy is weak, and all other justifications I've seen attempted are even less persuasive for me.)
- The watcher API (the `Watcher` class) has a number of issues, both technical and ergonomic. (I'll omit specifics here for brevity - they are very nuanced.)

I also pushed back against `watched`/`unwatched` hooks on computeds for a bit, but have since backed off from that. I've also been pushing hard for the addition of a secondary tracked (and writable) "is pending" state to make async function-based signals definable in userland.
@mlanza Welcome to the world of the average new stage 1 proposal, where everything is wildly underspecified, somehow both hand-wavy and not, and extremely under flux. 🙃
https://tc39.es/process-document/ should give an idea what to expect at this stage. Stage "0" is the wildest dreams, and stage 1 is just the first attempt to bring a dose of reality into it.
Stage 2 is where the rubber actually meets the road with most proposals. It's where the committee has solidified on a particular solution.
Note that I'm not a TC39 member. I happen to be a former Mithril.js maintainer who's still somewhat active behind the scenes in that project. I have some specific interest in this as I've been investigating the model for a possible future version of Mithril.js.
A single shared library isn't on its own a reason to do that. And sometimes, that library idiom isn't even the right way to go.
Good point, I agree. I can see the similarity to game designers and gamers who propose balance changes which wouldn't benefit the game (or gamers) as a whole and/or would have unintended consequences.
everywhere in control-oriented applications .. very common in hardware and embedded
Interesting. Especially the circuitry example! But that X exists in Y isn't on its own enough justification to include X in Z. I don't think javascript is geared towards those use cases; they're more in the domain of c/zig.
As for utility, it's not generally useful to most server developers. This is true. It's also of mixed utility to game developers. It is somewhat niche. But there's two points to consider: It would be far from the first niche proposal to make it in, and there's functionality even more niche than this. Atomics are very niche in the world of browsers.
Good point! I found the same to be true with the `Symbol` primitive. For years, I didn't really get it, but once I had a use case, I loved it.
I reconsidered some of the pros of signals being baked into the language (or DOM api) which I stated:
Less code to ship
Even if signals usage is ubiquitous, technically fewer bytes would be sent over the wire, but practically the difference is negligible.
Performance
Technically yes, but practically? If a substantial portion of compute is taken up by signals processing, it's probably a simulation or control-oriented application, and here I think it's out of domain scope for javascript again.
Recall there is a proposal for adding Observables to the runtime, which is directly related to signals: https://github.com/tc39/proposal-observable

Similar concerns were raised in the discussion that concluded with not moving forward with the Observable proposal.
I think there will be a lot of repeat discussion:
- Why does this need to be in the standard library? No answer to that yet.
- Where does this fit in? The DOM?
- Are there use cases in Node.js? (examples of DOM apis that make sense in DOM and Node, but not in language)
- Concerns about where it fits in host environments
- Stronger concerns: will this be the thing actually _used_ in hosts?
I can appreciate the standardization of signals, but I'm not convinced that tc39 is the appropriate home for signals. The functionality can be provided via a library, which is much easier to extend and improve across contexts.
Technically yes, but practically? If a substantial portion of compute is taken up by signals processing, it's probably a simulation or control-oriented application, and here I think it's out of domain scope for javascript again.
@devmachiine You won't likely see `.set()` show up on performance profiles, but very large DOM trees (I've heard of trees in the wild as large as 50k elements, and had to direct someone with 100k+ SVG nodes to switch to canvas once) using signals and components heavily could see `.get()` showing up noticeably.
But just as importantly, memory usage is a concern. If you have 50k signals in a complex monitoring app (say, 500 items, each with 15 discrete visible text fields, 50 fields across 4 dropdowns, 20 error indicators, and 5 inputs), and you can shave an average of about 20 bytes off each of those signals by simply removing a layer of indirection (2x4=8 bytes) and not allocating arrays for single-reference sets (2x32=64 bytes per impacted object, conservatively assuming about 20% are single-listener + single-parent), you could've shaved off around an entire megabyte of memory usage. And that could be noticeable.
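For what it's worth, those figures check out; a quick back-of-the-envelope script using only the numbers quoted above (integer math to avoid float noise):

```javascript
// Back-of-the-envelope check of the memory estimate in the comment.
const items = 500
const signalsPerItem = 15 + 50 + 20 + 5      // text + dropdown + error + input fields
const totalSignals = items * signalsPerItem  // 45000, roughly the quoted "50k"

const indirectionSavings = 2 * 4             // 8 bytes saved on every signal
const smallSetSavings = 2 * 32               // 64 bytes where arrays are skipped
const singleRefPercent = 20                  // conservative: 20% single-listener

// Average savings per signal, in hundredths of a byte: 800 + 1280 = 2080 (20.8 B)
const avgSavingsX100 = indirectionSavings * 100 + smallSetSavings * singleRefPercent
const totalSavings = (totalSignals * avgSavingsX100) / 100

console.log(totalSignals) // 45000
console.log(totalSavings) // 936000 bytes, i.e. just under a megabyte
```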
To me, the biggest advantage of this being part of the language is interoperability.
If you want multiple libraries and UI components to interoperate by being able to watch signals in a single way, a standard feature is the only way to accomplish that. It's infeasible to have every library use the same core signals library, and to eliminate all sources of duplication (from package managers, CDNs, etc.) that would bifurcate the signal graph.
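To illustrate the bifurcation concern, here's a deliberately tiny toy signal library (not the proposed API; all names are made up), instantiated twice the way a duplicated npm package would be. Each copy has its own dependency-tracking context, so an effect from one copy silently fails to track state from the other:

```javascript
function makeSignalLib() {
  let activeEffect = null // this copy's private tracking context
  return {
    state(value) {
      const subscribers = new Set()
      return {
        get() {
          if (activeEffect) subscribers.add(activeEffect) // track the reader
          return value
        },
        set(next) {
          value = next
          for (const rerun of subscribers) rerun()
        },
      }
    },
    effect(fn) {
      const rerun = () => {
        const prev = activeEffect
        activeEffect = rerun
        try { fn() } finally { activeEffect = prev }
      }
      rerun()
    },
  }
}

const libA = makeSignalLib() // the copy your framework bundled
const libB = makeSignalLib() // a duplicate pulled in via a CDN or lockfile quirk

const count = libA.state(0)

let seenByB = -1
libB.effect(() => { seenByB = count.get() }) // reads 0, but never subscribes

let seenByA = -1
libA.effect(() => { seenByA = count.get() }) // same copy: subscribes fine

count.set(5)
console.log(seenByA) // 5 — tracked within one copy
console.log(seenByB) // 0 — the duplicate's effect never re-ran
```

The state from `libA` checks `libA`'s tracking context, which is empty while `libB`'s effect runs, so the cross-copy subscription is silently dropped. A single built-in graph makes this failure mode impossible.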
So to restate, perhaps a little differently, just give us the optimized primitives, not a replete, one-size-fits-all signal library.
Well, I also think that the built-in APIs should be as ergonomic as reasonably possible. We shouldn't require that a library be used to get decent DX.
I personally think the current API is near a sweet spot because it makes the common things easy (state and computed signals), and the complex things possible (watchers). Needing utility code for watchers makes sense, but IMO basic signal creation and dependency tracking should be usable with the raw APIs.
In this way, even the current library on offer could be built from these primitives.
I struggle to think of what lower-level primitives could be useful. You need centralized state for dependency tracking. Maybe you could separate a signal's local state from its tracked state - say, State doesn't contain its own value - but you still need objects of some sort to store dependency and dirtiness data. I don't know what a lower-level API would even get you over the current State and Computed.
@mlanza To be concrete, what would your Atom implementation look like under the current API, vs an alternative that may be lower-level? Is State a hindrance?
First, what do you mean by "is State a hindrance"?
I mean, is the class `Signal.State` more difficult to use to implement your Atom than whatever a lower-level primitive might be? Can we compare the current API against a hypothetical one?
I'm just trying to get concrete here. Are these primitives in the current proposal minimal and complete? What would be lower level?
You seemed to be saying that the current API could be built from lower-level primitives. So I'm asking, what are those lower-level primitives? And what does your library look like implemented with the current proposed API vs those lower-level primitives?
If we were talking significant performance improvements I'd have a significantly different opinion on this, but we aren't. I don't see any of the examples given here as convincing of the need for baking a concept into the language as a feature.
Sure, some languages have signals - many don't. Many that don't have their own messaging/queue systems.
When I look at proposals to any language, framework, etc., there's two criteria that I consider:
This proposal fails on both criteria, IMO. Signals can already be implemented - as they have been in so many languages, each with their own pros and cons. Sure, frameworks may adopt native Signals internally, but then they have to work around limitations that are not intrinsic to the language to provide the same experience, and their value-add is no longer signals but /their/ view of signals - a view countless users may already agree is optimal.
So if/when they get added, the JavaScriptCore, SpiderMonkey and V8 teams will have to implement them in a compatible way. The Chromium, Safari and Firefox teams will have to validate and ensure their functionality within their browser environments. The Bun, Deno and NodeJS teams will have to validate and ensure their functionality within their non-browser environments. If even one of these gets something wrong, it leads to fragmentation, which leads to polyfills, which leads to additional complexity for something that already exists today and can be used with relative ease even without the use of third party libraries.
Performance is quite frankly the only argument here and it's quite weak (again, just my opinion). A better approach would be to identify the specific issues that signals libraries face and try to solve for those - which often leads to resolving issues outside of the topic at hand while keeping ECMAScript from becoming a bloated mess of hundreds of reserved keywords and features that lead to the last 5 lines of code looking like a completely different language than the next 5.
The thing that's missing for me in those two criteria is interop. Many languages' standard library features could be done in userland, but gain a lot of ecosystem value when packaged with the language. Maps can be implemented in libraries of course, but if you have a standard interface then lots of code can easily interact via the standard Map.
Signals has an even bigger interop benefit from being built in than utility classes do, because of its shared global state. It would be difficult to bridge multiple userland signal graphs correctly. By being built in, code can use standard signals and be useful to anyone working in any library or framework that uses standard signals. You can see this already with the `signal-utils` package.
Additionally on the web a very big benefit of being a standard is being able to be relied on by other standards. There's a lot of potential in having the DOM support signals directly in some way, but that's only possible if signals are either built into JS or built into the DOM.
@justinfagnani then where do you draw the line? JSX? Type checking? JavaScript shouldn’t be an “ecosystem,” we shouldn’t be looking to add libraries as core language features unless there’s significant challenges that can’t otherwise be solved and that simply isn’t the case here.
Map solves significant limitations in the language that can’t be solved in userland without significant drawbacks. The memory cost of a userland approach alone justifies its existence as a language feature.
Memory, speed, and interop are the three huge benefits I'm expecting with built-in signals.
Memory can only be optimized so much in a generalized system (hence my call to reconsider signals as a whole, as opposed to adding features that better facilitate signals).
Speed can only be optimized so far by making certain assumptions, and frameworks can often make better assumptions.
Interoperability isn’t particularly an issue and I’ve yet to see good examples of how this could provide better interoperability.
Reactive data interoperability has been a huge issue, in my experience. Unfortunately, I can't give the details around most of that due to NDAs. But it's a very serious issue. Standard signals would be worth it to me and my stakeholders even if the only thing it delivered was interoperability and none of the memory or performance hopes were realized.
@EisenbergEffect signals isn't synonymous with reactivity, signals enable reactivity but it's not everything that encompasses signals, as do many proposals that IMO were far better approaches to solving this issue than in-built signals.
Signals are reactivity, messaging, concurrency, data management, so on and so forth. Using signals just to address reactive processes is overkill.
Interoperability isn’t particularly an issue
I think there are a lot of people who would disagree with that.
signals isn't synonymous with reactivity, signals enable reactivity but it's not everything that encompasses signals, as do many proposals that IMO were far better approaches to solving this issue than in-built signals.
Signals are reactivity, messaging, concurrency, data management, so on and so forth. Using signals just to address reactive processes is overkill.
@jlandrum
@dead-claudia this still doesn't address the key point, and if anything only makes things worse for signals.
Signals - the concept - can use concurrency. Any person implementing this spec could choose to use concurrency at the engine level. Some signals libraries use concurrency. My point is that even in the proposal, native Signals do not aim to implement but a small fraction of what many libraries offer, and the idea that "It's not intended for end users but library developers" is a massive red flag.
The point everyone keeps landing on is messaging - and yes, we do need better messaging. Many proposals mentioned were designed to address this but abandoned. Adding signals is not how we get there IMO.
As for making DOM APIs exposed to signals, that is a massive and drastic reworking of much of the API that can have some serious repercussions. Making various APIs return Signals would most certainly have performance penalties. If a library wants to do this - that's fine, the person using the library has accepted the ramifications. Most of these can easily - with minimal code - be handled with event listeners.
I've yet to see a compelling example of built-in signals being anything more than a QOL feature only for some people and nothing more.
@jlandrum Replying in multiple parts.
@dead-claudia this still doesn't address the key point, and if anything only makes things worse for signals.
Signals - the concept - can use concurrency. Any person implementing this spec could choose to use concurrency at the engine level. Some signals libraries use concurrency.
In theory, yes. In practice, the spec and entire core data structure precludes implementation concurrency. In fact, the very in-practice lack of concurrency also makes `async`/`await` integration rather awkward.
My point is that even in the proposal, native Signals do not aim to implement but a small fraction of what many libraries offer, [...]
About the only thing this doesn't natively offer that (1) frameworks commonly do offer and (2) isn't deeply specific to the application domain (like HTML rendering) is effects, and that's by far one of the biggest feature requests in the whole repo.
But even that is very straightforward:
```javascript
function effect(fn) {
  const tracker = new Signal.Computed(() => { fn() })
  // Run this part in your parent computed
  return function track() {
    queueMicrotask(() => tracker.get())
  }
}
```
[...] and the idea that "It's not intended for end users but library developers" is a massive red flag.
It's not that signals are not intended for use by end users. End user usage is being considered, just not as a sole concern.
Syntax is being considered in the abstract to help simplify usage by end users. It's just been deferred to a follow-on proposal, like how `async`/`await` was a follow-on rather than immediately part of ES6/ES2015.
The point everyone keeps landing on is messaging - and yes, we do need better messaging. Many proposals mentioned were designed to address this but abandoned. Adding signals is not how we get there IMO.
Keep in mind, signals aren't a general-purpose reactivity abstraction. You can't process streams with them, for one. (That's what async generators and callbacks are for.)
They're specifically designed to simplify work with singular values that can change over time, and only singular values.
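A tiny sketch of that distinction (plain JS, no signal library involved): a stream consumer must see every value exactly once, while a signal consumer only ever needs the latest value, so intermediate values are intentionally droppable.

```javascript
const streamLog = []     // stream semantics: every value must be delivered
let signalValue = null   // signal semantics: only the current value exists

function publish(v) {
  streamLog.push(v)      // a stream may not drop this
  signalValue = v        // a signal happily overwrites it
}

publish(1)
publish(2)
publish(3)

console.log(streamLog)   // [1, 2, 3]
console.log(signalValue) // 3 — a late reader never sees 1 or 2, by design
```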
As for making DOM APIs exposed to signals, that is a massive and drastic reworking of much of the API that can have some serious repercussions. Making various APIs return Signals would most certainly have performance penalties. If a library wants to do this - that's fine, the person using the library has accepted the ramifications. Most of these can easily - with minimal code - be handled with event listeners.
This is incorrect. Browsers can avoid a lot of work:
And much of the stuff I listed before can uniquely be accelerated further:
- Input value

The input's value can be read directly from the element, and a second "scheduled value" and a third "external input" value can be used to track the actual external state.

- On `input.value` write, the user input value would be updated. Then, the "scheduled value" would be swapped with the user input value, and if it's null, a microtask would be scheduled to mark that signal as having an update. No UI locks needed.
- On `input.value` read, the scheduled value is swapped with null, and the input value is set to that.

Combined with `preventDefault` being a boolean property on the signal, it can fully bypass the JS event loop.
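A rough userland approximation of that batching scheme, for illustration only (the proposal defines no such `inputValueSignal` helper, and `notifyDirty` stands in for whatever marks the signal stale; a native implementation would do this inside the browser with less overhead):

```javascript
// Wrap an input in a signal-like object: reads hit the element directly
// (the element itself is the store, no caching), and change notifications
// are coalesced into a single microtask per burst of input events.
function inputValueSignal(input, notifyDirty) {
  let scheduled = false
  input.addEventListener("input", () => {
    if (scheduled) return // already queued for this burst
    scheduled = true
    queueMicrotask(() => {
      scheduled = false
      notifyDirty() // one notification per burst, not one per keystroke
    })
  })
  return {
    get: () => input.value,
    set: (v) => { input.value = v },
  }
}
```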
- Drag and drop state
- Pointer state
- Hover state
Pointer and mouse button/etc. state could be read directly, using double buffering. Unless coalesced values are needed, this is just a simple ref-counted object.
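The double buffering mentioned here could look something like the following sketch (illustrative only; the class name and shape are made up):

```javascript
// The event producer writes into a back buffer while readers see a
// stable front buffer, swapped once per frame. No per-event Event
// object allocation, and reads are consistent for a whole frame.
class DoubleBufferedPointer {
  constructor() {
    this.front = { x: 0, y: 0, buttons: 0 } // what readers see this frame
    this.back = { x: 0, y: 0, buttons: 0 }  // what the producer writes into
  }
  write(x, y, buttons) { // called by the event producer, any time
    this.back.x = x
    this.back.y = y
    this.back.buttons = buttons
  }
  swapBuffers() { // called once per frame by the host
    const t = this.front
    this.front = this.back
    this.back = t
    // Carry the latest state forward so a quiet frame stays consistent.
    this.back.x = this.front.x
    this.back.y = this.front.y
    this.back.buttons = this.front.buttons
  }
  read() { return this.front } // stable for the whole frame
}
```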
- Location state ([...])
Near identical situation to inputs; just the source of updates (navigation rather than user action) is different.
- DOM ready state
Near identical situation to inputs; just the source of updates (navigation rather than user action) is different, and it only needs to double-buffer a 1-byte enum primitive rather than allocate a whole structure.
@dead-claudia perhaps I need a more concrete example of what this might look like for these examples.
Will it replace existing properties? If so, this would potentially break code not looking to use signals if they're trying to get a property of a DOM element as a string and instead get a Signal.
Will they be additional properties? If so, we already have a lot of property bloat to handle legacy code.
Will the properties be somehow convertible to Signals? If so, what about mixed use code bases? And how do they internally know that a non-reactive property needs to update on change?
What's being described sounds like a layer on top of the current ecosystem too - in which it most certainly would increase resource usage not reduce it, especially in many cases where a simple object with a getter/setter/subscription model would suffice as well as offer the flexibility of asynchronous updates.
I greatly appreciate the write up and explanations but this still sounds like something that promises one thing but will in reality be something completely different.
@jlandrum First and foremost, these would never replace existing properties. New properties would necessarily need to be used in browsers.
There's not much property bloat in JS, but the ship for treating the property bloat problem sailed before we even had the problem in the first place. There have been very few things removed from browsers after once being broadly available, and only for one of a few reasons:
- `arguments.caller` comes to mind here.
- `<ruby>` and `charset` on `<script>` are two examples of this.
- `<blink>` is unique in that it not only rendered any text in it unreadable, it also presented a potential safety risk to users with certain disabilities like epilepsy. But this is the only such case I'm aware of with this rationale.

What's being described sounds like a layer on top of the current ecosystem too - in which it most certainly would increase resource usage not reduce it, especially in many cases where a simple object with a getter/setter/subscription model would suffice as well as offer the flexibility of asynchronous updates.
Mouse location is currently not directly queryable. A hypothetical `pointerState = document.primaryPointer.get()` signal could return that information right away, and inside a watched signal, it could then be registered for updates.
Such a signal would simplify and accelerate a couple needs that currently rely extensively on highly advanced use cases for pointer events:
- Games could poll `pointer.x` and `pointer.y` in the main game loop so they know what angle to render the camera at in the scene.
- Drag-and-drop handlers could just set ``elem.style.transform = `translate(${start.x + pointer.x - click.x}, ${start.y + pointer.y - click.y})` `` on every frame until pointer up. It's far simpler than the current drag and drop API.

@dead-claudia new properties would be such a massive undertaking given that when such events would need to be triggered varies so much. Plus they would absolutely need to be lazy to avoid the issue of overhead caused by unsubscribed signals.
As for mouse position, there's no need to poll for it; the event gets fired every time it changes and doesn't get fired if it doesn't change. Yes - the event system is very inefficient given the amount of Event instances that get created but that's a separate issue entirely - and you can't just make it a signal, there would be a reasonable amount of pushback to have to call get() when you want this information without using signals.
@dead-claudia new properties would be such a massive undertaking given that when such events would need to be triggered varies so much. Plus they would absolutely need to be lazy to avoid the issue of overhead caused by unsubscribed signals.
The subscription would obviously be lazy, and signals as proposed today already have such semantics. The `.get()` would be eager, and would read state the browser already needs to be able to efficiently access from the main JS thread anyways.
As for mouse position, there's no need to poll for it; the event gets fired every time it changes and doesn't get fired if it doesn't change. Yes - the event system is very inefficient given the amount of Event instances that get created but that's a separate issue entirely - and you can't just make it a signal, there would be a reasonable amount of pushback to have to call get() when you want this information without using signals.
Evidence? My experience differs on this one.
One example is IndexedDB as compared to `localStorage`: the API is gigantic, but the internals of what it replaced were even heavier, despite having far fewer methods.

@dead-claudia Why would WebGPU have influenced other web APIs? They're completely different use cases and environments. Is it possible individuals have applied WebGPU semantics to their projects? Absolutely - but this furthers my point of signals being best made possible through changes to ECMAScript as opposed to putting them in directly; it should be up to those implementing their flavor of signals to implement them.
Reporting API isn't something that changes how developers would interact with things, I'm not sure what the point of mentioning it is.
Mutation observers are a questionable feature that were a product of their time; they likely wouldn't exist as they do if we had many of the alternatives we could have with modern language capabilities.
WebSQL wasn't standardized for a reason, but it too lives on its own and doesn't exist within the DOM or other APIs unless explicitly used.
@jlandrum Responding in parts.
Reporting API isn't something that changes how developers would interact with things, I'm not sure what the point of mentioning it is.
The API surface is small, but it involves a lot of work on browsers' part.
Similarly, a theoretical `changedSignal = elem.attr("changed")`, while having a small API surface, would require significant changes in how attributes work, if browsers want to make it more efficient than essentially a mutation observer wrapper.
Mutation observers are a questionable feature that were a product of their time; they likely wouldn't exist as they do if we had many of the alternatives we could have with modern language capabilities.
Mutation observers serve important roles in page auditing, and they enable userscripts and extensions to perform types of page augmentations they wouldn't be able to do otherwise.
WebSQL wasn't standardized for a reason, but it too lives on its own and doesn't exist within the DOM or other APIs unless explicitly used.
But IndexedDB was. And my point for bringing that (and WebGPU) up is to raise awareness that browsers don't shy away from highly complex APIs as long as they actually bring something to the table.
Anyways, this is starting to get off-topic, and I'm starting to get the impression confirmation bias may be getting in the way of meaningful discussion here. So consider me checked out of this particular discussion. 🙂
@dead-claudia you make assertions, I question the viability of your assertions, then respond with more assertions while refusing to give meaningful answers to any of the concerns raised. On top of that you've made sure to give my comments a thumbs down every step of the way.
You've not once truly addressed any of my concerns and mostly replied with "well what if it works this way" - often times proving my concerns are very much valid.
Please refresh yourself on the guidelines of contributing which generally disallows negative reactions to things you don't like and passive aggressive dismissals to opinions you disagree with.
Gentle reminder that all participants are expected to abide by the TC39 Code of Conduct. In summary:
I think it's great that there is a drive towards standardization of signals, but it is too specialized to be standardized here.
(please let us know in an issue - that's what this issue is for)

Signals is an interesting way to approach state management, but it is a much more complicated concept than something like a PriorityQueue/Heap/BST, which I think would be more generally useful as part of javascript itself.
What problem domains besides some UI frameworks would benefit from it? Are there examples of signals as a language feature in other programming languages?
What would be the benefit of having signals baked-in as a language feature over having a library for doing it?
When something is part of a standard, there's more work & time involved in making changes/additions than if it were a stand-alone library. For signals to be part of javascript, I think there would have to be a big advantage over a library.