tc39 / proposal-record-tuple

ECMAScript proposal for the Record and Tuple value types. | Stage 2: it will change!
https://tc39.es/proposal-record-tuple/

Records/tuples as record keys and sets. Defining an implicit sort for both #392

Open sirisian opened 1 week ago

sirisian commented 1 week ago

I realize that records/tuples as keys and unordered tuples (or sets, #377) are both connected to the same underlying problem of sorting. (Sorting string keys seems to have been a quick decision.)

#{
  #{ a: 0 }: true,
  #{ b: 1 }: true
}
#^[
  #{ a: 0 },
  #{ b: 1 }
]

Essentially, to make this work in both cases, the elements would need to be implicitly sorted by some defined algorithm.

Right now the user seems to be required to manually sort sets before the comparison can be used, which is cumbersome and unstandardized. Even something as simple as converting a Set of records to a tuple is verbose.
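As an illustration of how verbose this is today, here is a sketch using plain objects in place of records (the #{ } syntax is not yet implemented in engines). `canonical` and `toSortedTuple` are hypothetical helper names, and the canonical encoding assumes flat records with JSON-serializable values:

```javascript
// Encode a flat record-like object deterministically: sorted keys,
// serialized as [key, value] pairs.
const canonical = (rec) =>
  JSON.stringify(Object.keys(rec).sort().map((k) => [k, rec[k]]));

// Turn a Set of record-like values into a sorted array ("tuple")
// so that two sets with the same contents compare the same way.
const toSortedTuple = (set) =>
  [...set].sort((a, b) => canonical(a).localeCompare(canonical(b)));

const s1 = new Set([{ b: 1 }, { a: 0 }]);
const s2 = new Set([{ a: 0 }, { b: 1 }]);

// Insertion order differs, but the canonical tuples agree:
const eq =
  JSON.stringify(toSortedTuple(s1).map(canonical)) ===
  JSON.stringify(toSortedTuple(s2).map(canonical));
console.log(eq); // true
```

All of this boilerplate is what an implicitly defined sort would make unnecessary.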

For future consideration, it would be nice to have one or both of these features if sorting were defined. To me this seems like something that should be planned as an extension for shortly after the proposal lands if it can't go into the proposal itself. Maybe it'll get more feedback as people use record/tuple more in projects.

acutmore commented 5 days ago

As you say, the issue that arises here is sorting, and the issue with sorting is Symbols.

const s1 = Symbol();
const s2 = Symbol();

const t1 = #[s2];
const t2 = #[s1];

When deciding how to order t1 compared to t2, the only information available is the order these values were created. And having the order change based on which code executes first is something not everyone is comfortable with.
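This can be seen directly: symbols admit no relational comparison, so any ordering would have to fall back on something like a creation counter, which is exactly what makes the result depend on which code runs first. The counter below is an illustrative sketch, not engine behaviour:

```javascript
const s1 = Symbol();
const s2 = Symbol();

// Relational comparison of symbols throws, so there is no built-in order.
let threw = false;
try {
  s1 < s2; // TypeError: Cannot convert a Symbol value to a number
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true

// A hypothetical "creation time order" amounts to a counter:
let counter = 0;
const creationOrder = new Map();
const makeSymbol = () => {
  const s = Symbol();
  creationOrder.set(s, counter++);
  return s;
};
const a = makeSymbol();
const b = makeSymbol();
// `a` sorts before `b` only because its creating code ran first.
console.log(creationOrder.get(a) < creationOrder.get(b)); // true
```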

Sorting aside, this would add extra complexity to the proposal, so it does sound more like a follow-on area. I think the majority of initial use cases can be served by Map and Set objects. I believe there have been proposals looking at having immutable versions of these.
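For instance, a plain Map can cover the record-as-key use case in the meantime by keying on a canonical encoding of the record. `keyOf` is a hypothetical helper and assumes flat records with JSON-serializable values:

```javascript
// Deterministic string key for a flat record-like object.
const keyOf = (rec) =>
  JSON.stringify(Object.keys(rec).sort().map((k) => [k, rec[k]]));

const m = new Map();
m.set(keyOf({ a: 0 }), true);
m.set(keyOf({ b: 1 }), true);

// Two structurally equal records hit the same entry:
console.log(m.get(keyOf({ a: 0 }))); // true
console.log(m.size); // 2
```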

mhofman commented 5 days ago

When talking about an immutable Map/Set, there are two properties to consider given the semantics of Record/Tuple in this proposal: what the collection's contents may consist of, and whether two collections compare equal by content.

The first point is actually ambiguous, and slightly dependent on the second one. While we can all agree that immutable collections should not gain or lose entries, it's less clear what those entries should consist of. Should the content of the collection only accept R/T, or also other immutable collections? Would that be only on the key side, or both keys and values?

For the comparison by equality, as mentioned there is no way to specify a global order of all primitives without introducing a "creation time order" of unique symbols. The alternative is to maintain insertion ordering and exclude ordering from the equality semantics for immutable collections, which has the unpleasant consequence of having observably different values be "equal".

So my question is: what are the use cases requiring equality of immutable collections? Can we get away with immutable collections that are simply collections that don't allow entries to be added or removed? Or maybe something slightly more constrained, like immutable collections that only accept R/T and primitives as keys, and R/T, primitives, and other immutable collections as values, so that you can still build a stable equality predicate?
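A sketch of what such a stable equality predicate could look like when keys are restricted to primitives that admit a total order (unique symbols excluded). All names here are illustrative, not from the proposal:

```javascript
// Total order over ordinary primitives: compare by type tag, then by
// string form. Deliberately excludes unique symbols, which have no order.
const primitiveCompare = (a, b) => {
  const ta = typeof a, tb = typeof b;
  if (ta !== tb) return ta < tb ? -1 : 1;
  if (a === b) return 0;
  return String(a) < String(b) ? -1 : 1;
};

// Compare two Maps by content, independent of insertion order.
const entriesEqual = (mapA, mapB) => {
  if (mapA.size !== mapB.size) return false;
  const sortEntries = (m) =>
    [...m.entries()].sort(([k1], [k2]) => primitiveCompare(k1, k2));
  const ea = sortEntries(mapA), eb = sortEntries(mapB);
  return ea.every(([k, v], i) => k === eb[i][0] && v === eb[i][1]);
};

const a = new Map([["x", 1], [2, "y"]]);
const b = new Map([[2, "y"], ["x", 1]]);
console.log(entriesEqual(a, b)); // true, despite different insertion order
```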

Maxdamantus commented 4 days ago

My feeling on this suggestion itself is that it would be weird for records (or their temporary ToObject representations) to support a different set of keys (strings, symbols, and records/tuples) than the set of keys supported by every other type of value (strings and symbols), so I suspect if #{ [#[]]: 0 } is allowed, { [#[]]: 0 } should be allowed too, meaning records/tuples would be a third/fourth type of object key. I don't see this as particularly useful. The way I see it, symbols as keys are useful because they can be guaranteed not to conflict with existing keys (since, unlike strings, symbols are "unforgeable"), but I don't see a strong reason for arbitrary forgeable keys.
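For context, in today's language every non-symbol computed key is coerced to a string, which is why record/tuple keys would be a genuinely new key type rather than an extension of existing behaviour:

```javascript
// A non-symbol computed key is coerced to a string:
const key = { nested: true };
const obj = { [key]: 0 };
console.log(Object.keys(obj)); // ["[object Object]"]

// Symbols are the only non-string property keys, and they don't
// show up in Object.keys:
const sym = Symbol("id");
const obj2 = { [sym]: 0 };
console.log(sym in obj2); // true
console.log(Object.keys(obj2).length); // 0
```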

As for the ordering issue, my feeling has always been that order should be preserved even for record values, but I haven't felt strongly about it, partly because the order is ultimately only important in weird (often bad) code, and because I don't find myself needing to use symbol keys very often.

> The alternative is to maintain insertion ordering and exclude ordering from the equality semantics for immutable collections, which has the unpleasant consequence of having observably different values be "equal".

imo this would be the desirable behaviour (different orderings are "equal" per === but not the "same" per Object.is). As long as -0 and +0 are preserved but treated as "equal" within R/T/immutable collections, there will be "equal" but "different" values anyway. Similarly to above, the difference between -0 and +0 is only important in weird (often bad) code, particularly code involving division by zero. Even conversion to string or JSON treats them the same way (eg, String(-0) === "0").
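The existing -0/+0 behaviour can be checked directly in today's language:

```javascript
console.log(-0 === 0);           // true  — "equal"
console.log(Object.is(-0, 0));   // false — but not the "same"
console.log(String(-0));         // "0"   — string conversion erases the sign
console.log(JSON.stringify(-0)); // "0"   — so does JSON
console.log(1 / -0);             // -Infinity — the difference only shows in division
```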

If other people have a good reason for normalizing the order, it probably doesn't affect me too much, though I don't see a very good reason for it. An interning-based polyfill would likely normalize the order out of necessity (as a sham rather than a shim), but a native implementation wouldn't need to. This also applies to handling the -0 and +0 distinction, where the correct behaviour can't be fully emulated.

I remember reading through some old issues where it seemed like records preserved their order, though I think at that time they had different equality semantics (eg, #{} === #{} but !Object.is(#{}, #{})).