Closed — bakkot closed this issue 2 years ago
Yes, I intend to bring up this precise possibility during the update.
As mentioned during the plenary, I'm wondering if a well-known symbol is actually not acceptable to allow usage in `toJSON`. The main issue I see around this is that I don't know what the value passed to the reviver would be then: `toJSON`, as for any other return value.

There was a preference for use of a function in the TC39 plenary.
@mhofman See #19 for reusability. As for the mechanism, if we do choose a function, I see at least two possibilities:

1. `raw` returns an object with a property keyed by a symbol that is privileged by the `JSON.parse` algorithm.
2. `raw` returns an exotic object with an internal slot that is privileged by the `JSON.parse` algorithm.

There seemed to be a slight preference for option 1.
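A userland sketch may help make option 1 concrete. All names below (`RAW`, `raw`, `toyStringify`) are assumptions for illustration, not the proposal's API, and the toy serializer stands in for the privileged JSON algorithm:

```javascript
// Option 1 in userland: raw output is marked with a symbol-keyed property
// that a privileged serializer recognizes.
const RAW = Symbol("raw");               // hypothetical privileged symbol
const raw = (text) => ({ [RAW]: text }); // hypothetical marking function

function toyStringify(value) {
  if (typeof value === "object" && value !== null && RAW in value) {
    return value[RAW]; // splice the marked raw text directly into the output
  }
  return JSON.stringify(value);
}

toyStringify(raw("9007199254740993")); // → '9007199254740993' (no precision loss)
toyStringify({ x: 1 });                // → '{"x":1}'
```

Option 2 differs only in the marking mechanism: instead of a symbol-keyed property, the mark would live in an internal slot, which userland code could approximate with a `WeakSet` of produced objects.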
@michaelficarra expressed that preference, but I prefer option 2, and I think I convinced him of my position on Matrix.
+1 from me for 2, though I remain unconvinced by the per-`JSON.stringify` behavior. Specifically, I remain unconvinced that there's a real problem with signaling intent that requires re-signaling explicitly per `JSON.stringify` invocation. There's the added con that a fresh kind of opaque object per `JSON.stringify` invocation means keeping an extra internal slot (an invocation index) per such exotic object.
I raised this point on Matrix, but want to ensure it's captured here as well: if we have a mechanism to define a custom serialization result (be it a per-invocation function, symbol, or a global `JSON.rawString`), we need a way to test whether a value is a custom serialization result. Without a way to test, it becomes much harder to write a custom serializer, as there's no easy mechanism to determine whether an object you receive is a normal object or an exotic object/special-symbol-keyed object:
```js
function serializer(value) {
  if (typeof value === "object" && value !== null) {
    if (Array.isArray(value)) { /* ... */ }
    // naive serializer: has no way to test whether `value` is a custom serialization result
    const result = {};
    for (const key of Object.keys(value)) { /* ... */ }
    return result; // uh oh, we've lost the custom serialization result
  }
  // ...
}
```
I suggested on Matrix that we consider a `JSON.isRawString` function to test an object. @bakkot suggested such a test could be performed in user code:
```js
// assumes the custom serialization result is an exotic object
// with a null prototype, no members, and an internal slot
const isRawString = o =>
  typeof o === "object" &&
  o !== null &&
  Object.getPrototypeOf(o) === null && // do not serialize objects w/prototypes
  Object.getOwnPropertyNames(o).length === 0 && // do not serialize arbitrary string-keyed properties
  Object.getOwnPropertySymbols(o).length === 0 && // do not serialize arbitrary symbol-keyed properties
  !['{', '['].includes(JSON.stringify(o)[0]);
```
My concern is that such a test is fairly complex and easy to get wrong, and having the specification include a helper would be more reliable (since we can simply check if the value is an Object that has the required internal slot).
In https://github.com/tc39/proposal-json-parse-with-source/issues/12#issuecomment-704441889, it's proposed that replacer functions would be provided a unique-per-invocation `rawTag` symbol which would be used to mark raw output.

I like the design goals, but the design itself is somewhat clunky to use. Can I propose instead providing a per-invocation function which would perform the marking?
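The two call shapes can be sketched side by side in userland. All of these names (`rawTag`, `raw`, the third replacer argument) are assumptions standing in for the proposed APIs, and `toyStringify` is a toy serializer, not the real `JSON.stringify`:

```javascript
function toyStringify(value, replacer) {
  const rawTag = Symbol("rawTag");            // unique per invocation (the #12 shape)
  const raw = (text) => ({ [rawTag]: text }); // the per-invocation marking function
  const go = (key, val) => {
    const out = replacer(key, val, { rawTag, raw });
    if (typeof out === "object" && out !== null && !Array.isArray(out)) {
      if (rawTag in out) return out[rawTag];  // splice raw text verbatim
      const parts = Object.keys(out).map(
        (k) => `${JSON.stringify(k)}:${go(k, out[k])}`
      );
      return `{${parts.join(",")}}`;
    }
    return JSON.stringify(out);
  };
  return go("", value);
}

// The #12 shape: build the marked object by hand from the rawTag symbol.
toyStringify({ x: 5n }, (key, val, { rawTag }) =>
  typeof val === "bigint" ? { [rawTag]: val.toString() } : val);
// The shape proposed here: call the per-invocation raw() function instead.
toyStringify({ x: 5n }, (key, val, { raw }) =>
  typeof val === "bigint" ? raw(val.toString()) : val);
// both return '{"x":5}'
```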
I think that ends up being a lot nicer to use while accomplishing the same goals. It's also more in line with what I've seen in the ecosystem when the problem of marking particular values arises in other contexts, e.g. the `tag` function in this library.

Under the hood I'm fine if the implementation of `raw` is to return an object with a particular unique-per-invocation symbol-named property, though it's probably nicer to have it return some new opaque object (and have `JSON.stringify` throw if it encounters such an opaque object produced by something other than the current invocation's `raw`).