Open mhofman opened 10 months ago
I made a mistake in my original reading of the proxy steps (I missed the IsCompatiblePropertyDescriptor steps). They do enforce that own property descriptors are stable, and the checks are actually more stringent than the object invariants. I updated the description to reflect this.
- Should an object that satisfies all the transitive frozen conditions pass an `isHardened()` predicate?
  - If no, is it ok if an object is considered hardened only if `harden()` was applied on it (directly or indirectly)?
  - If yes, would it be acceptable for the object to gain the hardened state as a side effect of such a check?
IMO, no. With "yes", freezing an almost unrelated object Y elsewhere can change the semantics of frozen object X wrt the proposed semantic changes. It also makes these semantic changes much more expensive when many objects are already frozen. In fact, for a large graph that is almost frozen, you'd either have to do the transitive check each time, or cache the failure of the check, in which case you'd have an unpleasant cache invalidation problem.
With "no", nothing is significantly more expensive than the status quo. Semantic changes are more local, understandable, and intentional. But OTOH it perhaps makes the proxy questions harder.
- Should the target of a proxy be part of the set of objects being put in a hardened state?
  - By the proxy invariants, the target has to be frozen if the proxy is frozen, and its own property descriptors have to be the same as the ones returned by the proxy, both for data values and for accessors.
  - Note: while the object invariants do not require the stability of the accessor values in the own property descriptor for non-configurable properties, the own-property built-ins do not allow those to change over time. In particular, the proxy steps enforce the stability through the target object.
First, I agree with your subtext here: We should propose to add to the object invariants the stability of non-configurable accessors. I suspect that no implementation is currently in violation of that rule, so it might be an easy sell.
Given that the answer to the previous question is "no" (my preference), then `isHardened` by itself must not accidentally cause hardening. The only way to support `isHardened` transparently through a membrane is if the trap is invoked before the (shadow) target's harden status is checked. That way, if the (real) target was already hardened, this gives the trap the chance to harden the (shadow) target before the proxy invariants check that the (shadow) target is hardened.
- Should a proxy be able to deny the application of the hardened state if it allowed a frozen state?
  - If the target is itself hardened (see above), this would be the only way the proxy can deny the hardening of the target.
  - Given the proxy's invariant enforcement, a hardened proxy can only answer own-property-related traps according to the target, throw, or cause the proxy steps to throw.
Given my suggestion above that `isHardened` must cause a trap in the handler, so that the handler can engage in the shadow target technique to harden the shadow target if necessary, I see only two options:

- Always invoke that trap on `isHardened` on a proxy, in which case it can throw early, preventing hardening.

> How is a relaxation of hardened semantics described ...

(I have not yet absorbed the new material there.)
> we would like to make the hardened integrity level an explicitly cached state of objects, and disallow exotic proxy objects from observing when that state is checked.
Why is it important that proxy objects cannot observe that they are being tested on their hardened status? Proxies can intercept `isExtensible` and `isSealed`, so it makes sense for them to also intercept a hypothetical `isHardened` check.
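For reference, today's `Object.isFrozen` already behaves this way on proxies; a small demonstration:

```javascript
// Object.isFrozen (via the spec's TestIntegrityLevel) consults the proxy's
// traps rather than any cached state, so the check is observable.
const calls = [];
const proxy = new Proxy(Object.freeze({ a: 1 }), {
  isExtensible(target) {
    calls.push('isExtensible');
    return Reflect.isExtensible(target);
  },
  getOwnPropertyDescriptor(target, key) {
    calls.push(`getOwnPropertyDescriptor ${String(key)}`);
    return Reflect.getOwnPropertyDescriptor(target, key);
  },
});

const frozen = Object.isFrozen(proxy); // true, but only after running the traps
```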
In line with @erights's first option above (https://github.com/endojs/endo/issues/1912#issuecomment-1872616124), I think the consistent design is to have `isHardened` always trap on proxies.
If "hardened" is presented as a new integrity level, then it makes sense to only let `isHardened(obj)` return `true` iff `harden(obj)` was called successfully (for non-exotic objects `obj`).

You can draw parallels with the other integrity levels if you squint hard enough ;-) For instance: `isExtensible(obj)` also only returns `false` iff `preventExtensions(obj)` was called successfully, even if a program would otherwise treat an extensible `obj` as if it were non-extensible (by never adding a property to it during its lifetime).
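That parallel can be seen directly in today's behavior; a minimal sketch:

```javascript
// An object that is never mutated is still reported extensible until
// preventExtensions is explicitly and successfully called on it.
const obj = {};

// The program never adds a property, yet the object remains extensible.
const before = Object.isExtensible(obj); // true

Object.preventExtensions(obj);
const after = Object.isExtensible(obj); // false
```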
> Why is it important that proxy objects cannot observe that they are being tested on their hardened status?
As I mentioned, currently a sealed / frozen check can only be triggered directly by a predicate the user code explicitly invokes.
I think it would be surprising if an operation like stamping a private field would trigger a trap. Currently that does not trigger any user code and it would likely result in security bugs in JS engines if changed.
> - Always invoke that trap on `isHardened` on a proxy, in which case it can throw early, preventing hardening.
That's the thing though, throwing on a trap that can supposedly only answer according to its target is surprising, as a non-exotic target would not throw.
Since I'm hearing the revoked proxy argument coming, this is another thing that bothers me. A hardened plain object that has a property whose value is a revoked (previously hardened) proxy, should it still be considered hardened? What conceptual property do we want harden to convey if a hardened object can stop being inspectable all of a sudden?
> As I mentioned, currently a sealed / frozen check can only be triggered directly by a predicate the user code explicitly invokes.
Are we certain about that? A quick look at the spec shows several places where the spec calls the IsExtensible operation beyond the Object.isExtensible user code: https://tc39.es/ecma262/#sec-isextensible-o (even as part of variable binding semantics it seems)
If you treat hardening as 'deep-freezing' an object, then just like a deep-frozen object can still refer to an object with stateful behavior that can change over time, I see nothing exceptional about a deep-frozen/hardened object holding onto a ref to a revoked proxy that will throw when accessed. (isn't this analogous to hardened objects with accessor properties that throw?)
It is only in contexts where user code can ensure that a graph of objects is both non-exotic and deep-frozen that one can make additional assumptions about the behavior (e.g. when the set of objects was created through unmarshalling or some other vetted process that is guaranteed not to generate exotics).
> As I mentioned, currently a sealed / frozen check can only be triggered directly by a predicate the user code explicitly invokes.
>
> Are we certain about that? A quick look at the spec shows several places where the spec calls the IsExtensible operation beyond the Object.isExtensible user code: https://tc39.es/ecma262/#sec-isextensible-o (even as part of variable binding semantics it seems)
`IsExtensible` is indeed checked by some spec steps, but `TestIntegrityLevel` is not.
> If you treat hardening as 'deep-freezing' an object, then just like a deep-frozen object can still refer to an object with stateful behavior that can change over time, I see nothing exceptional about a deep-frozen/hardened object holding onto a ref to a revoked proxy that will throw when accessed.
A freeze guarantees that the object's own properties will not change. Harden conceptually extends that to the properties' values.

That said, a frozen proxy can throw on own property lookups, and I suppose that is similar.
> (isn't this analogous to hardened objects with accessor properties that throw?)
> It is only in contexts where user code can ensure that a graph of objects is both non-exotic and deep-frozen that one can make additional assumptions about the behavior (e.g. when the set of objects was created through unmarshalling or some other vetted process that is guaranteed not to generate exotics).
Defensive code worried about reentrancy can avoid invoking accessors, but it cannot avoid proxy traps. I suppose I was trying to avoid more cases where proxy traps can surprisingly trigger, but maybe I have to resign myself that hardening will not provide us that. I am still considering a "make inert" operation that would effectively replace the proxy with its target, disabling all traps (maybe except call/construct, and maybe get/has/set/delete for non-own properties).
> Defensive code worried about reentrancy can avoid invoking accessors, but it cannot avoid proxy traps.
Presumably that would require such defensive code to work at the level of inspecting and using property descriptors rather than direct property access. When going through such pains one can, with similar effort, avoid proxy traps by querying the object for its own property descriptors ahead of time and then interacting with the object only through the resulting descriptors. In fact I just reviewed the implementation of `harden()` in Endo and it looks like this is one example of defensive code that does just this when traversing the objects to harden.
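A minimal sketch of that descriptor-snapshot technique (a hypothetical helper, not the actual Endo implementation):

```javascript
// Hypothetical sketch: snapshot an object's own data properties once, then
// interact only with the snapshot, so later accessor calls or proxy traps on
// the original object can no longer interleave with the defensive code.
function snapshotOwnData(obj) {
  const snapshot = new Map();
  // getOwnPropertyDescriptors triggers any traps exactly once, up front.
  const descs = Object.getOwnPropertyDescriptors(obj);
  for (const key of Reflect.ownKeys(descs)) {
    const desc = descs[key];
    if ('value' in desc) {
      snapshot.set(key, desc.value); // data property: capture the value
    }
    // accessor properties are deliberately skipped, never invoked
  }
  return snapshot;
}

const snap = snapshotOwnData({ a: 1, get b() { throw new Error('reentrancy!'); } });
```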
That said, if you're arguing for `harden()` to be a genuine new integrity level with stronger properties than can be attained from transitive `freeze()`-ing, then of course it can be argued that the semantics for proxies should not necessarily follow those of isExtensible/sealed/frozen.
If the desired end-goal is to rid an object graph of all exotic behavior, keep in mind that this would also break transparent interposition with membranes.
> I am still considering a "make inert" operation that would effectively replace the proxy with its target, disabling all traps
I'm not sure this is a good idea: replacing a proxy with its target would effectively "dissolve" any defensive membranes wrapped around the target, and potentially leak access to otherwise encapsulated state. I guess what you really want is to create a kind of "safe copy" (essentially, query the object for its own property descriptors again and reconstruct a plain object from this data). This would satisfy both the needs of the defensive code (which wants to protect against exotic proxy objects) and any membrane code (which wants to protect target objects against untrusted clients).
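A minimal sketch of such a "safe copy" (hypothetical helper, assuming only own-property fidelity is needed):

```javascript
// Hypothetical "safe copy": rebuild a plain object from another object's own
// property descriptors, detaching it from any proxy handler behind it.
function safeCopy(obj) {
  // For a proxy, this single query is the last time its traps can run.
  const descs = Object.getOwnPropertyDescriptors(obj);
  const copy = Object.defineProperties(Object.create(null), descs);
  return Object.freeze(copy);
}

// Even when the source is a proxy, the copy is a plain, frozen object.
const copy = safeCopy(new Proxy({ x: 1 }, {}));
```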
We discussed this further and arrived at the following design:
Responding to a few open questions here:
> Does a proxy whose target is already stabilized get its traps triggered?
Stabilization only stabilizes the object's "own" properties, but some proxy traps (most notably get/set/has) virtualize the entire prototype chain. I think we should apply the principle: "what operations still make sense for a proxy to a frozen object to intercept?"
> Should the "isStable" trap of a proxy trigger if the proxy is not (yet) stable, and we attempt to stamp a private field on it?

I think yes, in order to preserve membrane transparency (if a membrane proxy wraps an object that dynamically transitioned from being non-hardened to being hardened, then in order for the proxy to "deny" the private field stamping, it first needs to be able to intercept "isStable" to update its shadow target, before answering `true`).
A final observation: while I like clear terminology during the design phase to distinguish current "freezing" from the new "stabilizing", let's be mindful that the committee (and the JS community at large) may not like the proliferation of integrity-level terminology (We already have non-extensible, sealed, frozen. Now add 'stable'. There is no natural analogy to hint that "stable" is stronger than "frozen".) Mathieu's earlier API design of passing an optional flag to `Object.freeze` might make this more palatable (i.e. `Object.freeze(obj, { stabilize: true })`). But we can park that discussion until later.
> Does a proxy whose target is already stabilized get its traps triggered?
>
> Stabilization only stabilizes the object's "own" properties, but some proxy traps (most notably get/set/has) virtualize the entire prototype chain. I think we should apply the principle: "what operations still make sense for a proxy to a frozen object to intercept?"
I am not sure I follow. From our discussion, we want the stabilize traps to not trigger if the proxy is stable. The question is whether to consider the proxy stable just based on its target, or based on a previous answer to its traps (which was verified against the target). I am wondering about the use cases where the target would be marked stable without the proxy's knowledge, and whether the proxy would expect to have a chance to discover it. All other traps would remain triggered in this proposal, so I'm not sure how relevant the prototype chain reference is.
> I think yes, in order to preserve membrane transparency
Yeah that's what I expected. That also means we couldn't remove the risk of user code running during private fields stamping, which we've identified as a potential implementation security hazard.
> may not like the proliferation of integrity-level terminology
I agree 100%, and likely would still want to propose this as a freeze option to avoid intrinsic proliferation as well (we'd have the new Reflect intrinsics anyway).
> I am not sure I follow. From our discussion, we want the stabilize traps to not trigger if the proxy is stable. The question is whether to consider the proxy stable just based on its target, or based on a previous answer to its traps (which was verified against the target). I am wondering about the use cases where the target would be marked stable without the proxy's knowledge, and whether the proxy would expect to have a chance to discover it. All other traps would remain triggered in this proposal, so I'm not sure how relevant is the prototype chain reference.
My bad, I thought you meant that a proxy whose target is stable would no longer be able to intercept certain operations.
For the specific case of membranes that use a shadow target, if the shadow target is already stable at proxy-creation time, then indeed it would not be strictly necessary to trigger the `stabilize` trap. One would be able to infer that the proxy is stable based on its target already being stable. There would be nothing left for the proxy handler to sync up if the shadow is already stable.
The only situation that a membrane proxy must be able to trap is when it is born wrapping a non-stabilized object which then later gets stabilized during its lifetime. In that setup, the membrane proxy would be initialized with a non-stabilized shadow target, and so when `stabilize` or `isStable` is called on the proxy, we must trap, giving the membrane a chance to sync the state of the shadow target.
I'm not sure I understand. Perhaps there is still some confusion due to the ambiguity of the "target" terminology. I'll speak below purely in terms of "shadow" and "emulated object". The proxy itself only knows about the shadow (what is called "target" in the spec). A handler implementing a membrane also knows what object it is emulating (what is unfortunately called the "actual target" in the membrane literature).
Let's start with extensibility, as our precedent for an explicit and shallow integrity level. In fact we decided that both `preventExtensions` and `isExtensible` always trap, but let's examine whether we needed to do that. If the answer is "no", then let's see if that reasoning translates into `stabilize` and `isStable`. I believe it does.
First, there is no distinction between "the proxy is non-extensible" and "the proxy's shadow is non-extensible". As with many other state queries about proxies, the shadow is the bookkeeping mechanism for tracking the actual state. The proxy has extremely little state of its own beyond a pointer to its shadow and a pointer to its handler. Even for a revocable proxy, the state change can be represented by nulling these two pointers. https://tc39.es/ecma262/multipage/ordinary-and-exotic-objects-behaviours.html#sec-proxy-object-internal-methods-and-internal-slots .
If the shadow is extensible, then both `preventExtensions` and `isExtensible` must trap, so that a handler implementing a membrane can consult its emulated object at that time and update the shadow -- potentially making it non-extensible -- before answering.
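A simplified sketch of that handler pattern (only the extensibility traps, no property syncing; `makeMembraneProxy` is a name invented for this illustration):

```javascript
// Membrane-style sketch: the shadow is the proxy's spec-level target, while
// `emulated` is the real object the membrane interposes on.
function makeMembraneProxy(emulated) {
  const shadow = {}; // bookkeeping object the proxy invariants run against
  const handler = {
    isExtensible(shadowTarget) {
      // Consult the emulated object and sync the shadow *before* answering,
      // so the answer satisfies the proxy invariants against the shadow.
      if (!Reflect.isExtensible(emulated)) {
        Reflect.preventExtensions(shadowTarget);
      }
      return Reflect.isExtensible(shadowTarget);
    },
    preventExtensions(shadowTarget) {
      // Propagate the integrity change to both the emulated object and shadow.
      Reflect.preventExtensions(emulated);
      Reflect.preventExtensions(shadowTarget);
      return true;
    },
  };
  return new Proxy(shadow, handler);
}
```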
If the shadow is already non-extensible, then I propose we would have lost no functionality we actually care about if `preventExtensions` and `isExtensible` on the proxy did not trap, but answered immediately, without interleaving user code, that the proxy is non-extensible.
An important edge case is a revoked proxy. Whether or not the shadow was non-extensible before the proxy was revoked, once it is revoked it has no memory of which it was, since it has no other state. On a revoked proxy, both `preventExtensions` and `isExtensible` immediately throw, without interleaving with any user code and without any other effect. This is strangely consistent with the constraint that non-extensibility is monotonic, because the throw prevents a revoked proxy from claiming either that it is extensible or that it is non-extensible. Its extensibility becomes unobservable, and so does not need to be further represented.
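Today's revoked-proxy behavior can be checked directly:

```javascript
// Once a proxy is revoked, extensibility queries throw instead of answering,
// so whether it was extensible before revocation becomes unobservable.
const { proxy, revoke } = Proxy.revocable({}, {});
Object.preventExtensions(proxy);
revoke();

let threw = false;
try {
  Object.isExtensible(proxy);
} catch (err) {
  threw = true; // TypeError: cannot query a revoked proxy
}
```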
I propose that `stabilize` and `isStable` follow exactly these hypothetical decisions we could have made wrt extensibility.
- On a revoked proxy, `stabilize` and `isStable` throw without any user code and without any other effect
- If the shadow is not stable, then both `stabilize` and `isStable` trap
- If the shadow is already stable, `stabilize` and `isStable` on the proxy do not trap, but immediately claim that the proxy is stable

As an example of how beautiful it is to use the shadow for the proxy-state bookkeeping, we do not need any additional checks in the proxy mechanism to ensure that the handler can only claim the proxy is stable if it would also claim that the proxy is non-extensible, and would claim that all own properties are non-configurable and non-writable. These checks are already enforced on any attempt to stabilize the shadow, which is all the mechanism we need to enforce these invariants on the proxy.
Does this agree with what everyone was saying above?
For `isHardened` as an emergent deep integrity level, this does raise the difficult issue of what happens if there is a stable-but-revocable proxy in the reachable subgraph. The only resolution I see is that such revocability causes the containing graph to fail the emergent isHardened test. But this is unpleasant.
> Does this agree with what everyone was saying above?
This follows my second option. However there is a subtlety with checking stability
> - if the shadow is not stable, then both `stabilize` and `isStable` trap
> - if the shadow is already stable, `stabilize` and `isStable` on the proxy do not trap, but immediately claim that the proxy is stable
The subtlety is when the shadow is also a proxy. If we trap, we should trap the outermost proxy first, and leave the trap of the target to be triggered either by the handler or by the invariant check. Aka this would be an `IsProxyPiercingStable` check, which returns true by following the chain of targets through any proxies encountered.
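A rough sketch of what such a piercing check might look like. The `getProxyTarget` and `isExplicitlyStable` hooks are hypothetical host-level operations that do not exist in today's JavaScript; they are injected as parameters here purely so the sketch can run:

```javascript
// Hypothetical proxy-piercing stability check: follow the chain of proxy
// targets down to the innermost non-proxy object and ask whether *it* has
// been explicitly marked stable.
function isProxyPiercingStable(obj, { getProxyTarget, isExplicitlyStable }) {
  let current = obj;
  let target;
  // Pierce through nested proxies; getProxyTarget returns undefined for
  // non-proxy objects, ending the walk.
  while ((target = getProxyTarget(current)) !== undefined) {
    current = target;
  }
  return isExplicitlyStable(current);
}
```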
> For `isHardened` as an emergent deep integrity level, this does raise the difficult issue of what happens if there is a stable-but-revocable proxy in the reachable subgraph. The only resolution I see is that such revocability causes the containing graph to fail the emergent isHardened test. But this is unpleasant.
Yeah, for other reasons I am not yet satisfied with harden being an emergent property. But I also can't see how an explicit atomic integrity level on a set of objects could be reconciled with membrane transparency.
> Does this agree with what everyone was saying above?
@erights Yes it does.
As for revokable proxies, there's another option we might consider:
Going back to non-extensibility first: what if we had specified originally that `Object.isExtensible(revokedProxy)` would just return `false` rather than throw? Would we have lost anything by specifying the behavior as such? Returning `false` is always safe given that: a) it obviously is not possible to add more properties to a revoked proxy and b) it respects monotonicity (regardless of whether the proxy was extensible or non-extensible before revocation, stating that it is non-extensible after revocation is a valid state transition)
Similarly, for `Object.isHardened(revokedProxy)` do we lose anything by simply returning `true` rather than throwing? Analogously: a) there is nothing you can observe about a revoked proxy that would contradict it being a hardened object and b) it is always a valid terminal state regardless of any earlier observed answers.
Returning a boolean would probably be a more pleasant behavior not only for developer-facing APIs, but would also keep a revoked proxy from "un-hardening" an otherwise hardened subgraph.
(similarly, `Object.stabilize(revokedProxy)` could just be a no-op under this alternative design)
> But I also can't reconcile how an explicit atomic integrity level on a set of objects could be reconciled with membrane transparency.
I had an idea based on shared context and hooks as part of the hardening process. Drafted an (untested) pseudo implementation here: https://gist.github.com/mhofman/3c85b5d82f7ec9245336ddd0e38da870
Regarding revocability, I kept the throwing behavior around for now. As an explicit integrity level, there is no regression once hardened. It simply means a hardened state does not guarantee non-throwing behavior on a walk of the object, but in the face of proxies, it never did.
Edit: I just realized, the hook system may provide a convoluted way to test whether an object is in the proxy target chain of another object. Not sure if this would be considered a problem for proxy transparency.
Mark and I discussed, and we reached the following conclusions:

- `stabilize` is not trapped, and backwards compatibility with existing revocable membranes is maintained, especially when compared to `Reflect` as the base implementation for traps.

Mark remarked that with these stable semantics, an `isHardened`, or more interestingly for us, a `passStyleOf` returning "passable" can only guarantee that future traversal will not trigger any proxy traps. In particular it cannot guarantee that it didn't trigger any proxy traps while checking the pass-style, even if that pass-style is found to be "passable" (a `passStyleOf` invocation could be a source of reentrancy).
We also discussed the possibility of a non prototype recursive harden (https://github.com/endojs/endo/issues/1686), and how that meshes with an emergent harden integrity level. To summarize the problem:

- We want `harden` usable without lockdown, and possibly to propose it independently
- We want `harden` to work consistently both pre and post lockdown: if it works pre-lockdown, it must also work post-lockdown (if it fails post-lockdown, it must fail pre-lockdown); if it works post-lockdown, it must also work pre-lockdown

It transpires that the pre-hardened prototype check would also apply to harden pre-lockdown, with the exemption of intrinsics used as prototypes, which would be hardened during lockdown.
The justification is that a user-land prototype can always be explicitly hardened before or during an instance hardening, even if that prototype was defined by a library unaware of harden. An explicit hardening of prototypes before instances is good code hygiene, and the current behavior of automatically hardening prototypes has been masking cases of prototypes mistakenly not hardened.
The problem is with the definition of intrinsics, and how to virtualize that (aka how to support shims). A shim (or a harden aware shim adapter) would need to "register the shim's intrinsics" for them to be exempted pre-lockdown. This effectively extends the scope of the "Get Intrinsic" proposal to be a mutable registry, which is worrisome. All registered intrinsics would be hardened at lockdown, which does provide a basis for integrating trusted shims with lockdown.
For now in our non standardized implementation of harden, we can skip the virtualization concern, and require shims (or their adapter) to harden any of their objects used as prototypes.
During our most recent SES call, we agreed that if going for a Stabilize that makes proxies inert, we cannot break existing proxy usage that expects to remain in a position to observe or cause side effects. One option is to require an explicit opt-in through the presence of a custom `stabilize` trap (aka it not being absent or the same value as `%Reflect.stabilize%`). Without a custom `stabilize` trap, unlike existing traps, it would act as if a trap returning false had been provided.
There is still a concern of erosion of proxy transparency with this capability. The original sin is that the power to stabilize (or even freeze / re-configure) an object is an implicit power any holder of the object reference possesses, instead of the object creator.
Good summary Mathieu.
I’ll take the opportunity to document some of the arguments in more detail, and also propose a potentially safer and less problematic design for adding the ability to make proxies inert, below.
Kris made an important observation when he indicated that `harden()` was initially conceived of and used as a way for the creator of an abstraction to harden their own ('trusted') objects to defend against unintended tampering by clients.
But with the new proposal of having `harden()` make proxies inert (to defend against reentrancy hazards), the use case is the exact opposite: the caller aims to deliberately lock down a set of untrusted objects for their own defense.
We should consider whether these opposite uses are both equally served by the same function and the same behavior w.r.t. making proxies inert.
I would argue that in the first case (hardening your own objects), chances are you want any proxies to stay active - because they’re usually there for a reason! (a simple/benign case could be for logging/tracing purposes).
As mentioned, from the proxy creator's point of view, allowing `harden()` to make the proxy inert is a potential attack vector to switch off critical interposition logic.
With a stabilize trap having to give explicit consent, the proxy creator does retain some control.
But there is a wrinkle for the simple case of revocation, which in the current API design is supported as a built-in feature by calling `{proxy, revoke} = Proxy.revocable(target, handler)`. This gives the caller a `revoke()` capability that is guaranteed to both shut down access to `target` and to drop the reference (potentially enabling GC).
In today's API, the authority to revoke is not dependent on the trap logic of the `handler`.
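For illustration, a minimal sketch of how revocation works independently of the handler today:

```javascript
// Today's API: the authority to revoke comes from Proxy.revocable itself,
// not from anything in the handler's trap logic.
const { proxy, revoke } = Proxy.revocable({ secret: 42 }, {});
const before = proxy.secret; // 42

revoke(); // the handler cannot intercept or deny this

let revoked = false;
try {
  proxy.secret;
} catch (err) {
  revoked = true; // every operation on a revoked proxy throws a TypeError
}
```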
With the proposed design, a party calling `Object.stabilize(proxy)` with a `handler` that opts in will deny the caller's capability to revoke.
We discussed this in the meeting, where the argument was made that from a software-engineering point of view the party calling `Proxy.revocable` is usually aligned with the party defining `handler`. After giving it more thought, I find this to be a shaky assumption: it probably holds true in most cases, but may not hold true in all cases precisely because revocation is logically separated from handler trap logic.
Here's an alternative API that makes "consent" of a proxy creator to make the proxy inert more explicit:

Introduce a new Proxy constructor `Proxy.stabilizable(target, handler)`. This creates a proxy that can be made inert through `Object.stabilize(proxy)`, assuming that `Object.stabilize(proxy)` will first call `Object.freeze(proxy)` and this call succeeds.
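Since neither `Proxy.stabilizable` nor `Object.stabilize` exists today, here is a userland simulation of the intended semantics (names and behavior are assumptions based on the strawman above, not a spec-accurate implementation):

```javascript
// Userland sketch: a "stabilizable" proxy whose handler can be detached.
// After stabilization the proxy is frozen and every operation falls through
// to Reflect, i.e. the proxy behaves like its (frozen) target.
function makeStabilizable(target, handler) {
  let active = handler;
  // A switching handler: forward each trap to the current handler while
  // active, or to the corresponding Reflect operation once detached.
  const switching = new Proxy({}, {
    get: (_, trap) => (...args) => (active?.[trap] ?? Reflect[trap])(...args),
  });
  const proxy = new Proxy(target, switching);
  const stabilize = () => {
    Object.freeze(proxy); // stabilization implies freezing first
    active = null;        // detach: traps no longer run, the proxy is inert
  };
  return { proxy, stabilize };
}
```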
This design has a number of benefits compared to a new `stabilize` trap:

Note 1: as stabilization implies freezing, a stabilizable proxy would still be able to reject freezing through the `freeze` trap and in doing so may still prevent stabilization.
Note 2: I dislike the name `stabilizable`, but it's a strawman that is consistent with the current `Proxy.revocable` naming.
What is the Problem Being Solved?
Currently `harden` is defined as transitively freezing the object, its properties (whether data values or accessor functions), and each of their prototypes. This can be considered as a new integrity level. However we would like to attach new behavior to this integrity level:
Currently JavaScript supports 2 integrity levels: Sealed and Frozen. The specification defines those as non-extensible objects (non-extensibility is specified as a flag on the object) plus checks on the own property descriptors (non-configurable only for sealed, and also non-writable for data properties of frozen objects).
The specification does not itself check for the integrity level state of an object outside of the 2 intrinsics `Object.isSealed` and `Object.isFrozen`. While these checks can be memoized / cached for regular objects, they are observable by Proxy exotic objects (the traps are called during the check).

Description of the Design
To make harden efficient, and to allow the spec to check an object for its hardened state as a side effect of other operations, we would like to make the hardened integrity level an explicitly cached state of objects, and disallow exotic proxy objects from observing when that state is checked.
This state can only be applied atomically to a set of objects once they all have been transitively frozen.
Open questions
- Should an object that satisfies all the transitive frozen conditions pass an `isHardened()` predicate?
- Is it ok if an object is considered hardened only if `harden()` was applied on it (directly or indirectly)?
- An object could pass an `isHardened()` check post-lockdown, but not before. Reverting integrity levels during lockdown does not seem appropriate.
- One option is for `harden()` to build a weak list of the prototypes encountered, and have `lockdown()` fail if not all prototypes have been hardened by then, to ensure that lockdown does not roll back the claimed integrity level. More details in https://github.com/endojs/endo/issues/1686#issuecomment-1871319421