hober closed this pull request 4 years ago
Cleaned things up a bit, caught a bug, and adopted Infra's implementation-defined. Please take another look.
Thanks for adding a global state to represent the access grants. Now that we have a concept of a map storing the storage access flags, does it make sense to just integrate with permissions that are keyed to the top-level and embedder origins, rather than introduce a storage access map that's unique to this spec?
I could see the [=storage access map=] being a permission for Storage Access set with the [=partitioned storage key=] properties, and the state "granted" or "denied" representing a success/failure of the API. The presence of the permissions "prompt" state, and the concept of potentially prompting the user for access if no implicit grant is made, may also tie nicely into this.
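The integration being suggested can be sketched roughly as follows. This is a hypothetical model, not spec text: the store, class names, and origins are all illustrative, and the only ideas taken from the comment above are the (top-level origin, embedder origin) keying and the "granted"/"denied"/"prompt" states from the Permissions API.

```python
# Hypothetical sketch: reuse a permissions-style store keyed by the
# (top-level origin, embedder origin) pair instead of a spec-specific
# storage access map. All names here are illustrative.
from enum import Enum

class PermissionState(Enum):
    GRANTED = "granted"   # the API call succeeded
    DENIED = "denied"     # the API call failed
    PROMPT = "prompt"     # no decision yet; the UA may prompt the user

class PermissionStore:
    def __init__(self):
        self._entries = {}

    def set(self, top_level_origin, embedder_origin, state):
        self._entries[(top_level_origin, embedder_origin)] = state

    def query(self, top_level_origin, embedder_origin):
        # Default to "prompt", mirroring the Permissions API default.
        return self._entries.get(
            (top_level_origin, embedder_origin), PermissionState.PROMPT)

store = PermissionStore()
store.set("https://news.example", "https://social.example",
          PermissionState.GRANTED)
```

Under this framing, a successful requestStorageAccess() call would just set the permission to "granted" for that key, and an unknown key naturally falls back to "prompt".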
@annevk is it possible for two instances of a top-level origin to be in separate agent clusters? This part of the spec change had me curious about the ability to implement:
When an [=agent cluster=] is created, its [=agent cluster/storage access map=] is initialized with a [=map/clone=] of the [=global storage access map=].
Taking cookies being set or retrieved alongside a network request, there would be context as to the [=partitioned storage key=], and the Storage Partition may have an up-to-date copy of the [=global storage access map=]. But it may be problematic to have the Storage Partition know of multiple point-in-time copies of the [=global storage access map=], if two instances of a [=partitioned storage key=] may belong to two separate agent clusters and potentially require semantically different branches of the [=global storage access map=].
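The quoted clone-on-creation step, and why it allows the divergence described above, can be modeled in a few lines. This is an illustrative sketch only (dict-as-map, made-up origins), not an implementation of the spec text:

```python
# Model of the quoted step: each agent cluster takes a [=map/clone=]
# (a point-in-time copy) of the global storage access map when it is
# created, so later changes to the global map do not affect clusters
# that already exist.
global_storage_access_map = {
    ("https://a.example", "https://b.example"): False,
}

class AgentCluster:
    def __init__(self):
        # Clone: an independent copy, not a shared reference.
        self.storage_access_map = dict(global_storage_access_map)

cluster1 = AgentCluster()

# Policy changes after cluster1 was minted...
global_storage_access_map[("https://a.example", "https://b.example")] = True

cluster2 = AgentCluster()
# cluster1 still sees the old value; cluster2 sees the updated one.
```

This is exactly the situation the comment above worries about: a process that only sees the global map cannot tell which point-in-time copy a given agent cluster is operating on.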
Thanks for adding a global state to represent the access grants. Now that we have a concept of a map storing the storage access flags, does it make sense to just integrate with permissions that are keyed to the top-level and embedder origins, rather than introduce a storage access map that's unique to this spec?
I've filed #32 to consider this.
@annevk is it possible for two instances of a top-level origin to be in separate agent clusters?
Certainly. If you have two browsing context groups they'll always have distinct agent clusters. The way I see it you'd almost never use the global map directly. You only use that for pushing a change in policy and when minting new agent clusters.
The way I see it you'd almost never use the global map directly. You only use that for pushing a change in policy and when minting new agent clusters.
Right, that's what I've attempted to define.
Okay, I've addressed all of @annevk's second batch of comments. Please take another look.
Certainly. If you have two browsing context groups they'll always have distinct agent clusters. The way I see it you'd almost never use the global map directly. You only use that for pushing a change in policy and when minting new agent clusters.
Perhaps the resolution of https://github.com/privacycg/storage-access/issues/31 may help sort this out, but I'm wondering if the agent cluster is the right granularity for keeping the copy of the [=global storage access map=] that's meant to be accessed to unblock storage access. This may also be diving too deep into the impl details, but in a world where both storage access and networking are performed in a different process than the renderer, I see two potential pain points around the [=partitioned storage key=]. Since a block decision could be made lower down in the StoragePartition being shared by multiple agent clusters, and then reversed at a higher layer that has context about the agent cluster accessing it, there may exist some time where the result is inconsistent: the StoragePartition thought it blocked access but the agent cluster requesting access unblocked it, vs. enlightening the StoragePartition about the access map and making the right decision to start with.

Clarifying the actual intersection with Fetch and other specs as to how the storage access on the agent cluster is then propagated and used would be helpful in resolving these issues and concerns.
If you have A1 which embeds B1 and A2 which embeds B2 and As are same-site, Bs are same-site, and A and B are cross-site, you get (A, A) and (A, B) as keys. If B1 gets storage access it gets the key (B, B). But that doesn't mean B2 changes from (A, B). The parent process ought to keep track of these keys and attach them to state lookups as appropriate.
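The keying behavior described above can be written out as a tiny function. This is a sketch of the example only, using placeholder sites "A" and "B"; the per-frame `has_storage_access` flag and the `storage_key` helper are illustrative names, not spec terms:

```python
# annevk's example: a frame's key is (top-level site, frame site)
# until that particular frame obtains storage access, at which point
# its key becomes (frame site, frame site). The grant is per-frame,
# so a sibling frame with the same sites keeps its partitioned key.
def storage_key(top_level_site, frame_site, has_storage_access):
    if has_storage_access:
        return (frame_site, frame_site)
    return (top_level_site, frame_site)

# A1 embeds B1, and A2 embeds B2. Only B1 has been granted access.
a1_key = storage_key("A", "A", False)   # a top-level A document
b1_key = storage_key("A", "B", True)    # B1, after the grant
b2_key = storage_key("A", "B", False)   # B2, unaffected by B1's grant
```

As the comment says, the parent process would need to track which flag applies to which frame and attach the right key to each state lookup.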
@Brandr0id wrote:
Clarifying the actual intersection with Fetch and other specs as to how the storage access on the agent cluster is then propagated and used would be helpful in resolving these issues and concerns.
I think "clarified" is understating it here; this bit is totally undefined at the moment. #31 is tracking the task to do this.
Do you think this PR should wait until we've resolved #31?
I don't think you should necessarily block on this. However, depending on how we clarify interactions with different types of storage, it may make sense to change where the map, or a copy/cache of the map, lives.

For example, Fetch itself doesn't necessarily have an implicit relationship with the Agent Cluster that I'm aware of. I'd like to see spec language identify how we go from Agent Cluster map -> Fetch -> Storage Access in a manner that doesn't rely on specific architectural nuances of any one implementation. Ideally this would rely on primitives available in Fetch if the state is to be passed in, or the map/state would be retrieved from somewhere other than the agent cluster if it is meant to be dealt with inside the Storage Partition.
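The first of those two options (state passed in via Fetch primitives) might look roughly like this. Everything here is hypothetical, a sketch of the architectural idea rather than of any spec or browser API; `Request.storage_access_map`, `renderer_fetch`, and `storage_partition_handle` are invented names:

```python
# Sketch of the "pass state via Fetch" option: the renderer, which has
# the agent cluster, attaches a snapshot of the relevant storage-access
# state to the request, so the process doing the network/storage work
# never has to reach back into an agent cluster it can't see.
from dataclasses import dataclass, field

@dataclass
class Request:
    url: str
    # Snapshot of storage-access state, attached by the caller.
    storage_access_map: dict = field(default_factory=dict)

def renderer_fetch(url, agent_cluster_map):
    # Renderer side: copy the agent cluster's map onto the request.
    return Request(url, storage_access_map=dict(agent_cluster_map))

def storage_partition_handle(request, key):
    # Storage/network side: consult only the request, never the
    # agent cluster. Unknown keys default to "no access".
    return request.storage_access_map.get(key, False)
```

The alternative in the comment above would instead have the Storage Partition own the authoritative map, with the renderer querying it rather than pushing snapshots.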
FWIW, currently fetch (at least when invoked) very much has access to the agent cluster through request's client. (That's not to say I don't think you have a point about some of this potentially being problematic depending on how you go about implementing it, but it's not as simple as that.)
@Brandr0id wrote:
Do you think this PR should wait until we've resolved #31?
I don't think you should necessarily block on this. However, depending on how we clarify interactions with different types of storage, it may make sense to change where the map, or a copy/cache of the map, lives.
Oh, yeah, for sure. I expect the current text to drastically change as we start to spec out the interactions with different storage types.
I've re-requested review from you, @Brandr0id, now that 6c1e4b4 is in place. I'd like to land this before the F2F if possible.
Added a comment about the update. Please feel free to merge as-is and we can always address it with a follow-up change; there are a bunch of other great changes worth getting in here.
Okay, I'm going to go ahead and merge this then. @Brandr0id, please file a followup! :)
Great, and thanks so much for all of the suggested improvements. I think I've taken care of all of them.
@johnwilander, what do you think?