storacha / w3up

⁂ w3up protocol implementation
https://github.com/storacha-network/specs

Aligning plans with SLA #839

Open Gozala opened 1 year ago

Gozala commented 1 year ago

This thread https://github.com/web3-storage/RFC/pull/2/files#r1270991627 got me thinking about whether our setup can support mapping Plans to SLAs, which would in turn affect which aggregation queue uploaded pieces land in.

I think we can think of w3up as a multi-tenant system where web3.storage and nft.storage are the tenants. Tenants can have "capability providers" identified by DID, e.g. did:web:free.web3.storage and did:web:basic.web3.storage. When you provision a space you basically link a "capability provider" with it.
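
A minimal sketch of this model (illustrative types and names, not anything from the w3up codebase): tenants own capability providers, and provisioning links a space with one of those providers.

```ts
// Hypothetical model of tenants, capability providers, and provisioning.
// None of these types or names come from the w3up codebase.

type DID = `did:${string}`

/** A tenant, e.g. web3.storage or nft.storage. */
interface Tenant {
  did: DID
  /** Capability providers the tenant offers, e.g. did:web:free.web3.storage */
  providers: DID[]
}

/** Provisioning a space links it with a capability provider. */
interface Provision {
  space: DID
  provider: DID
}

const provisions: Provision[] = []

function provision(space: DID, provider: DID): void {
  provisions.push({ space, provider })
}

provision('did:key:zExampleSpace', 'did:web:free.web3.storage')
```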

The thinking so far has been that plans map 1:1 with "capability providers", meaning that a space can be enrolled in at most one plan per tenant. This implies that going from the "free" plan to the "basic" plan is an upgrade, not an addition.

A space can have multiple providers across the different tenants, however; that is, you could provision the same space with both web3.storage and nft.storage providers. When w3up receives a store/add request, it can decide which provider / tenant it is for based on the invocation aud.
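
As a sketch of that routing decision (assumed data shapes, not actual w3up internals), the handler can pick the provider whose tenant matches the invocation `aud`:

```ts
// Hypothetical routing of a store/add invocation to a provider / tenant
// based on the invocation `aud`. Names and shapes are illustrative.

type DID = `did:${string}`

interface Invocation {
  aud: DID // the tenant the invocation is addressed to
  with: DID // the space the capability is invoked on
}

/** provider DID -> tenant DID (illustrative data, not real config). */
const providerTenant = new Map<DID, DID>([
  ['did:web:free.web3.storage', 'did:web:web3.storage'],
  ['did:web:basic.web3.storage', 'did:web:web3.storage'],
  ['did:web:free.nft.storage', 'did:web:nft.storage'],
])

/** space DID -> providers the space has been provisioned with. */
const spaceProviders = new Map<DID, DID[]>([
  ['did:key:zExampleSpace', ['did:web:free.web3.storage', 'did:web:free.nft.storage']],
])

/** Pick the provider belonging to the tenant the invocation targets. */
function providerFor(invocation: Invocation): DID | undefined {
  const providers = spaceProviders.get(invocation.with) ?? []
  return providers.find((p) => providerTenant.get(p) === invocation.aud)
}
```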

Things that do not exactly align right now:

  1. Right now we only support a single tenant / aud, which is statically configured. In the future we would either need to extend the config to support multiple tenants statically, or make it dynamic and track tenants in a DB.
  2. When store/add comes in we do space/allocate, but information about the tenant is not considered:

    https://github.com/web3-storage/w3up/blob/30820287c12ecd4c3cabbeec6f31b9742c444296/packages/capabilities/src/space.js#L71C1-L76

    I think we need to (sketched after this list):

    1. Pass tenant information so it is considered.
    2. The response should include the plan currently associated with the subscription at that time.
    3. We should capture the plan while writing to the store table, so that we'll be able to know which queue the derived piece should go to.
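
Here is a rough sketch of those three changes together (hypothetical names such as `lookupPlan` and `writeStoreRecord`, not the actual space/allocate implementation linked above):

```ts
// Hypothetical allocation flow: tenant is passed in (1), the current plan is
// returned (2), and the plan is captured in the store record (3).

type DID = `did:${string}`
type Plan = 'free' | 'basic'

interface AllocateInput {
  space: DID
  size: number
  tenant: DID // (1) tenant information is passed so it can be considered
}

interface AllocateOutput {
  plan: Plan // (2) the plan currently associated with the subscription
}

interface StoreRecord {
  space: DID
  link: string // CAR CID
  size: number
  plan: Plan // (3) captured at write time so the derived piece can be queued
}

// Assumed helpers, declared here only to make the sketch self-contained.
declare function lookupPlan(space: DID, tenant: DID): Promise<Plan>
declare function writeStoreRecord(record: StoreRecord): Promise<void>

async function allocate(input: AllocateInput, link: string): Promise<AllocateOutput> {
  const plan = await lookupPlan(input.space, input.tenant)
  await writeStoreRecord({ space: input.space, link, size: input.size, plan })
  return { plan }
}
```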

⚠️ If you upgrade the plan, items that were already added would not automatically move to the corresponding queue, because an item might already be in the pipeline and it would be too complicated and likely error-prone to try to move it. We could, however (either automatically or per user / API request), enqueue all of the pending pieces into the higher-priority queue, in which case a piece may end up in multiple aggregates / deals.
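
The optional re-enqueue could look roughly like this (assumed helpers `pendingPieces` and `enqueue`; not an existing API):

```ts
// Hypothetical re-enqueue after a plan upgrade: pending pieces are pushed
// onto the higher-priority queue, accepting that a piece may also still be
// in the old queue and thus end up in multiple aggregates / deals.

type Plan = 'free' | 'basic'

interface Piece {
  cid: string
  plan: Plan
}

declare function pendingPieces(space: string): Promise<Piece[]>
declare function enqueue(queue: Plan, piece: Piece): Promise<void>

async function reenqueueAfterUpgrade(space: string, newPlan: Plan): Promise<void> {
  for (const piece of await pendingPieces(space)) {
    await enqueue(newPlan, { ...piece, plan: newPlan })
  }
}
```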

⚠️ It's not obvious to me how we would deal with the same piece being added to spaces with different plans. Ideally we would associate it with the highest-ranking plan.
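
"Highest ranking" could be as simple as an explicit rank table (illustrative plans and ranks; no such ordering is defined in w3up today):

```ts
// Hypothetical plan ranking: when a piece belongs to spaces with different
// plans, associate it with the highest-ranking one.

type Plan = 'free' | 'basic'

const rank: Record<Plan, number> = { free: 0, basic: 1 }

function highestPlan(plans: Plan[]): Plan {
  // 'free' is the fallback when no plans are found.
  return plans.reduce((a, b) => (rank[a] >= rank[b] ? a : b), 'free')
}

highestPlan(['free', 'basic']) // => 'basic'
```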

Gozala commented 1 year ago

Alternatively, we could look up the plans associated with a piece before submitting it for aggregation; however, that would mean doing a bunch of lookups:

  1. Finding all the spaces & providers for the piece
  2. Mapping ☝️ to whichever plan they are associated with

This approach would account for plan changes that may have happened between the write and the time the piece is queued, but it would introduce overhead & I personally don't think it's worth it, given that the plan could still change while the piece is in the pipeline.
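
For comparison, the lookup-at-aggregation alternative would look roughly like this (assumed lookup helpers `spacesForPiece` and `planForSpace`; both would be extra read ops on every dequeue):

```ts
// Hypothetical plan resolution at aggregation time rather than write time.

type DID = `did:${string}`
type Plan = 'free' | 'basic'

declare function spacesForPiece(piece: string): Promise<DID[]> // lookup 1
declare function planForSpace(space: DID): Promise<Plan> // lookup 2

const rank: Record<Plan, number> = { free: 0, basic: 1 }

/** Resolve the queue for a piece just before submitting it for aggregation. */
async function queueForPiece(piece: string): Promise<Plan> {
  const spaces = await spacesForPiece(piece)
  const plans = await Promise.all(spaces.map(planForSpace))
  return plans.reduce((a, b) => (rank[a] >= rank[b] ? a : b), 'free')
}
```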

vasco-santos commented 1 year ago

Reading through this, the original proposal looks great. It is definitely fine that swapping plans does not interfere with in-flight operations; otherwise, a lot of extra read ops would always be required.