nucypher / protocol

Upstream research, development and discussion relating to the NuCypher protocol and economic design
GNU General Public License v3.0

Pricing structure (what is paid work?) #7

Open arjunhassard opened 5 years ago

arjunhassard commented 5 years ago

Let's assume that all remuneration calculations discussed in this issue incorporate the following inputs in precisely the same way:
1) The number of Ursulas assigned to the policy (n)
2) The number of recipients (which equals the number of policies until we have multi-Bob policies)

Primary Calculation Inputs

[Input 1: Policy duration] Our current calculation takes one further input: the policy duration, measured in periods of 24 hours. This enables the total cost of a policy to be calculated up front (value) and paid into an escrow; the sum is split across the number of periods in the policy's duration, and each portion is paid out to participating Ursulas every time they confirm activity.
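As a rough illustration of this duration-based scheme (a minimal sketch, with made-up names like rate_per_period rather than the actual contract interface):

```python
# Hypothetical sketch of the duration-based calculation described above.
# Names (rate_per_period, etc.) are illustrative, not the actual contract API.

def policy_value(n: int, num_recipients: int, duration_periods: int,
                 rate_per_period: int) -> int:
    """Total value escrowed up front for a policy."""
    return n * num_recipients * duration_periods * rate_per_period

def payout_per_period(value: int, duration_periods: int, n: int) -> int:
    """Amount released to each participating Ursula when she confirms activity."""
    return value // (duration_periods * n)

# Example: 3 Ursulas, 1 recipient, 30 periods (days), 100 units per Ursula-period
value = policy_value(n=3, num_recipients=1, duration_periods=30, rate_per_period=100)
assert value == 9000
assert payout_per_period(value, duration_periods=30, n=3) == 100
```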

[Input 2: Access requests] The number of access requests sent to a policy becomes the variable determining the policy's price, replacing duration.
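A comparable sketch for request-based pricing, again with a hypothetical rate_per_request parameter:

```python
# Illustrative sketch of request-based pricing; rate_per_request is a made-up
# parameter, not part of the existing contracts.

def cost_by_requests(n: int, num_recipients: int, num_requests: int,
                     rate_per_request: int) -> int:
    """Cost accrues per access (re-encryption) request rather than per period."""
    return n * num_recipients * num_requests * rate_per_request

# A chatty application (10,000 requests) pays far more than a cold-storage one
# (10 requests), even if both policies last the same number of days.
assert cost_by_requests(n=3, num_recipients=1, num_requests=10_000, rate_per_request=1) == 30_000
assert cost_by_requests(n=3, num_recipients=1, num_requests=10, rate_per_request=1) == 30
```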

[Input 3: Users] The number of users is the primary input into the cost calculation. This can be calculated as the sum of Alices, the sum of Bobs, the sum of both, or the sum of unique keys (i.e. users/devices can act as both Alices and Bobs without incurring extra costs). This would make it easier for network adopters to budget for access control, since for many applications, revenue (and/or the venture's fundraising potential) is a function of the total user population.
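A minimal sketch of user-based pricing under the "sum of unique keys" interpretation (all names illustrative):

```python
# Minimal sketch: the billable quantity is the number of unique keys, so a user/device
# acting as both an Alice and a Bob is only counted once. All names are illustrative.

def billable_users(alice_keys: set[str], bob_keys: set[str]) -> int:
    return len(alice_keys | bob_keys)

def cost_by_users(alice_keys: set[str], bob_keys: set[str], rate_per_user: int) -> int:
    return billable_users(alice_keys, bob_keys) * rate_per_user

alices = {"key_a", "key_b"}
bobs = {"key_b", "key_c"}   # key_b belongs to a device that is both an Alice and a Bob
assert billable_users(alices, bobs) == 3
assert cost_by_users(alices, bobs, rate_per_user=5) == 15
```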

Additional Logic, Calculation Modifiers & Combinations

Real-world scenarios

Which calculation/combination we settle on hinges to some extent on the nature of adopter use of the network. In a scenario where hardly any adopters/users require highly frequent access requests, policy duration may suffice as the key input variable / resource. However, this would imply that a major selling point of the NuCypher network (that it can scale to handle high-volume / frequent sharing) is not being leveraged. The only exception would be an application with a very high number of policies but a low number of requests per policy (can we think of a real-world application with this characteristic?).

Excluding that particular type of application, this may also imply that we need to change our pricing significantly. In basic terms, revenue is a function of (a) the price of the service and (b) the frequency of usage; if we assume a low frequency of usage, we will have to increase the price significantly to generate enough revenue. The problem with this approach, from a product perspective, is that if we target applications with very low throughput but a strong need for trustlessness, we start competing with client-side PKI, which is free. Our value proposition is then reduced to "Alice can go offline".
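To make that revenue arithmetic explicit (numbers purely illustrative):

```python
# Back-of-the-envelope: revenue = unit price x usage volume. Holding target revenue
# fixed, assuming 100x less usage forces a 100x higher unit price.

target_revenue = 1_000_000          # desired network revenue, arbitrary units
high_volume = 100_000_000           # requests in a high-frequency-sharing world
low_volume = 1_000_000              # requests in a low-frequency-sharing world

print(target_revenue / high_volume)  # 0.01 per request
print(target_revenue / low_volume)   # 1.0 per request, i.e. 100x higher
```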

derekpierre commented 5 years ago

The only exception would be an application with a very high number of policies but a low number of requests per policy (can we think of a real-world application with this characteristic?).

^ Could this be the patient-controlled medical record app scenario? A large number of patients, each with policies issued for their respective doctors, but the number of requests made by doctors would be small and concentrated (once every 6 months, for example)... unless I'm misunderstanding what you mean.

Pricing model similarities with major KMS services (i.e. pricing based on users/number of keys and requests) could be positive, since they simplify pricing decisions for applications: a direct comparison could be made between NuCypher and the alternatives, assuming the vulnerabilities you expressed can be mitigated.

Which calculation/combination we settle on hinges to some extent on the nature of adopters' use of the network.

It would be hard to make assumptions about the nature of future adopters. Our pricing would need to be well thought out for a variety of scenarios. I would expect we would end up with a variety of policy usage patterns across applications - some high and some low. It is also possible that usage patterns for a single application fluctuate over time, e.g. increased requests for photos on a photo-sharing app during Christmas.

One thought I had, based on Prysm's key rotation comment: we could offer an optional (?) feature for re-issuing/turning over a policy after a specific period of time. AWS charges for this functionality - see https://aws.amazon.com/kms/pricing/; that said, they charge to hold on to the old keys, which we don't need to do. However, perhaps an app would like to re-issue policies to different Ursulas every so often for security reasons? Repeatedly re-issuing a policy to different Ursulas over its length may also help distribute variable-usage (high/low) policies more evenly across the network over time. Of course, some gas cost would be incurred here. If nothing else, it could be a premium-priced feature that supplements revenue, assuming it is possible.
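A rough sketch of how such re-issuance could be costed, with entirely hypothetical parameters and no claim about the actual contract design:

```python
# Hedged sketch of the optional re-issuance idea: if a policy is handed off to a fresh
# set of Ursulas every `rotation_days`, estimate how many re-issuances occur and what
# the gas + premium overhead might be. All names and rates here are hypothetical.

def reissuance_overhead(policy_days: int, rotation_days: int,
                        gas_cost_per_reissue: int, premium_per_reissue: int) -> int:
    """Total extra cost of periodically re-issuing the policy to new Ursulas."""
    reissues = max(policy_days // rotation_days - 1, 0)  # no re-issue needed at day 0
    return reissues * (gas_cost_per_reissue + premium_per_reissue)

# A 365-day policy rotated every 90 days is re-issued 3 times.
assert reissuance_overhead(365, 90, gas_cost_per_reissue=20, premium_per_reissue=50) == 210
```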