noamnelke opened 4 years ago
I can't emphasize enough the importance of what Noam describes as the usability factor in the motivation section. It is a top goal for the Spacemesh mainnet MVP cryptocurrency to delight home smeshers by giving them the flexibility to contribute as much storage as they like with a few mouse clicks or CLI commands, using only one node. We already know from usability testing and testnet feedback that each user wants to contribute a different amount of storage based on their resources. It is likely the #1 feature request we've heard consistently, so this is well beyond an untested product hypothesis. From the user experience perspective, we want to give users options to choose the post size as part of smesher setup, to make the setup experience as seamless and easy as possible. We already have a full design for that and we definitely want to implement this for the sm 0.2 milestone.
Enabling variable post size is also super important for lowering the barrier to entry to Spacemesh, a major project value, and for enabling a long tail of many home smeshers who contribute storage based on their ability. Without variable post, smeshers with limited resources will be at a disadvantage compared to ones with more resources, because they don't have the resources to run and maintain more than one full node at home - which they must allocate if they want to contribute more to the network's security and get more rewards in a fixed-post world.
We also need to keep in mind that there are electricity costs involved in running a 24x7 Spacemesh node, which we want to minimize, as not consuming a lot of energy is also a major project value. Having variable post helps reduce the energy consumption of the network, which is great.
From the user experience perspective, what we want to enable with this feature is:
Add to open questions section:
Re: UX, I think a couple of interesting questions are:
What's the UX for changing one's commitment size later? Presumably a new ATX would have to be published, which means waiting an epoch, right? I suppose the disk allocation change can happen more or less immediately but the weight wouldn't change for one epoch.
Can a single user consolidate space across multiple drives? This could be, say, two hard drives connected to one computer, or possibly even connected to two separate systems/nodes.
- What's the UX for changing one's commitment size later? Presumably a new ATX would have to be published, which means waiting an epoch, right? I suppose the disk allocation change can happen more or less immediately but the weight wouldn't change for one epoch.
Yes, we need to figure out the UX and the API for modifying an existing commitment size and how it should be handled ATX-wise, but for now it is good that there's consensus that we need to support this in the core PoST design.
- Can a single user consolidate space across multiple drives? This could be, say, two hard drives connected to one computer, or possibly even connected to two separate systems/nodes.
Right now, the design is that the user specifies one system path to a storage volume where a commitment should be created - likely one or more files per commitment. I guess that consolidating free space across storage devices could be supported by letting the user specify one or more paths for the post commitment, but we'd also need to ask the user for the storage size they want to commit on each path, which is a bit more involved. I recommend we support one path for now, for the common use case of having one drive with free space that home users want to use to store the commitment file(s). I believe that the number of drives per user drops off rapidly - e.g. very few home users have 3 drives - so this use case is pretty minor.
I recommend we support one path for now, for the common use case of having one drive with free space that home users want to use to store the commitment file(s). I believe that the number of drives per user drops off rapidly - e.g. very few home users have 3 drives - so this use case is pretty minor.
Fully agree; this is something of an edge case and I don't think we need to support it now, or even soon. But to the extent that we want to encourage smeshers to consolidate accounts, ATXs, etc., it is something we may want to consider adding in the future if it becomes feasible!
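Purely as an illustration of what multi-path support could look like if it were added later, here is a minimal Go sketch. Every name here is hypothetical - the current design takes a single path, and none of these types exist in the node:

```go
package main

import "fmt"

// CommitmentPath is a hypothetical per-drive allocation: a filesystem
// path on some volume plus the number of bytes to commit there.
type CommitmentPath struct {
	Path string
	Size uint64 // bytes to commit at this path
}

// PostConfig is a hypothetical config that would let a user spread one
// logical commitment across several drives.
type PostConfig struct {
	Paths []CommitmentPath
}

// totalCommitted sums the per-path sizes into the total committed space.
func (c PostConfig) totalCommitted() uint64 {
	var total uint64
	for _, p := range c.Paths {
		total += p.Size
	}
	return total
}

func main() {
	cfg := PostConfig{Paths: []CommitmentPath{
		{Path: "/mnt/hdd1/post", Size: 200 << 30}, // 200 GiB
		{Path: "/mnt/hdd2/post", Size: 100 << 30}, // 100 GiB
	}}
	fmt.Println(cfg.totalCommitted() >> 30) // total committed space, in GiB
}
```

This also makes the extra UX cost visible: with multiple paths, the user must supply a size per path, not just one total.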
Overview
Goals and Motivation
Our current activation mechanism only allows users to commit a preset amount of storage to mining Spacemesh. The direct implication of this is a usability and security issue:
Users can't easily benefit from committing all of their free hard drive space to mining Spacemesh, reducing the amount of storage space securing the network. A sophisticated attacker can make the effort to commit more storage than an honest miner, who may not be capable of doing so, or may not be willing to make that effort.
A secondary effect is that users who insist on committing more storage will run multiple miners. This puts unnecessary additional load on the system that could be avoided with a more efficient implementation of variable size storage commitments.
Additional optimizations can be applied once variable-size space commitments are implemented, like consolidating multiple block eligibilities in a single layer into a single large block (with a proportional reward). This collapses per-block overhead, like votes, from multiple copies into one (in the mesh and on the wire).
Multiple Hare eligibilities in the same round will be collapsed into messages with stronger votes, saving communication bandwidth and processing time.
Timing
Testnet participants are eager to contribute more than the preset amount of storage to the network and are making an effort, in the form of using VMs, to run multiple instances of the spacemesh node. This makes it harder for them and less efficient for the network. It's also a missed opportunity to test a feature that will eventually have to be in the mainnet launch.
The TN2 milestone is also the ideal time to add new major functionality to the node and network.
Weights
While implementing this change we also want to factor PoET ticks into the total ATX weight (CommittedStorage * PoetTicks). This change is a necessary part of the PoET incentive scheme. For now a stub of PoET ticks will be implemented, but the number of ticks will be taken into account in the ATX weight.
Proposed Solution and Alternatives
Note: The current, tree-based PoST implementation only supports balanced trees, so we'll have to initially restrict miners to committing space that results in a balanced tree. We have a new tree-free PoST in the works that will be able to support any storage size, so this limitation will be removed.
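To tie the weight formula from the Weights section (CommittedStorage * PoetTicks) to the interim balanced-tree restriction, here is a hedged Go sketch. All names are hypothetical, the leaf-count constant is an assumption, and PoET ticks are stubbed, per the proposal:

```go
package main

import "fmt"

// labelsPerUnit is an assumed leaf count for one space unit; the real
// value is not specified in this proposal.
const labelsPerUnit = 1 << 20

// validCommitment reports whether numLabels yields a balanced PoST tree,
// i.e. is a power of two - the interim restriction noted above, to be
// lifted once the tree-free PoST lands.
func validCommitment(numLabels uint64) bool {
	return numLabels != 0 && numLabels&(numLabels-1) == 0
}

// atxWeight computes the ATX weight under the assumed
// CommittedStorage * PoetTicks scheme.
func atxWeight(committedStorage, poetTicks uint64) uint64 {
	return committedStorage * poetTicks
}

func main() {
	fmt.Println(validCommitment(4 * labelsPerUnit)) // 4 units: balanced tree
	fmt.Println(validCommitment(3 * labelsPerUnit)) // 3 units: rejected for now
	fmt.Println(atxWeight(4*labelsPerUnit, 100))    // weight with a stubbed tick count
}
```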
Blocks
The proposed solution is to make each miner eligible for a number of blocks based on the weight of their ATX. Blocks will be weighted by the following calculation:
`blockWeight` will be in the range `1<<20 <= blockWeight < 1<<21`, which means that all calculations should fit in a `uint64`.
The Tortoise will multiply each block vote by its weight and thresholds will be updated accordingly.
Layer rewards will be divided proportionally by block weight.
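A hedged Go sketch of the vote-weighting idea (the types and names are hypothetical, not from the node codebase): each block vote is multiplied by its block's weight, assumed normalized into [1<<20, 1<<21), so sums comfortably fit in 64-bit integers.

```go
package main

import "fmt"

// Vote is a hypothetical weighted block vote.
type Vote struct {
	Support bool
	Weight  uint64 // assumed normalized: 1<<20 <= Weight < 1<<21
}

// margin returns the weighted support margin; the Tortoise would compare
// this against a correspondingly weight-adjusted threshold.
func margin(votes []Vote) int64 {
	var m int64
	for _, v := range votes {
		if v.Support {
			m += int64(v.Weight)
		} else {
			m -= int64(v.Weight)
		}
	}
	return m
}

func main() {
	votes := []Vote{
		{Support: true, Weight: 1 << 20},
		{Support: false, Weight: 1<<20 + 1<<19},
	}
	fmt.Println(margin(votes)) // negative: the against-vote carries more weight
}
```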
An alternative that was considered was to create a separate "sub-id" for each space unit committed. Each sub-id would be eligible for the same number of blocks, and each block would have voting power and rewards proportional to the number of ticks provided by the PoET. This alternative was dropped because it would limit us to only ever supporting whole space units.
Hare
This is best explained with a draft implementation:
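The draft itself is not reproduced here; purely as a hypothetical illustration of the idea from the overview (multiple Hare eligibilities in the same round collapsed into one message with a stronger vote), in Go - none of these types exist in the node:

```go
package main

import "fmt"

// HareMessage is a hypothetical round message carrying a collapsed
// eligibility count instead of one message per eligibility.
type HareMessage struct {
	SmesherID        string
	EligibilityCount uint32 // collapsed eligibilities for this round
	// proof, values, and signature omitted in this sketch
}

// countVotes tallies messages, weighting each by its eligibility count,
// so one collapsed message counts the same as N separate ones.
func countVotes(msgs []HareMessage) uint64 {
	var total uint64
	for _, m := range msgs {
		total += uint64(m.EligibilityCount)
	}
	return total
}

func main() {
	msgs := []HareMessage{
		{SmesherID: "a", EligibilityCount: 3}, // one message instead of three
		{SmesherID: "b", EligibilityCount: 1},
	}
	fmt.Println(countVotes(msgs)) // 4
}
```

The saving is in bandwidth and processing: one signature check and one gossip message per smesher per round, regardless of eligibility count.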
Implementation breakdown
This concludes the scope of the initial release.
Further development
Because this is a major change to the activation flow we want to strip it down to the bare minimum to de-risk the project. For this reason, the following will be deferred to follow-up tasks:
Dependencies and Interactions
Pending Unknowns
Interaction with Existing Components
Major changes are expected in the following components:
post (repo)
Stakeholders and Reviewers
The following people should be aware of the changes in their respective parts:
Tortoise - @y0sher
PoST, Hare - @moshababo
Rewards - @antonlerner
Inc/dec PoST size (API) - @avive @IlyaVi
I'm not sure yet who's the best person to review this change as a whole. When I see where most of the tricky changes are happening I'll decide.
Implementation Details
Testing and Performance
I anticipate that the impact on performance will be minimal. The additional calculation per block is a few multiplications and divisions. The same data will be retrieved as today with the addition of the cached total weight per epoch.
After implementing the "further development" block consolidation, I expect an improvement in performance, as fewer blocks will have to be processed per layer.
Testing