Closed SebastianElvis closed 3 years ago
> Should we remove "time" and "bits" from BlockHeader? "time" can be chosen by nodes, giving nodes the opportunity to parallelise mining. Is "bits" used in the protocol?

"time" will be used in difficulty adjustment.

> "time" can be chosen by nodes, giving nodes the opportunity to parallelise mining.

You are right, it's my mistake. A better way is to exclude "time" from "h_g" and "h_state". But we need to think of a way: how do we decide between 2 blocks that are identical but differ only in "time"?
For difficulty adjustment, we can use "iterations". Given the limited processor frequency, we can ensure that for x iterations, it takes at least kx seconds. With more nodes, "iterations" is likely to be smaller. Then, we can increase the difficulty.
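The bound proposed above (x iterations take at least kx seconds) can be sketched as follows. This is only an illustration: the constant `SECS_PER_ITERATION` and the function name are hypothetical, not taken from rust-ranchain.

```rust
// Sketch of the timing bound: a sequential function of x iterations
// takes at least k*x seconds, so a target block interval maps to a
// minimum iteration count. All constants here are hypothetical.

/// Hypothetical lower bound on seconds per iteration (k).
const SECS_PER_ITERATION: f64 = 1e-6;

/// Minimum iterations needed so that mining takes at least
/// `target_secs` seconds on any processor (k * x >= target_secs).
fn min_iterations(target_secs: f64) -> u64 {
    (target_secs / SECS_PER_ITERATION).ceil() as u64
}

fn main() {
    // A 60-second target interval needs at least 60 / k iterations.
    println!("{}", min_iterations(60.0)); // 60000000
}
```

The point of the bound is that, as long as k holds on all hardware, a minimum iteration count translates directly into a minimum block interval.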
> Given the limited processor frequency, we can ensure that for x iterations, it takes at least kx seconds.

To me, with hardware evolution, this is a bit idealistic. We cannot foresee how long it will take to compute an iteration in the future. It's fine for an experiment, but this assumption is not true for a robust project. Think about it this way: if the hash power won't really evolve that much, then do we really need difficulty adjustment?

I think maybe fork rust-ranchain to ranchain-demo, so that we can skip the difficulty adjustment? In an experiment we can assume that hardware computing power always remains the same, and hence there's no need for difficulty adjustment.
> With more nodes, iterations is likely to be smaller.

I am not sure. If 1 key = 1 CPU (node), then all nodes should mine out with similar iterations (because hashing is uniformly distributed). I agree with you that using "iterations" can help, but the logic is more like higher-end hardware -> smaller "iterations", rather than being related to the number of nodes.

So my conclusion is: we can remove "bits" and use "iterations", and we'd better check the average/median "iterations" of the recent 2 hours.
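A hedged sketch of the median-based idea: compare the median "iterations" over a recent window to the expected count, and scale the requirement accordingly. The function names, the window policy, and the proportional-scaling rule are all assumptions for illustration, not the actual codebase.

```rust
// Sketch: adjust the required iteration count from the median of the
// `iterations` recorded in a recent window of block headers.
// Everything here is hypothetical, not rust-ranchain's real logic.

/// Median of the `iterations` values from recent block headers.
fn median_iterations(mut recent: Vec<u64>) -> u64 {
    recent.sort_unstable();
    recent[recent.len() / 2]
}

/// If recent blocks finish in fewer iterations than expected (faster
/// hardware), raise the requirement proportionally, and vice versa.
fn adjust_required(required: u64, expected: u64, recent: Vec<u64>) -> u64 {
    let median = median_iterations(recent);
    // required * expected / median, widened to u128 to avoid overflow.
    ((required as u128 * expected as u128) / median as u128) as u64
}

fn main() {
    // Median of the recent window is 500_000 vs an expected 1_000_000:
    // nodes finish in half the expected iterations, so double the bar.
    let recent = vec![400_000, 500_000, 600_000];
    println!("{}", adjust_required(1_000_000, 1_000_000, recent)); // 2000000
}
```

A median (rather than a mean) makes the adjustment robust to a few outlier blocks mined unusually fast or slow.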
> To me with hardware evolution it's a bit idealistic. We cannot foresee how long it takes to compute an iteration in the future.

We can. For iteratively sequential functions, the execution time is bound by the processor's frequency. The world's fastest processor (~10GHz) can only be <5 times faster than the processors (2-3GHz) in laptops. This is also evidenced by the VDF contest. See the analysis in the RandChain paper.

> I think maybe fork rust-ranchain to ranchain-demo, so that we can skip the difficulty adjustment?

Is it difficult to add/remove difficulty adjustment? Intuitively it should be okay to keep them in a single project but in two different branches.
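To illustrate what "iteratively sequential" means in the exchange above: each round consumes the previous round's output, so the evaluation cannot be parallelised and wall-clock time is bounded below by the number of rounds divided by the clock frequency. The mixing step here is a toy stand-in, not RandChain's actual sequential function.

```rust
// Toy sketch of an iteratively sequential function: round i reads the
// result of round i-1, so there is no parallel shortcut. The constants
// are just a 64-bit LCG step, a stand-in for the real function.

/// Apply a toy mixing step `iterations` times, strictly sequentially.
fn sequential_eval(seed: u64, iterations: u64) -> u64 {
    let mut state = seed;
    for _ in 0..iterations {
        // Each round depends on the previous state.
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
    }
    state
}

fn main() {
    // Deterministic: same seed and round count, same output.
    assert_eq!(sequential_eval(42, 1000), sequential_eval(42, 1000));
    // Each step is a bijection on u64, so different seeds diverge.
    assert_ne!(sequential_eval(42, 1000), sequential_eval(43, 1000));
    println!("ok");
}
```

This sequential dependency is exactly why frequency, not core count, bounds the evaluation speed.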
> So my conclusion is: we can remove "bits" and use "iterations", and we'd better check the average/median "iterations" of the recent 2 hours.

I'd say so. In fact, Bitcoin also requires that every block is mined within 2 hours (by using the timestamp).
> We can. For iteratively sequential functions, the execution time is bound by the processor's frequency.

As I said, this is not always true in the future.

It is true for at least 50 years. 10GHz is the limit of processors' frequency. The relationship between frequency and voltage is: frequency = f(voltage^3). That is, to double the frequency, one should increase the voltage 8 times, which would burn the processor. This is the fundamental limit that no ASIC can break, and the reason why parallel computing exists and remains a hot research topic.
> Is it difficult to add/remove difficulty adjustment? Intuitively it should be okay to keep them in a single project but in two different branches.

Using a separate branch for experiments is a good idea.

So we are going to remove "time", but keep "bits" for difficulty validation, and use "iterations" for difficulty adjustment. All good?

Yeah, this would be good.
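The agreed-upon shape can be sketched as a header with "time" removed, "bits" kept for validation, and "iterations" recorded for adjustment. The field names, types, and the bits-to-minimum-iterations mapping are illustrative assumptions, not the actual rust-ranchain definitions.

```rust
// Sketch of the agreed header: no `time` field, `bits` kept for
// difficulty validation, `iterations` recorded for adjustment.
// All names and the validation rule are hypothetical.

struct BlockHeader {
    prev_hash: [u8; 32], // hash of the previous block
    bits: u32,           // difficulty target, validated per block
    iterations: u64,     // sequential iterations spent mining this block
    // note: no `time` field, so it cannot feed into h_g / h_state
}

impl BlockHeader {
    /// Difficulty validation sketch: recorded iterations must meet the
    /// minimum implied by `bits` (this mapping is an assumption).
    fn meets_difficulty(&self) -> bool {
        self.iterations >= self.bits as u64
    }
}

fn main() {
    let header = BlockHeader {
        prev_hash: [0u8; 32],
        bits: 1_000,
        iterations: 1_500,
    };
    assert!(header.meets_difficulty());
    println!("ok");
}
```

With "time" gone from the header, two otherwise-identical blocks can no longer differ only in their timestamp, which resolves the tie-breaking question raised earlier in the thread.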