Open AuHau opened 6 months ago
Why do we "hash a hash" for getting the Proof challenge?
I still do not understand the "Period * 67" in calculation of the pointer: https://github.com/codex-storage/codex-contracts-eth/pull/75/
Why do we "hash a hash" for getting the Proof challenge?
My thinking at the time was that proof-of-work mining usually imposes some kind of structure on the block hash (e.g. the hash should start with 10 zeroes). That could be bad for the calculation of proof probability, so I just added another hash to make sure that there's no structure in the challenge.
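The idea can be sketched in Python (hypothetical illustration; the EVM uses keccak256, which `hashlib.sha3_256` stands in for here):

```python
import hashlib

def challenge(block_hash: bytes) -> bytes:
    # Hash the block hash again so that any structure imposed on the
    # block hash by mining (e.g. leading zero bytes) does not carry
    # over into the proof challenge.
    # (Sketch only: sha3_256 stands in for the EVM's keccak256.)
    return hashlib.sha3_256(block_hash).digest()

# A "mined" block hash with imposed structure: four leading zero bytes.
mined = bytes(4) + hashlib.sha3_256(b"block").digest()[4:]
print(challenge(mined).hex())
```

The re-hashed value has no predictable structure, so it can be used directly in the proof-probability calculation.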
I still do not understand the "Period * 67" in calculation of the pointer: #75
There is a small explanation in the commit that introduces it, but I'll try to elaborate a bit.
Every time that we go to a new period, we have to select a new block hash from the list of 256 block hashes that are available in the EVM. At first we did the naive thing, which is to add the period to the block hash pointer: `pointer = ... + period % 256`. But this had a problem: if we're in pointer downtime in a certain period, then it's very likely that we'll be in pointer downtime the next period as well, and the next, and the next, etc. This is especially undesirable in integration tests that exercise the proof system, because you then have to write your tests keeping in mind that at any point there could be a long stretch of time where no proofs are required.
To alleviate this, we want to move the pointer by more than 1 every period. If we choose to move the pointer by 64, then we can be sure that we move out of the pointer downtime window should we be in it: `pointer = ... + (period * 64) % 256`. However, this introduces a new problem: because `(4 * 64) % 256 == 0`, we end up with the same pointer every 4 periods. We've basically created a cycle of length 4.

To avoid these cycles, we can choose a prime number to increase the pointer by every period. The closest prime number larger than 64 is 67, so we end up with: `pointer = ... + (period * 67) % 256`
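The effect of the step size can be checked numerically; here is a small Python sketch (the `...` offset derived from the block hash is replaced by a fixed start value, which doesn't affect the cycle length):

```python
def pointers(step: int, start: int = 0, periods: int = 256) -> list[int]:
    # Pointer positions over successive periods, modulo the
    # 256 block hashes available in the EVM.
    return [(start + period * step) % 256 for period in range(periods)]

# Stepping by 64 cycles through only 256 / gcd(64, 256) = 4 distinct pointers.
print(len(set(pointers(64))))  # 4
# Stepping by 67 (prime, hence coprime to 256) visits all 256 positions
# before repeating.
print(len(set(pointers(67))))  # 256
```

Any odd step would avoid the cycle (only coprimality with 256 matters), but a prime near 64 keeps both properties: no short cycles, and a jump large enough to clear the downtime window.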
Should Proof's `probability == 0` be supported? From some lines in the code it seems it currently is, but what does `probability == 0` mean? That the user requires a proof every period? Is this something we should support? Isn't this use case already provided by `probability == 1`? Shouldn't we hence revert on `probability == 0`, because that really means that requiring proofs has not started?
I assume this is there in order to scale the probability to compensate for downtime?
Should we potentially emit an event for when Proof was marked as missed?
Should Proof's `probability == 0` be supported? From some lines in the code it seems it currently is, but what does `probability == 0` mean? That the user requires a proof every period? Is this something we should support? Isn't this use case already provided by `probability == 1`? Shouldn't we hence revert on `probability == 0`, because that really means that requiring proofs has not started?
I chose to just support it and interpret it as `probability == 1`, but we can also assert on it. We should then assert on it when a request is posted with `probability == 0`, and maybe also before doing the `% probability`, although Solidity may already revert with a division by zero at that point.
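That interpretation can be sketched as follows (hypothetical Python, not the contract code; the function name and challenge representation are made up for illustration):

```python
def is_proof_required(challenge_value: int, probability: int) -> bool:
    # Sketch: interpret probability == 0 the same as probability == 1,
    # i.e. a proof is required every period. This also sidesteps the
    # division by zero that `% probability` would otherwise hit
    # (which in Solidity causes a revert).
    effective = max(probability, 1)
    return challenge_value % effective == 0
```

With `effective == 1` the modulo is always 0, so a proof is required every period, matching the `probability == 1` behaviour.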
I assume this is there in order to scale the probability to compensate for downtime?
Yes, indeed. See also: https://github.com/codex-storage/codex-research/blob/master/design/storage-proof-timing.md#pointer-downtime
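The compensation can be sketched like this (a hypothetical Python illustration of the idea in the linked design doc, assuming a downtime window of 64 out of every 256 blocks; the exact formula in the contracts may differ):

```python
DOWNTIME = 64  # assumed: blocks out of 256 during which no proofs are required

def scaled_probability(probability: int, downtime: int = DOWNTIME) -> int:
    # A proof is required roughly once every `probability` periods, so
    # lowering the value raises the proof frequency. Scaling by the
    # fraction of periods that are *not* downtime keeps the average
    # proof frequency at the level the request asked for.
    return probability * (256 - downtime) // 256

print(scaled_probability(100))  # 75 when a quarter of the blocks are downtime
```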
Should we potentially emit an event for when Proof was marked as missed?
So far we've only added events that were needed for the codex nodes to do their job. What would be the use of this event?
Should we check in `Periods._proofReceived()` that a proof is required for the given Slot?
Should we check in `Periods._proofReceived()` that a proof is required for the given Slot?
I thought about it and decided not to. There's no harm in providing more proofs than needed. And adding a check for it could actually be harmful, because a proof requirement can be dropped at the last moment due to a wrap-around of the block pointer.
@AuHau are we ok to close this issue?
While writing Application Properties for the Certora integration, I am slowly discovering a lot of small things that should be fixed but don't necessarily need a whole issue. I will dump them here and fix them afterwards.