Closed by lxfind 6 months ago
Has there been any work done on this issue?
Does this mean there is no way to obtain the current time, and that we can only approximate it by repeatedly calling the `Time::is_after(u64): bool` function?
Is this expected to land before testnet or mainnet?
We are currently only focusing on using epoch as time.
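For reference, epoch-based gating with the current API might look like the sketch below. `tx_context::epoch` is the existing Sui framework accessor; the module and its lock logic are hypothetical, for illustration only.

```move
// Minimal sketch: using the epoch number (not wall-clock time) to gate an action.
// `tx_context::epoch` is the existing API; everything else is made up.
module examples::epoch_lock {
    use sui::tx_context::{Self, TxContext};

    const EStillLocked: u64 = 0;

    /// Succeeds only once the current epoch has reached `unlock_epoch`.
    public entry fun claim(unlock_epoch: u64, ctx: &TxContext) {
        assert!(tx_context::epoch(ctx) >= unlock_epoch, EStillLocked);
        // ... release the locked asset to the caller ...
    }
}
```

The granularity here is an epoch (roughly a day), which is why the thread goes on to discuss finer-grained time.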
When can this change be applied on devnet?
Use-cases we care about such as atomic swaps, auctions, tokens with an unlock delay, breeding cooldown periods, etc. require time. Here's a fleshed-out version of an idea I've heard floating around:
- orders include a `do_not_execute_before` timestamp (expressed as e.g. UNIX time in ms, though I'd be pleased if someone well-informed on time suggested alternatives)
- an authority will not sign an order unless its local time is >= `do_not_execute_before`
- add the value of `do_not_execute_before` as a field in `TxContext`
- add a `Time::is_after(u64): bool` function. This is what Move code can use for time-based logic. Note that contracts cannot ask "is it before time X", only "is it after time X". We should carefully document this. [Also: the `Time` module should probably be a friend of `TxContext` + the only one that can read the raw timestamp]
- later (post-GDC), come up with a better story for ensuring that authorities' local clocks are synchronized. For now, we accept the possibility of clock skew
Storing the time in a shared object is equivalent to moving the timestamp kept in every block header of a classic blockchain into a single, reusable piece of storage. Compared with block-header storage this saves a lot of space: a classic chain with N blocks accumulates N block timestamps, whereas Sui's shared object needs only a single stored timestamp.

As I understand it, users only need a read-only interface for obtaining the current time: when programming in Move, they would simply read a CurrentTime value from the context.

But consensus on this shared object would then need to work like a classic blockchain: it must guarantee that the time is monotonically increasing, and that the interval between updates stays within some error bound. For example, the time difference between two consecutive Bitcoin blocks will not be much greater than 10 minutes.
But won't reaching consensus on shared objects in a fashion similar to a classic blockchain (agreeing before moving forward) negate the advantages of owned objects, and the TPS scaling they enable? Maybe I am misunderstanding, but won't this introduce a bottleneck similar to the one ETH has with the Merkle tree root in each block?
Would it be possible to use Verkle trees with vector commitments, with objects as leaves, to represent the current state of objects on Sui? This could be used to track state changes and mark timestamps of when changes occurred, retroactively timestamping transactions without having to form some sort of consensus.
Let me know if I am thinking about this all wrong. Just find this problem to be a really interesting one.
Thanks for your question @aryansheikh. Yes, requiring that transactions involving time go through consensus means you lose the throughput benefit of owned objects, but only for those transactions, not in general. The situation is also not quite the same as a traditional blockchain with blocks and Merkle trees, because our consensus works on a DAG and is not probabilistic, so (a) ordering is done per-shared-object rather than globally, and (b) you can be certain of finality much sooner.
Cc @gdanezis or @kchalkias on the question of Verkle Trees, as it isn't really my area, but from what I could gather, they are like Merkle Trees with a higher branching factor, which helps with reducing the size of a finality proof, but I don't see how it would help with avoiding consensus, maybe you could elaborate? Other than that detail, your suggestion is, I think, similar to what is proposed in this issue.
Use-cases we care about such as atomic swaps, auctions, tokens with an unlock delay, breeding cooldown periods, etc. require time. Here is the proposed design for exposing time to transactions:
Introduce a `Clock` type, and an API `tx_context::time(clock: &Clock, ctx: &TxContext): u64` that fetches the creation unix timestamp from the header associated with the response from consensus that ordered the transaction being run.

The only way to get access to the `Clock` type is to request it as a parameter in an entry function. It cannot be stored, or duplicated. `Clock` parameters can appear at most once, by reference, immediately preceding the context parameter. An entry function with a non-conforming signature will fail to verify.

**Forcing a transaction through consensus**
The existence of a `&Clock` entry function parameter signals that the transaction must go through consensus, as that is the only way time can be passed in. This means that a transaction that otherwise involves only owned objects will go through consensus if it needs access to time.

Initially, the plan is to introduce `Clock` as a shared object at a well-known address which SDKs must supply for you. The main upside to this approach is that it is quick to get up and running and adds no special cases, but it requires special SDK support, and all transactions that require time will be sequenced against the same object, which could lead to contention, hurting throughput. In future we may consider one of the following strategies:

- Passing `Clock` to the validator, which will recognise the type and force the transaction into consensus, but also supply it in the Move adapter layer.
- Introducing multiple `Clock` objects and having SDKs spread the load between them.
- Ensuring that transactions which only read the `Clock` do not introduce data dependencies between each other.
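The signature rules above can be illustrated with a sketch. The `Clock` type and the `tx_context::time` accessor follow the proposal in this comment; the module name and auction logic are invented for illustration and are not a real framework API.

```move
// Sketch of an entry function conforming to the proposed rules: at most one
// `Clock` parameter, taken by reference, immediately before the context.
// Because `&Clock` appears in the signature, this transaction is forced
// through consensus even though it might otherwise touch only owned objects.
module examples::auction {
    use sui::clock::Clock;
    use sui::tx_context::{Self, TxContext};

    const EAuctionNotStarted: u64 = 0;

    public entry fun bid(start_ms: u64, clock: &Clock, ctx: &mut TxContext) {
        // Unix timestamp (ms) from the consensus commit that ordered this
        // transaction, per the proposed `tx_context::time` API.
        let now_ms = tx_context::time(clock, ctx);
        assert!(now_ms >= start_ms, EAuctionNotStarted);
        // ... record the bid against the auction ...
    }
}
```

A signature such as `fun bid(clock: &Clock, start_ms: u64, ctx: &mut TxContext)` or one taking `Clock` by value would fail to verify under the rules described above.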