kindelia / Kindelia

An efficient, secure cryptocomputer
https://kindelia.org/

`kindelia publish` improvement #228

Open kings177 opened 1 year ago

kings177 commented 1 year ago

If a rollback happens while there is a publish in the mempool, that publish will be removed from the mempool, so you need to publish the file again. Since it's the same file, it's bound to add already-published funs and ctrs to the mempool, which is a pain in the ass, considering that the node will receive everything, including "repeated" things, and return a wall of ERRORs, wasting a lot of time, just to get another rollback, and the cycle repeats.

That being said, I think it would be pretty useful to have a command like:

`kindelia node publish --nodebug`

or

`kindelia node republish`

that ignores already-published functions and only adds to the mempool those that "didn't fail".
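For illustration, here is a minimal sketch (in Rust, like the node itself) of what such a filter could look like; `Statement`, `NodeView`, and `is_on_chain` are made-up stand-ins rather than Kindelia's real types, and the rule used here (skip any fun/ctr whose name the chain already knows) is only an assumption about how "didn't fail" would be decided:

```rust
use std::collections::HashSet;

enum Statement {
    Fun(String), // function definition, keyed by name
    Ctr(String), // constructor definition, keyed by name
    Run(String), // a `run` block, keyed by its serialized body
}

struct NodeView {
    published_names: HashSet<String>,
}

impl NodeView {
    // A fun/ctr counts as "already published" if the chain already knows
    // its name; `run` blocks are always resubmitted.
    fn is_on_chain(&self, stmt: &Statement) -> bool {
        match stmt {
            Statement::Fun(name) | Statement::Ctr(name) => {
                self.published_names.contains(name)
            }
            Statement::Run(_) => false,
        }
    }
}

/// Keep only the statements that still need to go into the mempool.
fn republish_filter(node: &NodeView, stmts: Vec<Statement>) -> Vec<Statement> {
    stmts.into_iter().filter(|s| !node.is_on_chain(s)).collect()
}

fn main() {
    let node = NodeView {
        published_names: ["SomeFunc".to_string()].into_iter().collect(),
    };
    let stmts = vec![
        Statement::Fun("SomeFunc".into()),      // already on chain: skipped
        Statement::Ctr("SomeCtr".into()),       // new: kept
        Statement::Run("{ (Done #0) }".into()), // always kept
    ];
    let to_publish = republish_filter(&node, stmts);
    println!("{} statement(s) left to publish", to_publish.len());
}
```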

steinerkelvin commented 1 year ago

Hum, this would be complementary to the functionality of `kindelia check`, which would do the same thing (checking if something was already published) but on the node/server side.

Also, we should make the mempool work nicely with rollback. One simple way to do this would be to re-add the transactions from the removed blocks back to the mempool.
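To make that concrete, here is a minimal sketch of the "re-add on rollback" idea, assuming a hypothetical in-memory mempool; `Transaction`, `Block`, and `Mempool` are illustrative stand-ins, not the node's actual types:

```rust
use std::collections::HashSet;

#[derive(Clone, PartialEq, Eq, Hash)]
struct Transaction(Vec<u8>);

struct Block {
    transactions: Vec<Transaction>,
}

#[derive(Default)]
struct Mempool {
    pending: Vec<Transaction>,
    seen: HashSet<Transaction>,
}

impl Mempool {
    fn add(&mut self, tx: Transaction) {
        // Deduplicate so transactions re-added across repeated rollbacks
        // don't pile up as duplicates.
        if self.seen.insert(tx.clone()) {
            self.pending.push(tx);
        }
    }

    /// On a rollback, walk the blocks that were dropped from the canonical
    /// chain and put their transactions back into the mempool, so clients
    /// don't have to publish the same file again.
    fn readd_from_rollback(&mut self, removed_blocks: &[Block]) {
        for block in removed_blocks {
            for tx in &block.transactions {
                self.add(tx.clone());
            }
        }
    }
}

fn main() {
    let mut mempool = Mempool::default();
    let orphaned = Block {
        transactions: vec![Transaction(b"fun (SomeFunc ...) ...".to_vec())],
    };
    mempool.readd_from_rollback(&[orphaned]);
    println!("mempool now holds {} transaction(s)", mempool.pending.len());
}
```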

(I think you have misdiagnosed the problem: it is that the relevant transactions are being removed from the chain itself, not the mempool.)

kings177 commented 1 year ago

Oh, that makes sense. If `kindelia check` could check and then publish the ones that "failed" the check, like a `kindelia check file.kdl publish`, it would be massive.

dan-da commented 1 year ago

Could check be integrated into publish to provide quasi atomic semantics?

Elaborating... it seems to me that the ideal behavior would be for publish to work like an atomic tx in a relational database: either all statements are applied and execute without error, or nothing is applied. Another way of looking at it is that all writes are conditional until committed.

Since "real" atomic behavior is probably asking for too much, that's why I wonder if check couldn't be used to fake it. ie, all code is first checked before allowed to publish for real.

I'm not an eth guy, so I don't know how they do it, or how other smart contract platforms handle this. But it seems like this should be a mostly solved problem by now? I don't know... eager to learn.

steinerkelvin commented 1 year ago

@dan-da That's exactly what I was thinking of doing in the future. :)

`publish` would check whether all statements would succeed in the current known state of the chain. Already-published functions would just be ignored, etc.

Yeap, we can't have an atomic publish. One publishes their transactions and "hopes" they will not be mined after third-party transactions that wreck them.

As the unit of code is a statement and we can't compound them, the user would bind multiple statements to a single reward like this:

```
run {
  ask h0 = (GetStmHash0 (GetIdx 'SomeFunc'));
  ask h1 = (GetStmHash1 (GetIdx 'AnotherFunc'));
  ask h2 = (GetStmHash0 (GetIdx 'MyContract'));
  if h0 == ... && h1 == ... && ... {
    ask miner = (GetMiner);
    ask (Call 'KGT' {Send #123456 miner});
    (Done #0)
  } else {
    (Fail)
  }
} sign {
  my_signature
}
```

Of course, this will only work properly with a set of transactions that fit in a single block (otherwise the miner can't be sure it will be rewarded for the block space it's spending). So the tooling would also help the user to split the transactions into multiple rewards, each going into a different block.
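As a rough illustration of that tooling, here is a sketch that greedily packs statements into block-sized chunks and appends one reward statement per chunk; the byte budget, the size model, and the reward placeholder are all made up for the example, not real protocol constants:

```rust
const MAX_BLOCK_SPACE: usize = 1280; // hypothetical per-block byte budget

struct Statement {
    body: String,
}

impl Statement {
    fn size(&self) -> usize {
        self.body.len() // stand-in for the real serialized size
    }
}

/// Split statements into block-sized chunks and append one reward statement
/// to each, so every block that carries part of the deploy rewards its miner.
fn split_with_rewards(stmts: Vec<Statement>, reward: &str) -> Vec<Vec<Statement>> {
    let reward_size = reward.len();
    let mut chunks: Vec<Vec<Statement>> = Vec::new();
    let mut current: Vec<Statement> = Vec::new();
    let mut used = reward_size; // reserve space for the reward up front

    for stmt in stmts {
        // Close the current chunk when the next statement would not fit.
        if used + stmt.size() > MAX_BLOCK_SPACE && !current.is_empty() {
            current.push(Statement { body: reward.to_string() });
            chunks.push(std::mem::take(&mut current));
            used = reward_size;
        }
        used += stmt.size();
        current.push(stmt);
    }
    if !current.is_empty() {
        current.push(Statement { body: reward.to_string() });
        chunks.push(current);
    }
    chunks
}

fn main() {
    let stmts: Vec<Statement> = (0..10)
        .map(|i| Statement {
            body: format!("fun (Fun{} x) {{ ... }}", i).repeat(8),
        })
        .collect();
    let chunks = split_with_rewards(stmts, "run { ... (Call 'KGT' ...) } sign { ... }");
    for (i, chunk) in chunks.iter().enumerate() {
        println!("block {}: {} statement(s) incl. reward", i, chunk.len());
    }
}
```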