Closed mandrigin closed 2 months ago
I'd agree yes, it won't save a batch if we've already moved on from it with the latest code changes
I disagree. The idea of this issue wasn't to improve performance in terms of TPS, but to maximize the use of batches, which I would say is currently below 60%. By implementing this we could even move to cheaper 2^24 provers. I would like you to reopen this issue.
I see your point @eduadiez, but we were asked not to wait for executors to seal a batch, so on a busy network the sequencer could be 2 or 3 batches ahead of the pending requests to executors; it's too late at that point to retrospectively add more to a batch. I think the way forward is for virtual counters to more accurately represent what is happening in the executor.
Some questions to get more context on this topic:
Once a batch X is closed, is this the flow?
Not quite @krlosMata :
So each block is sent to the executor in turn with a witness and DS bytes that cover the batch up to that point. Imagine batch 100 starts with block 1000: we send [1000] to the executor, then for the next block we send [1000, 1001], and so on until the batch is done; then we start on the next batch. So there is no "final" check at the end of a batch to make sure it's OK; we provide finality at the block level to keep the RPCs syncing as fast as possible from the sequencer. We effectively check the whole batch for each new block we add to it.
So because we don't wait at the end of a batch any more, we can't retrospectively rebuild a batch based on a response from the executor. On faster / busier networks we could already be 2 batches ahead of the queue going to the executors. The aim is to not restrict the sequencer to executor speed, but to let it run as fast as possible and scale horizontally with more executors.
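The per-block verification flow described above can be sketched roughly as follows. This is an illustrative sketch, not the actual sequencer code; `send_to_executor` is a stand-in for the real executor RPC call:

```python
# Sketch of the incremental batch verification flow: each new block in a
# batch is sent to the executor together with every earlier block in that
# batch, so the whole batch is effectively re-checked on every addition.

def verify_batch_incrementally(batch_blocks, send_to_executor):
    """send_to_executor is a hypothetical stand-in for the executor call;
    it receives the list of blocks covering the batch so far."""
    responses = []
    sent = []
    for block in batch_blocks:
        sent.append(block)
        # e.g. for batch 100 starting at block 1000 we send
        # [1000], then [1000, 1001], then [1000, 1001, 1002], ...
        responses.append(send_to_executor(list(sent)))
    return responses

# Example with a dummy executor that just records what it was sent:
calls = []
verify_batch_incrementally([1000, 1001, 1002], lambda blocks: calls.append(blocks))
# calls is now [[1000], [1000, 1001], [1000, 1001, 1002]]
```

Note there is no separate end-of-batch check in this flow: finality is provided block by block.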
We have a lot of conflicting interests at the moment across different issues:
We can't have all 3, but hopefully can strike a balance.
Thanks for the detailed flow xD
I guess that virtual counters (vc) are tracked while you are sending blocks to the executor, like:

Open a new batch (X vc available):
- (vc available): send [A] to executor --> response_A
- (vc available): send [A, B] to executor --> response_B
- vc are exhausted: just open another batch

The idea here is to take the real vc from response_A and adjust them while adding blocks to the batch. This will only work on batches that are not yet closed, so busy networks may not benefit from it, but low-traffic networks, or networks that do not require very high throughput, could benefit from it.
The flow will be like:
- (vc available): send [A] to executor --> response_A --> includes real_A
- (vc available): send [A, B] to executor --> response_B
- (vc available): send [A, B, C] to executor --> response_C ----> adjust vc with real_A
- (vc available): send [A, B, C, D] to executor --> response_D
- vc are exhausted: just open another batch
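The adjustment idea above can be sketched as follows. This is a minimal illustration under assumed names (`BATCH_VC_LIMIT`, `estimated_vc`, `real_vc` are all hypothetical), not the sequencer's actual counter structures:

```python
# Sketch of the proposed flow: keep adding blocks while estimated virtual
# counters (vc) leave headroom in the batch, and when an executor response
# arrives with the real counters used, replace the estimate for that block
# so the still-open batch can fit more work. All names are hypothetical.

BATCH_VC_LIMIT = 100  # assumed total virtual counters per batch


class OpenBatch:
    def __init__(self):
        self.blocks = []        # blocks added so far
        self.estimated_vc = {}  # block -> estimated counters used
        self.real_vc = {}       # block -> real counters from the executor

    def vc_used(self):
        # Prefer the executor's real figure once we have it ("adjust vc").
        return sum(self.real_vc.get(b, self.estimated_vc[b]) for b in self.blocks)

    def try_add_block(self, block, estimate):
        """Add a block if the batch still has vc headroom; otherwise the
        caller should close this batch and open another."""
        if self.vc_used() + estimate > BATCH_VC_LIMIT:
            return False
        self.blocks.append(block)
        self.estimated_vc[block] = estimate
        return True

    def on_executor_response(self, block, real):
        # "adjust vc with real_A": swap the estimate for the real value.
        self.real_vc[block] = real


batch = OpenBatch()
batch.try_add_block("A", estimate=40)
batch.try_add_block("B", estimate=40)
# With only estimates, a third 40-vc block would not fit (40+40+40 > 100):
assert not batch.try_add_block("C", estimate=40)
# response_A arrives: block A really used only 10 vc, freeing headroom:
batch.on_executor_response("A", real=10)
assert batch.try_add_block("C", estimate=40)  # now fits: 10+40+40 <= 100
```

As noted, this only helps while the batch is still open: if the real counters arrive after the batch is sealed, the freed headroom is lost, which is why busy networks (where the sequencer runs batches ahead of the executors) may not benefit.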
This seems deprecated now that we don't wait for executor responses.