kirugan opened this issue 3 months ago
Looks interesting. If no one has taken this in two weeks or more, I'll take it up. @kirugan for now I'd just slowly document the process / gotchas for implementing the change.
Hey @obasekiosa! Thanks for showing interest. We've created an application for you to contribute to Juno. Go check it out on OnlyDust!
@obasekiosa Hey there! I was wondering if you'd mind me taking on this task?
Hey @AnkushinDaniil! Thanks for showing interest. We've created an application for you to contribute to Juno. Go check it out on OnlyDust!
Whereas previously we processed all transactions sequentially, now we must first collect txs from txns_and_query_bits, start tx_executor, wait until it finishes, and only then process the results in for (txn_index, result) in res.iter().enumerate(). This is the main, but not the only, problem at the moment.
#[no_mangle]
pub extern "C" fn cairoVMExecute(
// ...
concurrency_mode: c_uchar,
) {
// ...
for (txn_index, txn_and_query_bit) in txns_and_query_bits.iter().enumerate() {
// ...
match transaction_from_api(
// ...
) {
Ok(t) => txs.push(t),
Err(e) => {
// ...
}
}
}
let mut tx_executor = TransactionExecutor::new(
state,
block_context,
TransactionExecutorConfig {
concurrency_config: ConcurrencyConfig {
enabled: concurrency_mode != 0, // concurrency_mode arrives from Go as a c_uchar; nonzero enables concurrent execution
// TODO: make this configurable
n_workers: 4,
chunk_size: 64,
},
},
);
let res = tx_executor.execute_txs(&txs);
for (txn_index, result) in res.iter().enumerate() {
let minimal_l1_gas_amount_vector: Option<GasVector>;
match result {
Ok(mut t) => {
// ...
}
Err(e) => {
// ...
}
}
}
}
What exactly is the problem here, and why is it a problem?
We have to use more memory, and we can't extract transaction traces in the provided solution.
We should allow users to use the latest Blockifier feature: concurrent execution. We should add a boolean Go flag, vm-concurrency-mode, that triggers parallel execution in Blockifier. This will require restructuring the Rust code that handles transactions.
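As a rough sketch of the Go side (illustrative only; Juno's actual CLI wiring differs, and the variable names below are hypothetical), a boolean vm-concurrency-mode flag could be parsed and then lowered to the c_uchar that cairoVMExecute expects:

package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical flag mirroring the proposed vm-concurrency-mode option.
	vmConcurrencyMode := flag.Bool("vm-concurrency-mode", false,
		"enable concurrent transaction execution in Blockifier")
	flag.Parse()

	// The Rust entry point takes a c_uchar, so the bool is converted to 0/1
	// before the CGo call.
	var concurrencyMode uint8
	if *vmConcurrencyMode {
		concurrencyMode = 1
	}
	fmt.Println("concurrency_mode passed to cairoVMExecute:", concurrencyMode)
}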
Keep in mind that concurrent transactions may modify shared data structures (maps, slices), and we should protect them with mutexes on the Go side (CGo).
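For the shared-state point, here is a minimal sketch of guarding a shared map with a sync.Mutex so that results reported from concurrent Blockifier workers don't race on it; the executionState type and recordTrace method are hypothetical, not existing Juno code:

package main

import (
	"fmt"
	"sync"
)

// executionState is a hypothetical holder for data that callbacks from
// concurrent workers might write to (Go maps and slices are not safe for
// concurrent writes).
type executionState struct {
	mu     sync.Mutex
	traces map[int][]byte // txn index -> encoded trace, illustrative
}

// recordTrace serializes writes to the shared map with a mutex, so it can
// be called from callbacks running on different threads.
func (s *executionState) recordTrace(txnIndex int, trace []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.traces == nil {
		s.traces = make(map[int][]byte)
	}
	s.traces[txnIndex] = append([]byte(nil), trace...)
}

func main() {
	state := &executionState{}
	var wg sync.WaitGroup
	// Simulate several workers reporting results concurrently.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			state.recordTrace(i, []byte{byte(i)})
		}(i)
	}
	wg.Wait()
	fmt.Println("recorded traces:", len(state.traces))
}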