NachoPal closed this issue 8 months ago.
It's most likely the new runtime is running some migration. Run with LOG_LEVEL=trace,
and if it keeps fetching remote storage then it's running a migration. You can avoid this by clearing the storage, and adding some dummy data if you want to test the migration.
Yes, you are right, it keeps logging:
TRACE (layer/35172): RemoteStorageLayer get
How long can it take? Are we talking about minutes or hours?
It depends on the migration. Basically, the duration is the number of keys * RPC latency, so having a local node will help, or at least use a node with minimal latency.
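For a rough sense of scale, the "number of keys * RPC latency" estimate above can be sketched numerically. All the numbers below are illustrative assumptions, not measurements:

```python
# Back-of-envelope migration-time estimate: one RPC round trip per key.
keys = 50_000            # storage keys the migration touches (assumed)
remote_latency_s = 0.15  # round trip to a remote RPC node (assumed)
local_latency_s = 0.002  # round trip to a local node (assumed)

remote_minutes = keys * remote_latency_s / 60
local_minutes = keys * local_latency_s / 60
print(f"remote: ~{remote_minutes:.0f} min, local: ~{local_minutes:.1f} min")
# With these assumed numbers: remote ~125 min vs local ~1.7 min.
```

The point is just that the per-key round trip dominates, which is why a local or low-latency node makes such a large difference.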
Is it technically possible to bulk query the keys to reduce migration times? It is limiting not being able to override the wasm or doing runtime upgrades (while keeping production storage) if the downtime can take hours.
You can use the decode-key command to find the storage it's migrating, and then use $removePrefix to clear that storage.
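For reference, a $removePrefix entry would go in the config's import-storage section, alongside the options already shown below. This is a sketch of the assumed syntax; the pallet and storage names (Tokens, Accounts) are placeholders for whatever decode-key reports:

```yaml
# Hypothetical example: clear all keys under Tokens.Accounts before startup,
# so the migration has nothing to iterate over.
import-storage:
  Tokens:
    $removePrefix: ['Accounts']
```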
Yeah, I know, but I'd like to avoid clearing the storage. That's why I was asking if it is technically possible to bulk query the keys to speed up the migration process, or maybe there is another workaround.
Currently it's not possible, but we could add a feature that prefetches a particular storage defined in the config file. The user would still have to define that storage.
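As a sketch of what bulk prefetching could look like, the loop below fetches keys in pages rather than one at a time. The `rpc` callable is a stand-in for a real paged-keys RPC such as Substrate's state_getKeysPaged; the function name, page size, and fake backend are all assumptions for illustration, not Chopsticks API:

```python
PAGE_SIZE = 1000

def fetch_all_keys(rpc, prefix, page_size=PAGE_SIZE):
    """Collect every key under `prefix`, `page_size` keys per request."""
    keys, start_key = [], None
    while True:
        page = rpc(prefix, page_size, start_key)
        keys.extend(page)
        if len(page) < page_size:
            return keys
        start_key = page[-1]  # resume after the last key seen

# Fake RPC backend serving 2500 keys, to show the paging behaviour:
def fake_rpc(prefix, count, start_key):
    all_keys = [f"{prefix}{i:04d}" for i in range(2500)]
    i = 0 if start_key is None else all_keys.index(start_key) + 1
    return all_keys[i:i + count]

print(len(fetch_all_keys(fake_rpc, "0xdead")))  # 2500, fetched in 3 requests
```

Fetching 1000 keys per round trip instead of one turns thousands of sequential requests into a handful, which is the speed-up being asked about.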
Instead of clearing the storage, I am trying to run a network providing chain_spec genesis files with the local (kusama-local, asset-hub-kusama-local) implementation.
The config files look something like this:

config/chopsticks/kusama/config.yml:
mock-signature-host: true
block: ${env.KUSAMA_BLOCK_NUMBER}
db: ./db.sqlite
port: 8000
runtime-log-level: 5
genesis: ./config/chopsticks/kusama/chain-spec-v1.0.0.json

config/chopsticks/asset-hub-kusama/config.yml:
mock-signature-host: true
block: ${env.STATEMINE_BLOCK_NUMBER}
db: ./db.sqlite
port: 8001
runtime-log-level: 5
genesis: ./config/chopsticks/asset-hub-kusama/chain-spec-v1.0.0.json
And running the command:
chopsticks xcm -r config/chopsticks/kusama/config.yml -p config/chopsticks/asset-hub-kusama/config.yml
However, I am getting the following error:
.../@acala-network/chopsticks-core/dist/cjs/blockchain/inherent/para-enter.js:24
throw new Error('Missing paraInherent data from block');
^
Error: Missing paraInherent data from block
at ParaInherentEnter.createInherents (.../@acala-network/chopsticks-core/dist/cjs/blockchain/inherent/para-enter.js:24:19)
at async Promise.all (index 1)
at async InherentProviders.createInherents (/Users/nacho/Desktop/PARITY/Repos/parachains-integration-tests/node_modules/@acala-network/chopsticks-core/dist/cjs/blockchain/inherent/index.js:91:23)
at async TxPool.buildBlock (/Users/nacho/Desktop/PARITY/Repos/parachains-integration-tests/node_modules/@acala-network/chopsticks-core/dist/cjs/blockchain/txpool.js:312:23)
at async TxPool.buildBlockIfNeeded (/Users/nacho/Desktop/PARITY/Repos/parachains-integration-tests/node_modules/@acala-network/chopsticks-core/dist/cjs/blockchain/txpool.js:295:9)
The AssetHubKusama parachain is not registered in Kusama's chain_spec genesis. I am not sure if that is related to the error, or if what I am trying to do is not possible.
Hmm, we have tested genesis only with Acala, seems like paraInherent doesn't handle genesis like setValidation https://github.com/AcalaNetwork/chopsticks/blob/a5dac9419552b1f7abb78560e47fe5b5ef8606d9/packages/core/src/blockchain/inherent/parachain/validation-data.ts#L72
Should I close this issue and open a new one to track this ☝🏼?
I am using @acala-network/chopsticks v0.9.5.
After a runtime upgrade for Kusama, extrinsics stop going through and no new block is built.
I have runtime-log-level: 5 in my config file, but I cannot figure out what is causing the issue from the logs (I do not see any error). Any advice on how to debug it? Any idea about what could be going wrong?