AcalaNetwork / chopsticks

Create parallel reality of your Substrate network.
Apache License 2.0

Storage Items Skipped #599

Closed Dinonard closed 8 months ago

Dinonard commented 9 months ago

I was testing an extrinsic that executes a storage migration, and I noticed that lots of storage items seem to be ignored. The main logic of the call looks like this:

match OldLedger::<T>::drain().next() {
    ... // do something here
}

OldLedger is just a simple storage map, and the above snippet is called in a loop, so many entries are deleted/migrated per call.
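To illustrate the shape of it (just a rough sketch, not the actual pallet code; Ledger, migrate_entry and MAX_ENTRIES_PER_CALL are made-up names):

// Sketch of the per-call logic: drain OldLedger inside a bounded loop so that
// each extrinsic call migrates/deletes a batch of entries.
const MAX_ENTRIES_PER_CALL: u32 = 100;

let mut consumed = 0u32;
while consumed < MAX_ENTRIES_PER_CALL {
    match OldLedger::<T>::drain().next() {
        // An entry is left in the old map: remove it and write the new format.
        Some((account, old_entry)) => {
            Ledger::<T>::insert(&account, migrate_entry(old_entry));
            consumed += 1;
        }
        // The old map is empty: the migration is done.
        None => break,
    }
}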

When I test this using try-runtime, there are around 21k entries. But when I tested it with chopsticks, the migration completed in far too few extrinsic calls, because it seems to omit lots of items: only a few hundred entries got migrated, and the count is different each time I redo the migration.

This might be a known shortcoming, but I checked the existing issues and didn't notice anything covering it.

xlc commented 9 months ago

Would you be able to provide a wasm or something so we can reproduce the issue?

Dinonard commented 9 months ago

Of course!

When I ran the migration yesterday, it took hours for all the steps to complete, with roughly 21k entries to migrate and 130k to delete. But when using the chopsticks fork, it can take only a few calls to complete everything.

ermalkaleci commented 9 months ago

This is a limitation of chopsticks: in this scenario you can't drain more than 1000 keys. It's a low-priority issue, because you really shouldn't use chopsticks to drain your entire storage anyway; it would take forever. What you should do instead is clearPrefix OldLedger and insert a few entries for testing, similar to this: https://github.com/AcalaNetwork/chopsticks/blob/77bf6207248372bf2e5dee4efda637b7d20b6c83/configs/kusama.yml#L22
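Something along these lines (just a sketch; the pallet name, storage key and value below are placeholders, not taken from your actual runtime):

import-storage:
  MyPallet:                        # placeholder pallet name
    $removePrefix: ['oldLedger']   # clear the whole OldLedger map
    OldLedger:
      # then seed a few entries to run the migration against (placeholder key/value)
      - [['5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY'], { locked: 1000 }]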

Dinonard commented 9 months ago

Thanks for the explanation and the link, I'll keep it in mind for the future. 👍

The goal of my test was to let the migration script run for some time, just to confirm everything works as I expect. I only raised the issue after seeing the discrepancy between try-runtime testing and the number of entries migrated/deleted.

Is the 1000-key limit related to how many storage items can be read/written during a single call? Or is it tied to a particular storage item (e.g. a map)?