slimcoin-project / pacli

Simple CLI PeerAssets client (extended version).
GNU General Public License v3.0

Some issue with locator related commands #135

Open buhtignew opened 3 months ago

buhtignew commented 3 months ago

At the moment I'm not able to check whether the locator-related commands create duplicates in my blocklocator.json file, so issue #117 is on standby for me.

The issue I hit while trying to create duplicates is the following: I ran token cache 539839eb5c3a3dbf1eb9b942f4a8126b58a7733efaf9d1f3ccba86904b6fcee3, token cache 160679b53e6785664e75bc3cde5e1c41e88a9aacc8afcc92b641152f51dec959, and address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -s 445930 -b 10000, and after the blocks were processed I got the following error message:

General error raised by PeerAssets. Check if your input is correct.

and no entry in the blocklocator.json file was created.

Thinking my blocklocator.json file was corrupted, I renamed it and ran address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -s 445930 -b 1000, and got the same message; a new blocklocator.json file wasn't created either.

I then renamed my blocklocator.json file back and ran address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -b 100. In this case no error message was displayed and a new entry was created in my blocklocator.json file.

Thinking there was something wrong in what I'd done so far, I tried running token cache 160679b53e6785664e75bc3cde5e1c41e88a9aacc8afcc92b641152f51dec959 -b 100 and got the following message:

Storing blockheight locators for decks: ['160679b53e6785664e75bc3cde5e1c41e88a9aacc8afcc92b641152f51dec959']
First deck spawn at block height: 3339
Start block: 3339 End block: 3439 Number of blocks: 100
Processing block: 3400

        General error raised by PeerAssets. Check if your input is correct.

and nothing changed in the blocklocator.json file.

Then I was able to successfully run address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -b 1000 twice, although there were still no transactions in the blocks I processed.

Probably I'm doing something wrong, but I don't know what.

Right now I'm running address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP; as soon as I get the result I'll publish it here.

UPDATE: The address scanning went smoothly. The output after the blocks were processed was:

Stored block data until block 52100 with hash 00000004713e6a69aeb30eb95aa96cd2595fad02357fe09ece6e4ca0c6f4a315 .
Block heights for the checked addresses: {'mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP': [21987, 22003, 23190, 23191]}
Storing new locator block heights.

I expected block 454302 to be mentioned in the mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP address section of the blocklocator.json file, since the corresponding transaction bc8c92dfe02b6c9ed53c0add55a347318d7eba6a8cde138e386f1859763be6bc has that address as receiver. But since the block is already mentioned in the mkwJijAwqXNFcEBxuC93j3Kni43wzXVuik address section, I assumed the block would somehow be taken from there.

So I measured the time needed to get the output of the transaction list mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -x -l -f 445930 -e 455622 command (after running address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP as above) and compared it with the time needed to get the same output using the transaction list mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -x -f 445930 -e 455622 command. The only difference was that the former command reports processing blocks like this:

Processing block: 446000
Processing block: 446100
Processing block: 446200
[... continues in steps of 100 ...]
Processing block: 455500
Processing block: 455600

while the latter command reported nothing while scanning.

So this time my hypothesis was that the blocks not mentioned in the address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP command output (above) were not only missing from the blocklocator.json entry for that address, but were also not retrieved from the other addresses' scans, contrary to my previous assumption.

To verify, I ran transaction list mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -x -l -f 21986 -e 23192 and then transaction list mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -x -f 21986 -e 23192, and the time needed to retrieve the output was very different: the first command took about a couple of seconds, while the latter required about a minute.

So I'd say the address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP command hasn't stored all the blocks in which the address appears, but only those not already stored for another address (mkwJijAwqXNFcEBxuC93j3Kni43wzXVuik in my case), which prevents taking advantage of previous scans to get quicker results when running commands that use the locator feature.

UPDATE 09-07-2024: token init has the same issue if used with the -c flag.

d5000 commented 1 month ago

The first bug was probably fixed with commit 1b65ff1.

The reason the error appeared only sometimes is that it occurred whenever the message "Provided start block is above the cached range. Not using nor storing locators to avoid inconsistencies." was thrown (which, due to side effects, produced the "red" error).

This was however not intended for the first caching run of a new address: you should always be able to start caching an address from an arbitrary block, as long as it's the first time you cache that address, or after you erase its entry in blocklocator.json.
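The intended guard can be illustrated with a minimal sketch; the function name and signature are hypothetical and not pacli's actual code, only the error text is taken from the report above:

```python
def check_start_block(start_block, cached_heights):
    """Reject a start block above the already cached range, except on the
    first caching run for the address (no heights stored yet)."""
    if not cached_heights:
        # First caching run: any start block is allowed.
        return True
    if start_block > max(cached_heights):
        raise ValueError(
            "Provided start block is above the cached range. "
            "Not using nor storing locators to avoid inconsistencies."
        )
    return True
```

Under this logic, the first command on a fresh blocklocator.json (e.g. -s 445930) passes, while the same start block against an entry whose highest cached height is lower would be rejected.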

I'll now look through the rest of the post after your first "UPDATE" and update this post accordingly if I find and fix new bugs.


About your first update:

I was expecting the block 454302 being mentioned in the mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP address section of the blocklocator.json file since the corresponding transaction bc8c92dfe02b6c9ed53c0add55a347318d7eba6a8cde138e386f1859763be6bc contains that address as receiver,

As far as your reported commands show, you only cached the first 50000 blocks for the ...YUP address after starting with a new blocklocator.json, so a block above 400000 would of course not show up there yet.

since the block is already mentioned in the mkwJijAwqXNFcEBxuC93j3Kni43wzXVuik address section I assumed the block would be being taken from there somehow.

If the addresses are cached separately, no entry in blocklocator.json influences another. Only when you cache a whole deck or a list of addresses do they update together.


Having read the update, I think the problem is simply that you haven't cached all the blocks, only the first 50000, which was this command's default. If you cache the whole chain, you will benefit from the locators when doing transaction list queries across the whole chain.

Having thought a bit about easy and useful improvements, I have made some changes to the address cache command in the last update:

The --force option for address cache allows something that was previously not allowed: you can now cache discontinuously, for example from block 0 to 1000 and then from 2000 to 3000. This is meant for situations where you know the address was used at certain blocks.

Commit f375a49
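The merge step behind discontinuous caching can be sketched as follows; the helper name is hypothetical and this is not pacli's implementation, just the idea of folding a new range's hits into the stored "heights" list without creating duplicates (the original concern in issue #117):

```python
def merge_heights(stored, new_heights):
    """Merge newly cached block heights into the stored list,
    keeping it sorted and duplicate-free."""
    return sorted(set(stored) | set(new_heights))

# Caching blocks 0-1000 first, then 2000-3000, leaves a clean combined list:
print(merge_heights([0, 500, 1000], [2000, 2500, 3000]))
# → [0, 500, 1000, 2000, 2500, 3000]
```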

buhtignew commented 2 weeks ago

I'm not receiving the mentioned error messages anymore.

I've tested address cache with the new -f flag, and it seems that if there are two discontinuous scans for the same address, it's impossible to scan the blocks in between afterwards. For instance, right now I have the following line in my blocklocator.json:

"mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP": {"heights": [21987, 22003, 23190, 23191, 454248, 454268, 454287, 454292, 454297, 454300, 454301, 454302, 455371, 455622, 455646, 456250, 456263, 456278], "lastblock": "000000fb52d11c936dd3a8d4320b047c54347b5575dbf59af0c51c9a07350eb9"}

So there is a gap between the blocks 23191 and 454248.
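Such gaps can be spotted programmatically. A minimal sketch (the entry shape follows the blocklocator.json excerpt above; the helper name and the min_gap threshold are my own, not part of pacli):

```python
def find_gaps(heights, min_gap=1):
    """Return (start, end) pairs of uncached block ranges between
    consecutive stored heights that are more than min_gap apart."""
    return [(a + 1, b - 1) for a, b in zip(heights, heights[1:]) if b - a > min_gap]

# Entry shaped like the blocklocator.json line above (truncated):
entry = {"heights": [21987, 22003, 23190, 23191, 454248]}
print(find_gaps(entry["heights"], min_gap=10000))
# → [(23192, 454247)]
```

This reports exactly the uncached range that the -f flag currently refuses to fill.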

If I try to run address cache mx5MdsenFDZufFuwT9ND7BKQBzS4Hy9YUP -s 58600 -b 300 -f I'm getting

Error: Starting caching before the last stored block is not supported, as this might lead to inconsistencies.

This, for instance, doesn't let me cache the transaction 3c8122dbffd98c846620c4236e84c8252c2633ca621b0d39d97dd8b3ca282a62 that is in block 58692.

Maybe discontinuous caching should be allowed from any point, regardless of the last caching point.

I was not yet familiar with the new token cache and address cache commands, so I accidentally ran token cache 160679b53e6785664e75bc3cde5e1c41e88a9aacc8afcc92b641152f51dec959 -s 13000 -b 1000 -f and got the following output:

Storing blockheight locators for decks: ['160679b53e6785664e75bc3cde5e1c41e88a9aacc8afcc92b641152f51dec959']
First deck spawn at block height: 3339
Start block: 4439 End block: 5439 Number of blocks: 1000
Processing block: 4500
Processing block: 4600
Processing block: 4700
Processing block: 4800
Processing block: 4900
Processing block: 5000
Processing block: 5100
Processing block: 5200
Processing block: 5300
Processing block: 5400
Stored block data until block 5439 with hash 0000000057d80a657eb4b44b58d5a64260e42612bd7287e52db5ce1c0b696d1c 
ERROR: Could not consume arg: -s
Usage: pacli token cache 160679b53e6785664e75bc3cde5e1c41e88a9aacc8afcc92b641152f51dec959 -s 13000 -
For detailed information on this command, run:
  pacli token cache 160679b53e6785664e75bc3cde5e1c41e88a9aacc8afcc92b641152f51dec959 -s 13000 - --help

Of course, when I ran token cache 160679b53e6785664e75bc3cde5e1c41e88a9aacc8afcc92b641152f51dec959 -s 13000 - --help, the scanning restarted again. Maybe we can consider improving this message.

I've also discovered that the blocklocator.json file stores the decks under their P2TH addresses. Thus address cache TOKENS_P2TH_ADDRESS enables, for the deck, functionality that the token cache TOKEN command doesn't provide, for instance discontinuous scanning (using address cache's -f flag).

Update 27-09-2024: I've looked into this further and discovered that the token cache command doesn't only store the blocks from the P2TH address (but also from the at_address and, in the case of dPoD tokens, from derived_p2th_addresses), so the situation is more complex than that. Still, I think it's possible to perform discontinuous scanning on decks using the address cache command, and, even worse, on just one address of the deck (i.e. P2TH but not at_address, for instance).

Would it make sense to try aligning the `transaction list` `-f` flag with the `address cache` one? To do so, the `--force` flag could become `--arbitrary`.