Open sergeibadass opened 3 years ago
It would be helpful. I couldn't even get substrate-node-template working.
It's been long overdue, but we have updated the open-source version with the latest substrate-interface and type registry changes, and I successfully tested the substrate-node-template config. A better README is definitely necessary and in the planning, but for now, some pointers to get substrate-node-template working (after the initial steps in the README):
1. Add your custom types to `./harvester/app/type_registry/custom_types.json`
2. Start the database:
   ```shell
   docker-compose -p node-template -f docker-compose.substrate-node-template.yml up -d mysql
   ```
3. Build and start the remaining services:
   ```shell
   docker-compose -p node-template -f docker-compose.substrate-node-template.yml up --build
   ```
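For reference, `custom_types.json` uses the scalecodec type registry format: a `"types"` object mapping type names to either a string alias or a struct/enum definition. The type names below are made-up examples, not types your chain necessarily has:

```json
{
  "types": {
    "MyBalance": "u128",
    "MyStruct": {
      "type": "struct",
      "type_mapping": [
        ["account", "AccountId"],
        ["amount", "Balance"]
      ]
    }
  }
}
```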
To empty the tables in the database, connect to MySQL on `localhost:33061` (according to the port mapping in the docker-compose file) and run `TRUNCATE <tablename>` for each table. For more information about SQL, refer to the MySQL manual.
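For example, a truncate session might look like the following (the database and table names here are assumptions; list your own with `SHOW TABLES` first):

```sql
-- Connect first, e.g.: mysql -h 127.0.0.1 -P 33061 -u root -p
SHOW TABLES;                 -- list the harvester's tables
TRUNCATE TABLE data_block;   -- repeat for each table you want to empty
```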
@arjanz Hi, I am now running into a "Could not add block" problem for my custom chain. I have updated all the submodules (e.g. py-scale-codec) to the latest version. I had the same problem when trying to sync from Kusama as well. Any idea how to fix this?
```
[2021-01-22 07:46:16,373: INFO/ForkPoolWorker-1] Task app.tasks.start_sequencer[7174d822-09df-4809-84f2-e264404e5325] succeeded in 0.2196428949246183s: {'result': 'Chain not at genesis'}
[2021-01-22 07:46:16,392: WARNING/ForkPoolWorker-2] ! ERROR adding 0x2c1c2f5d3f82cf222ed25e2339a206427c29008782dd754e2d6e341957d83609
[2021-01-22 07:46:16,393: ERROR/ForkPoolWorker-2] Task app.tasks.accumulate_block_recursive[f8ac7f88-11d8-4f90-80cb-b697cf9c165c] raised unexpected: HarvesterCouldNotAddBlock('0x2c1c2f5d3f82cf222ed25e2339a206427c29008782dd754e2d6e341957d83609')
Traceback (most recent call last):
  File "/usr/src/app/app/tasks.py", line 116, in accumulate_block_recursive
    block = harvester.add_block(block_hash)
  File "/usr/src/app/app/processors/converters.py", line 489, in add_block
    self.process_metadata(parent_spec_version, parent_hash)
  File "/usr/src/app/app/processors/converters.py", line 218, in process_metadata
    runtime = Runtime.query(self.db_session).get(spec_version)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 959, in get
    return self._get_impl(ident, loading.load_on_pk_identity)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 1069, in _get_impl
    return db_load_fn(self, primary_key_identity)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/loading.py", line 282, in load_on_pk_identity
    return q.one()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3292, in one
    ret = self.one_or_none()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3261, in one_or_none
    ret = list(self)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3334, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3355, in _execute_and_instances
    conn = self._get_bind_args(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3370, in _get_bind_args
    return fn(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3349, in _connection_from_session
```
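For context, a task named `accumulate_block_recursive` presumably walks backwards through parent hashes from a given block until it reaches one already in the database, adding each block along the way, so the error means one block in that walk failed to process. A simplified pure-Python sketch of that idea (illustrative only; the function and data below are hypothetical, not the harvester's actual code):

```python
def accumulate_blocks(parent_of, start_hash, known):
    """Walk parent links from start_hash until an already-known block.

    parent_of: dict mapping block hash -> parent hash (None at genesis).
    known: set of hashes already stored in the database.
    Returns the newly visited hashes, newest first.
    """
    added = []
    h = start_hash
    while h is not None and h not in known:
        # In a real harvester this is where per-block processing runs, and
        # where a decoding problem (e.g. a missing custom type) would raise.
        added.append(h)
        h = parent_of[h]
    return added

# Toy chain: 0xaa (genesis) <- 0xbb <- 0xcc
parent_of = {'0xcc': '0xbb', '0xbb': '0xaa', '0xaa': None}
print(accumulate_blocks(parent_of, '0xcc', {'0xaa'}))  # ['0xcc', '0xbb']
```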
What could be a good start is to make a Python file (e.g. `./harvester/test_decode.py`) which contains something like:
```python
from scalecodec.type_registry import load_type_registry_file
from substrateinterface import SubstrateInterface

substrate = SubstrateInterface(
    url='ws://127.0.0.1:9944',
    type_registry_preset='substrate-node-template',
    type_registry=load_type_registry_file('app/type_registry/custom_types.json'),
)

# block_hash = substrate.get_chain_finalised_head()
block_hash = '0x2c1c2f5d3f82cf222ed25e2339a206427c29008782dd754e2d6e341957d83609'

extrinsics = substrate.get_block_extrinsics(block_hash=block_hash)
print('Extrinsics:', [extrinsic.value for extrinsic in extrinsics])

events = substrate.get_events(block_hash)
print("Events:", events)
```
This way you can debug step by step what exactly the issue is. Probably it is a missing or incorrect type defined in your runtime and not correctly reflected in https://github.com/polkascan/polkascan-pre-harvester/blob/c5f544ad631e3754ba1e818a26b7aac1ef11f287/app/type_registry/custom_types.json
The Polkascan stack is great. (1) Can we have a README to run the harvester standalone? (2) I tried the FULL deployment and changed the WS endpoint and types.json as instructed, but I have no idea how to truncate the tables. Is there a clearer README?
Thanks.