Closed michaelsutton closed 1 year ago
Tried running this command:
cargo run --release --bin kaspad -- --appdir ~/test-rusty/data --logdir ~/test-rusty/logs
(~/test-rusty/data and ~/test-rusty/logs are initially empty before running the command.)
I get this error:
2023-05-21 18:05:06.481Z [INFO ] Processed 0 blocks and 0 headers in the last 10.00s (0 transactions; 0 parent references; 0 UTXO-validated blocks; 0.00 avg txs per block; 0 avg block mass)
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: InsufficientDaaWindowSize(0)', consensus/src/consensus/mod.rs:679:105
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
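For context on the panic itself: the log shows an `unwrap()` called on a `Result` holding the `Err` variant, which aborts the thread. A minimal sketch (with a hypothetical error type and function standing in for the actual consensus code) of how such a panic arises and how the error could be handled instead:

```rust
// Hypothetical stand-in for the consensus error seen in the log;
// the real type lives in the rusty-kaspa consensus crate.
#[derive(Debug, PartialEq)]
enum ConsensusError {
    InsufficientDaaWindowSize(usize),
}

// Hypothetical fallible operation: fails when the DAA window is empty,
// which is what the `InsufficientDaaWindowSize(0)` in the log suggests.
fn check_daa_window(window_len: usize) -> Result<usize, ConsensusError> {
    if window_len == 0 {
        return Err(ConsensusError::InsufficientDaaWindowSize(0));
    }
    Ok(window_len)
}

fn main() {
    // `check_daa_window(0).unwrap()` would panic exactly like the log line;
    // matching (or `?` in a fallible caller) keeps the thread alive.
    match check_daa_window(0) {
        Ok(len) => println!("window ok: {len}"),
        Err(e) => println!("recoverable error: {e:?}"),
    }
}
```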
Setting RUST_BACKTRACE=full, the log goes:
RUST_BACKTRACE=full cargo run --release --bin kaspad -- --appdir ~/test-rusty/data --logdir ~/test-rusty/logs
Finished release [optimized] target(s) in 0.34s
Running `target/release/kaspad --appdir /home/dev/test-rusty/data --logdir /home/dev/test-rusty/logs`
2023-05-21 18:07:34.149Z [INFO ] kaspad v0.1.0
2023-05-21 18:07:34.149Z [INFO ] Application directory: /home/dev/test-rusty/data
2023-05-21 18:07:34.149Z [INFO ] Data directory: /home/dev/test-rusty/data/kaspa-mainnet/datadir
2023-05-21 18:07:34.149Z [INFO ] Logs directory: /home/dev/test-rusty/logs
2023-05-21 18:07:34.227Z [INFO ] P2P Server starting on: 0.0.0.0:16111
2023-05-21 18:07:34.227Z [INFO ] Grpc server starting on: 0.0.0.0:16110
2023-05-21 18:07:34.227Z [INFO ] Connection manager: has 0/8 outgoing P2P connections, trying to obtain 8 additional connections...
2023-05-21 18:07:35.501Z [INFO ] Registering p2p flows for peer [::ffff:96.91.245.193]:16111 for protocol version 5
2023-05-21 18:07:35.502Z [INFO ] P2P Connected to outgoing peer [::ffff:96.91.245.193]:16111
2023-05-21 18:07:35.745Z [INFO ] Registering p2p flows for peer [::ffff:65.21.199.58]:16111 for protocol version 5
2023-05-21 18:07:35.745Z [INFO ] P2P Connected to outgoing peer [::ffff:65.21.199.58]:16111
2023-05-21 18:07:35.745Z [INFO ] Connection manager: has 2/8 outgoing P2P connections, trying to obtain 6 additional connections...
2023-05-21 18:07:35.859Z [INFO ] IBD started with peer [::ffff:96.91.245.193]:16111
2023-05-21 18:07:36.023Z [INFO ] Starting IBD with headers proof
2023-05-21 18:07:37.264Z [INFO ] Registering p2p flows for peer 81.91.189.208:16111 for protocol version 5
2023-05-21 18:07:37.265Z [INFO ] P2P Connected to outgoing peer 81.91.189.208:16111
2023-05-21 18:07:37.751Z [INFO ] Connection manager: has 3/8 outgoing P2P connections, trying to obtain 5 additional connections...
2023-05-21 18:07:40.900Z [INFO ] Registering p2p flows for peer 174.93.41.239:16111 for protocol version 5
2023-05-21 18:07:40.900Z [INFO ] P2P Connected to outgoing peer 174.93.41.239:16111
2023-05-21 18:07:41.762Z [INFO ] Connection manager: has 4/8 outgoing P2P connections, trying to obtain 4 additional connections...
2023-05-21 18:07:41.907Z [INFO ] Registering p2p flows for peer 104.193.255.11:16111 for protocol version 5
2023-05-21 18:07:41.907Z [INFO ] P2P Connected to outgoing peer 104.193.255.11:16111
2023-05-21 18:07:42.764Z [INFO ] Connection manager: has 5/8 outgoing P2P connections, trying to obtain 3 additional connections...
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: InsufficientDaaWindowSize(0)', consensus/src/consensus/mod.rs:679:105
stack backtrace:
0: 0x55a3dc4ed50e - <unknown>
1: 0x55a3dba7637e - <unknown>
2: 0x55a3dc4e84c5 - <unknown>
3: 0x55a3dc4ed2e5 - <unknown>
4: 0x55a3dc4eed4f - <unknown>
5: 0x55a3dc4eeac4 - <unknown>
6: 0x55a3dbde3d81 - <unknown>
7: 0x55a3dc4ef3f1 - <unknown>
8: 0x55a3dc4ef186 - <unknown>
9: 0x55a3dc4eda3c - <unknown>
10: 0x55a3dc4eeec2 - <unknown>
11: 0x55a3db949383 - <unknown>
12: 0x55a3db949833 - <unknown>
13: 0x55a3dbc9952c - <unknown>
14: 0x55a3dc1ec83b - <unknown>
15: 0x55a3dbeda541 - <unknown>
16: 0x55a3dbed6ec3 - <unknown>
17: 0x55a3dbe37588 - <unknown>
18: 0x55a3dbee9f3f - <unknown>
19: 0x55a3dc51805e - <unknown>
20: 0x55a3dc517502 - <unknown>
21: 0x55a3dc523519 - <unknown>
22: 0x55a3dc516f80 - <unknown>
23: 0x55a3dc520120 - <unknown>
24: 0x55a3dc512fae - <unknown>
25: 0x55a3dc500def - <unknown>
26: 0x55a3dc50dd96 - <unknown>
27: 0x55a3dc50e9b1 - <unknown>
28: 0x55a3dc51c9c1 - <unknown>
29: 0x55a3dc4f2c85 - <unknown>
30: 0x7f3dbdf74609 - start_thread
at /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8
31: 0x7f3dbdd42133 - clone
at /build/glibc-SzIz7B/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
32: 0x0 - <unknown>
Exiting...
Code I'm compiling is at commit:
commit b49e43656b42b2d74a0ef3224a038ca3375ed001 (HEAD -> prune, msuttonorigin/prune)
Author: Michael Sutton <msutton@cs.huji.ac.il>
Date: Sun May 21 21:30:32 2023 +0000
monitor logs
Running on Ubuntu 20.04.5 LTS
This problem is unrelated to this branch, but it is nonetheless fixed now.
This PR adds support for full header and block data pruning. Supporting this required implementing new reachability sub-algorithms which were never implemented in go-kaspad (which does not support on-the-fly header pruning).
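As background on the reachability queries mentioned above (this is an illustrative sketch with hypothetical names, not the PR's actual implementation): interval-based tree reachability assigns each block in the selected-parent tree an interval such that a block is a chain ancestor of another iff its interval contains the other's, making ancestry queries O(1):

```rust
// Minimal sketch of interval-based tree reachability. Each block in the
// selected-parent tree is labeled with an interval; `a` is a chain
// ancestor of `b` iff a's interval contains b's interval.
#[derive(Clone, Copy, Debug)]
struct Interval {
    start: u64,
    end: u64,
}

impl Interval {
    // Containment check: `other` lies entirely within `self`.
    fn contains(&self, other: &Interval) -> bool {
        self.start <= other.start && other.end <= self.end
    }
}

// Hypothetical helper name; answers "is `a` a chain ancestor of `b`?"
fn is_chain_ancestor(a: Interval, b: Interval) -> bool {
    a.contains(&b)
}

fn main() {
    let root = Interval { start: 0, end: 100 };
    let descendant = Interval { start: 10, end: 20 };
    println!("{}", is_chain_ancestor(root, descendant)); // true
    println!("{}", is_chain_ancestor(descendant, root)); // false
}
```

Deleting a pruned block under such a scheme requires re-labeling or reclaiming intervals without disturbing the answers for surviving blocks, which is the kind of sub-algorithm go-kaspad never needed.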
The result of this PR is a node running with a nearly-constant DB size (4-6 GB in current mainnet). Having this functionality is especially important before going for higher BPS in testnet/mainnet, in order to keep node maintenance steady and affordable.
The PR includes the following additions and changes:
… for blocks B, C still ∈ G, all DAG/chain queries return the same results as before the deletion
… ConsensusStorage and ConsensusServices structs