Closed lyfsn closed 1 month ago
could you also post your reth.toml?
@joshieDo looks like this is thrown here
This reth.toml was auto-generated by the reth node and appeared in my --datadir path:
[stages.headers]
downloader_max_concurrent_requests = 100
downloader_min_concurrent_requests = 5
downloader_max_buffered_responses = 100
downloader_request_limit = 1000
commit_threshold = 10000
[stages.bodies]
downloader_request_limit = 200
downloader_stream_batch_size = 1000
downloader_max_buffered_blocks_size_bytes = 2147483648
downloader_min_concurrent_requests = 5
downloader_max_concurrent_requests = 100
[stages.sender_recovery]
commit_threshold = 5000000
[stages.execution]
max_blocks = 500000
max_changes = 5000000
max_cumulative_gas = 1500000000000
max_duration = "10m"
[stages.prune]
commit_threshold = 1000000
[stages.account_hashing]
clean_threshold = 500000
commit_threshold = 100000
[stages.storage_hashing]
clean_threshold = 500000
commit_threshold = 100000
[stages.merkle]
clean_threshold = 5000
[stages.transaction_lookup]
chunk_size = 5000000
[stages.index_account_history]
commit_threshold = 100000
[stages.index_storage_history]
commit_threshold = 100000
[stages.etl]
file_size = 524288000
[prune]
block_interval = 5
[prune.segments]
sender_recovery = "full"
receipts = "full"
[prune.segments.account_history]
distance = 10064
[prune.segments.storage_history]
distance = 10064
[prune.segments.receipts_log_filter]
[peers]
refill_slots_interval = "5s"
trusted_nodes = []
trusted_nodes_only = false
max_backoff_count = 5
ban_duration = "12h"
[peers.connection_info]
max_outbound = 100
max_inbound = 30
max_concurrent_outbound_dials = 15
[peers.reputation_weights]
bad_message = -16384
bad_block = -16384
bad_transactions = -16384
already_seen_transactions = 0
timeout = -4096
bad_protocol = -2147483648
failed_to_connect = -25600
dropped = -4096
bad_announcement = -1024
[peers.backoff_durations]
low = "30s"
medium = "3m"
high = "15m"
max = "1h"
[sessions]
session_command_buffer = 32
session_event_buffer = 260
[sessions.limits]
[sessions.initial_internal_request_timeout]
secs = 20
nanos = 0
[sessions.protocol_breach_request_timeout]
secs = 120
nanos = 0
[sessions.pending_session_timeout]
secs = 20
nanos = 0
receipts = "full"
did you manually edit this?
can't reproduce this entry on a new run with --full
@mattsse
Hey, this is a script to start a Reth+Lighthouse node on the Endurance network.
I encountered this problem when using this script, and I can reproduce it every time I use it to start a new node.
So if you can't reproduce the problem, could you please try starting a node with this script?
https://github.com/OpenFusionist/mainnet-reth-lighthouse
This problem didn't appear in the beta version before. When I use v1.0.0 here, everything is ok.
image: ghcr.io/paradigmxyz/reth:v1.0.0
https://github.com/OpenFusionist/mainnet-reth-lighthouse/blob/main/compose.yaml#L5
But when I use latest, this problem appears. To be more precise, version 1.0.3 is fine, but v1.0.4 has issues.
I'm also seeing this issue. It doesn't happen on first boot with a fresh db, it only occurs when you restart reth with --full. I'm using op-reth but I can reproduce it with:
❯ ./op-reth node --chain optimism --full
2024-08-29T19:32:16.835291Z INFO Initialized tracing, debug log directory: /Users/zach/Library/Caches/reth/logs/optimism
2024-08-29T19:32:16.835747Z INFO Starting reth version="1.0.5 (603e39ab)"
2024-08-29T19:32:17.035902Z INFO Opening database path="/Users/zach/Library/Application Support/reth/optimism/db"
2024-08-29T19:32:17.051809Z INFO Saving prune config to toml file
2024-08-29T19:32:17.051974Z INFO Configuration loaded path="/Users/zach/Library/Application Support/reth/optimism/reth.toml"
2024-08-29T19:32:17.061399Z INFO Skipping storage verification for OP mainnet, expected inconsistency in OVM chain
2024-08-29T19:32:17.061429Z INFO Database opened
2024-08-29T19:32:17.155478Z INFO
Pre-merge hard forks (block based):
- Frontier @0
- Homestead @0
- Tangerine @0
- SpuriousDragon @0
- Byzantium @0
- Constantinople @0
- Petersburg @0
- Istanbul @0
- MuirGlacier @0
- Berlin @3950000
- London @105235063
- ArrowGlacier @105235063
- GrayGlacier @105235063
- Bedrock @105235063
Merge hard forks:
- Paris @0 (network is known to be merged)
Post-merge hard forks (timestamp based):
- Regolith @0
- Shanghai @1704992401
- Canyon @1704992401
- Cancun @1710374401
- Ecotone @1710374401
- Fjord @1720627201
2024-08-29T19:32:17.155784Z INFO Transaction pool initialized
2024-08-29T19:32:17.350790Z INFO StaticFileProducer initialized
2024-08-29T19:32:17.351186Z INFO Pruner initialized prune_config=PruneConfig { block_interval: 5, segments: PruneModes { sender_recovery: Some(Full), transaction_lookup: None, receipts: Some(Full), account_history: Some(Distance(10064)), storage_history: Some(Distance(10064)), receipts_log_filter: ReceiptsLogPruneConfig({}) } }
2024-08-29T19:32:17.351394Z INFO Consensus engine initialized
2024-08-29T19:32:17.351514Z INFO Engine API handler initialized
2024-08-29T19:32:17.351617Z INFO Creating JWT auth secret file path="/Users/zach/Library/Application Support/reth/optimism/jwt.hex"
2024-08-29T19:32:17.353954Z INFO RPC auth server started url=127.0.0.1:8551
2024-08-29T19:32:17.354153Z INFO RPC IPC server started path=/tmp/reth.ipc
2024-08-29T19:32:17.354186Z INFO Starting consensus engine
2024-08-29T19:32:18.433414Z INFO Wrote network peers to file peers_file="/Users/zach/Library/Application Support/reth/optimism/known-peers.json"
❯ ./op-reth node --chain optimism --full
2024-08-29T19:32:19.596160Z INFO Initialized tracing, debug log directory: /Users/zach/Library/Caches/reth/logs/optimism
2024-08-29T19:32:19.596846Z INFO Starting reth version="1.0.5 (603e39ab)"
2024-08-29T19:32:19.796931Z INFO Opening database path="/Users/zach/Library/Application Support/reth/optimism/db"
2024-08-29T19:32:19.833153Z ERROR shutting down due to error
Error: Could not load config file "/Users/zach/Library/Application Support/reth/optimism/reth.toml"
Caused by:
0: Bad TOML data
1: TOML parse error at line 55, column 12
1: |
1: 55 | receipts = "full"
1: | ^^^^^^
1: invalid value: string "full", expected prune mode that leaves at least 10064 blocks in the database
hmm, this
receipts = "full"
shouldn't be in there. Could you try after deleting the reth.toml file, assuming you haven't modified it manually?
I didn't touch the reth.toml; this bug occurs on its own, without any other commands or modifications, and it is easy to reproduce.
If I remove the reth.toml, it reproduces the same way. It must be something to do with how the reth.toml config is serialized for the first time with --full in the arg list: it writes a config that is not valid and then fails to read it on the second run.
❯ rm -rf /Users/zach/Library/Application\ Support/reth/optimism/reth.toml
❯ ./op-reth node --chain optimism --full
2024-08-29T20:12:54.516050Z INFO Initialized tracing, debug log directory: /Users/zach/Library/Caches/reth/logs/optimism
2024-08-29T20:12:54.516596Z INFO Starting reth version="1.0.5 (603e39ab)"
2024-08-29T20:12:54.716712Z INFO Opening database path="/Users/zach/Library/Application Support/reth/optimism/db"
2024-08-29T20:12:54.739317Z INFO Saving prune config to toml file
2024-08-29T20:12:54.739479Z INFO Configuration loaded path="/Users/zach/Library/Application Support/reth/optimism/reth.toml"
2024-08-29T20:12:54.749645Z INFO Skipping storage verification for OP mainnet, expected inconsistency in OVM chain
2024-08-29T20:12:54.749680Z INFO Database opened
2024-08-29T20:12:54.749848Z INFO
Pre-merge hard forks (block based):
- Frontier @0
- Homestead @0
- Tangerine @0
- SpuriousDragon @0
- Byzantium @0
- Constantinople @0
- Petersburg @0
- Istanbul @0
- MuirGlacier @0
- Berlin @3950000
- London @105235063
- ArrowGlacier @105235063
- GrayGlacier @105235063
- Bedrock @105235063
Merge hard forks:
- Paris @0 (network is known to be merged)
Post-merge hard forks (timestamp based):
- Regolith @0
- Shanghai @1704992401
- Canyon @1704992401
- Cancun @1710374401
- Ecotone @1710374401
- Fjord @1720627201
2024-08-29T20:12:54.750157Z INFO Transaction pool initialized
2024-08-29T20:12:54.750260Z INFO Loading saved peers file=/Users/zach/Library/Application Support/reth/optimism/known-peers.json
2024-08-29T20:12:54.940685Z INFO StaticFileProducer initialized
2024-08-29T20:12:54.940935Z INFO Pruner initialized prune_config=PruneConfig { block_interval: 5, segments: PruneModes { sender_recovery: Some(Full), transaction_lookup: None, receipts: Some(Full), account_history: Some(Distance(10064)), storage_history: Some(Distance(10064)), receipts_log_filter: ReceiptsLogPruneConfig({}) } }
2024-08-29T20:12:54.941076Z INFO Consensus engine initialized
2024-08-29T20:12:54.941182Z INFO Engine API handler initialized
2024-08-29T20:12:54.942138Z INFO RPC auth server started url=127.0.0.1:8551
2024-08-29T20:12:54.942258Z INFO RPC IPC server started path=/tmp/reth.ipc
2024-08-29T20:12:54.942271Z INFO Starting consensus engine
^C2024-08-29T20:12:55.976424Z INFO Wrote network peers to file peers_file="/Users/zach/Library/Application Support/reth/optimism/known-peers.json"
❯ ./op-reth node --chain optimism --full
2024-08-29T20:12:56.515160Z INFO Initialized tracing, debug log directory: /Users/zach/Library/Caches/reth/logs/optimism
2024-08-29T20:12:56.515517Z INFO Starting reth version="1.0.5 (603e39ab)"
2024-08-29T20:12:56.547250Z INFO Opening database path="/Users/zach/Library/Application Support/reth/optimism/db"
2024-08-29T20:12:56.586618Z ERROR shutting down due to error
Error: Could not load config file "/Users/zach/Library/Application Support/reth/optimism/reth.toml"
Caused by:
0: Bad TOML data
1: TOML parse error at line 55, column 12
1: |
1: 55 | receipts = "full"
1: | ^^^^^^
1: invalid value: string "full", expected prune mode that leaves at least 10064 blocks in the database
Location:
crates/node/builder/src/launch/common.rs:119:14
@mattsse I was able to repro on latest using
cargo run --bin op-reth -F optimism -- node --chain optimism --full
... Ctrl + c
cargo run --bin op-reth -F optimism -- node --chain optimism --full
It fails on the second execution with the same error. I can take this issue.
Looks like PruneModes.receipts = Some(Full) is being serialized as receipts = "full" by serde in crates/config/src/config.rs > impl Config > pub fn save. Should it be serialized as receipts = { distance = 10064 } (MINIMUM_PRUNING_DISTANCE)?
ah I see, this only affects optimism, because of this bad unwrap:
Describe the bug
I started a new node in a custom network to sync data from block 0 to the latest block, and encountered this error. The Docker container keeps trying to restart after some retry logs.
In the first phase, the node shows this error at stage 13/14:
In the second phase, the node still restarts with this error:
Steps to reproduce
Node logs
No response
Platform(s)
Linux (x86)
What version/commit are you on?
reth version="1.0.4 (e24e4c77)"
What database version are you on?
use default
Which chain / network are you on?
custom chain
What type of node are you running?
Archive (default)
What prune config do you use, if any?
If you've built Reth from source, provide the full command you used
No response