G8XSU opened this issue 2 years ago
Seems like this would happen if the header `Cache` used with `lightning-block-sync` didn't contain the header for the block hash in question. The cache would also need to survive across restarts.
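For context, `lightning-block-sync`'s header cache is the `Cache` trait; the crate ships `UnboundedCache`, a plain in-memory `HashMap`, which is lost on restart. Below is a minimal sketch of a cache that survives restarts, where `store_header`/`remove_header` are hypothetical persistence hooks rather than anything in the crate:

```rust
use bitcoin::BlockHash;
use lightning_block_sync::poll::ValidatedBlockHeader;
use lightning_block_sync::{Cache, UnboundedCache};

// Hypothetical persistence hooks: real code would serialize the header data
// (hash, raw header, height, chainwork) to a file or KV store and reload it
// on startup before resuming sync.
fn store_header(_hash: &BlockHash, _header: &ValidatedBlockHeader) { /* ... */ }
fn remove_header(_hash: &BlockHash) { /* ... */ }

/// Sketch of a header cache that survives restarts by writing through to
/// disk, delegating in-memory lookups to `UnboundedCache` (a `HashMap`).
struct PersistentCache {
    memory: UnboundedCache,
}

impl Cache for PersistentCache {
    fn look_up(&self, block_hash: &BlockHash) -> Option<&ValidatedBlockHeader> {
        self.memory.get(block_hash)
    }

    fn block_connected(&mut self, block_hash: BlockHash, block_header: ValidatedBlockHeader) {
        store_header(&block_hash, &block_header); // write-through on connect
        self.memory.insert(block_hash, block_header);
    }

    fn block_disconnected(&mut self, block_hash: &BlockHash) -> Option<ValidatedBlockHeader> {
        remove_header(block_hash); // disconnected headers are no longer needed
        self.memory.remove(block_hash)
    }
}
```

On startup the in-memory map would be repopulated from the persisted headers before the cache is handed back to the syncing code.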
I assume this is actually only temporary until bitcoind finishes syncing and gets the block again. In the case of a reorg we are indeed stuck, but in theory we can use the `Confirm` interface to apply the reorg without actually walking the chain back fully.
Because the general case is actually just that we have to wait (vs needing to apply the reorg), I don't think we need to rush to fix this - if we're in the reorg case, panicking until the node catches back up is fine, and using `Confirm` to apply a reorg only to undo it a second later is really kinda annoying. Eventually we should probably handle the reorg, but that should be an incredibly rare case.
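For reference, here is a rough sketch of what "using `Confirm` to apply the reorg" could look like. `recover_via_confirm` and its parameters are illustrative, not an existing LDK API; it assumes the caller has already determined, by querying bitcoind, which of the txids reported by `Confirm::get_relevant_txids` are no longer confirmed (the exact return type of that method has changed across LDK versions):

```rust
use bitcoin::block::Header; // `BlockHeader` on older rust-bitcoin versions
use bitcoin::Txid;
use lightning::chain::Confirm;

/// Illustrative helper (not an LDK API): recover from a reorg without walking
/// the old chain back block by block. `stale_txids` are the entries from
/// `Confirm::get_relevant_txids()` that bitcoind no longer reports as
/// confirmed on its best chain.
fn recover_via_confirm<C: Confirm>(
    listener: &C,
    stale_txids: &[Txid],
    new_tip: &Header,
    new_tip_height: u32,
) {
    // First un-confirm everything the reorg dropped...
    for txid in stale_txids {
        listener.transaction_unconfirmed(txid);
    }
    // ...then jump straight to the new best block. Any of these transactions
    // that re-confirm will come back via `transactions_confirmed` during
    // normal sync.
    listener.best_block_updated(new_tip, new_tip_height);
}
```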
Yes, I think it is temporary. LDK will probably crash-loop (depending on how the client has it configured) and recover after bitcoind fully syncs up to the latest chain tip. What I am not sure about is whether this can be triggered during normal deployments of a bitcoind host/service.
No. Unless bitcoind crashes, causing it to lose data, you shouldn't be able to reach this.
The sequence that triggers it:

1. bitcoind finds a block and sends it to LDK; LDK updates the best block header hash in the `ChannelManager`.
2. bitcoind is killed/crashes (an "unclean restart") without persisting the block to disk.
3. bitcoind later comes back online, and LDK tries to get the block header by hash for the `ChannelManager` (we got this block header hash from storage).
4. LDK panics because bitcoind returned "Block not found", since that block was never persisted.
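A minimal sketch of the lookup that blows up, assuming the node uses the crate's `BlockSource` trait (e.g. via its `RpcClient`) and that the panic comes from an `unwrap()`/`expect()` around this error in the calling code:

```rust
use bitcoin::BlockHash;
use lightning_block_sync::{BlockSource, BlockSourceResult};

/// Sketch of the failing startup lookup. `stored_hash` is the best block hash
/// LDK persisted before bitcoind's unclean restart. If bitcoind never flushed
/// that block to disk, `get_header` returns Err ("Block not found"); an
/// unwrap()/expect() in the calling code turns that into the panic above.
async fn lookup_stored_tip<B: BlockSource>(
    block_source: &B,
    stored_hash: BlockHash,
) -> BlockSourceResult<()> {
    let _header = block_source.get_header(&stored_hash, None).await?;
    Ok(())
}
```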
Related: https://github.com/jamaljsr/polar/issues/614