Hi! My node crashed because of a failing SSD disk, and my node was opening two channels when it happened. I started to panic a bit and got a new node up and running the next day. The channel opening did not go through, but my sats were gone. I then found a guide to force close to get the sats back. I used force close with output index 1, and I think I should have used output index 0. Is there a chance that I've lost my sats?
The channels are showing in lncli pendingchannels; I've waited for two weeks now, but nothing happens. Output (truncated at the start):
"channel": { "remote_node_pub": "0290cc884704073b2b633f69f852e8ca2a37660bb359a1e861f2b48760c298ac53", "channel_point": "629cac110c56c635c9a45f3b8cd21fef6ec780654933680293540c0385c2d9a2:0", "capacity": "3636958", "local_balance": "3633488", "remote_balance": "0", "local_chan_reserve_sat": "0", "remote_chan_reserve_sat": "0", "initiator": "INITIATOR_LOCAL", "commitment_type": "ANCHORS", "num_forwarding_packages": "0", "chan_status_flags": "", "private": false }, "closing_txid": "70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518", "limbo_balance": "3633818", "maturity_height": 0, "blocks_til_maturity": 0, "recovered_balance": "0", "pending_htlcs": [ ], "anchor": "LIMBO" } ], "waiting_close_channels": [ ]
got a new node up and running the next day.
Is this a new node or a migration from the old one (reusing the ~/.lnd directory)?
my node was opening two channels when it happened
I see that you say two, but in the next message you only show one?
The channel was opened about three weeks ago and closed (https://mempool.space/tx/70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518) ~2 weeks ago; is it one of the ones you were talking about?
Did you try restarting your node and rescanning the wallet?
Also, what's the output of lncli wallet pendingsweeps (I guess nothing, because it thinks it is still opening)?
got a new node up and running the next day.
Is this a new node or a migration from the old one (reusing the ~/.lnd directory)?
It's a migration (RaspiBlitz) using new hardware.
my node was opening two channels when it happened
I see that you say two, but in the next message you only show one?
admin@192.168.1.190:~ ₿ lncli pendingchannels { "total_limbo_balance": "4098058", "pending_open_channels": [ ], "pending_closing_channels": [ ], "pending_force_closing_channels": [ { "channel": { "remote_node_pub": "0217890e3aad8d35bc054f43acc00084b25229ecff0ab68debd82883ad65ee8266", "channel_point": "deded54c5e2227a23325de94e4031cf5f86a0784d9b2322a9830c200d69dc685:0", "capacity": "468040", "local_balance": "464570", "remote_balance": "0", "local_chan_reserve_sat": "0", "remote_chan_reserve_sat": "0", "initiator": "INITIATOR_LOCAL", "commitment_type": "ANCHORS", "num_forwarding_packages": "0", "chan_status_flags": "", "private": false }, "closing_txid": "3983c6566b58de69a8f2d30d5f4576db8fec72c9f8553de05548ea1e0c7bf779", "limbo_balance": "464570", "maturity_height": 0, "blocks_til_maturity": 0, "recovered_balance": "330", "pending_htlcs": [ ], "anchor": "RECOVERED" }, { "channel": { "remote_node_pub": "0290cc884704073b2b633f69f852e8ca2a37660bb359a1e861f2b48760c298ac53", "channel_point": "629cac110c56c635c9a45f3b8cd21fef6ec780654933680293540c0385c2d9a2:0", "capacity": "3636958", "local_balance": "3633488", "remote_balance": "0", "local_chan_reserve_sat": "0", "remote_chan_reserve_sat": "0", "initiator": "INITIATOR_LOCAL", "commitment_type": "ANCHORS", "num_forwarding_packages": "0", "chan_status_flags": "", "private": false }, "closing_txid": "70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518", "limbo_balance": "3633488", "maturity_height": 0, "blocks_til_maturity": 0, "recovered_balance": "330", "pending_htlcs": [ ], "anchor": "RECOVERED" } ], "waiting_close_channels": [ ] }
The channel was opened about three weeks ago and closed (https://mempool.space/tx/70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518) ~2 weeks ago; is it one of the ones you were talking about?
Yes, that's the one with the most sats.
Did you try restarting your node and rescanning the wallet?
Yes, two times.
Also, what's the output of lncli wallet pendingsweeps (I guess nothing, because it thinks it is still opening)?
admin@192.168.1.190:~ ₿ lncli wallet pendingsweeps { "pending_sweeps": [] }
Is there a chance that I've lost my sats?
Your sats are safe; just keep calm for now, they will be easily recoverable.
Could you say whether you switched lnd versions when the hardware failure occurred? Which version were/are you running?
The main problem I see is that your funds are timelocked (CSV-locked), but the pendingchannels command does not recognize this and reports:
"maturity_height": 0,
"blocks_til_maturity": 0,
That's not correct; it should be a number, either negative if the timelock has expired or positive otherwise.
Have you tried rescanning the wallet, as positiveblue pointed out?
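For reference, one way to sanity-check the timelock by hand, assuming a local bitcoind with txindex=1 and jq installed (the txid is the force-close from above):

# How many blocks have passed since the force-close confirmed:
bitcoin-cli getrawtransaction 70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518 true | jq '.confirmations'

The to_self output only becomes sweepable once the confirmations exceed the channel's CSV delay (negotiated at channel open, typically a few hundred up to ~2016 blocks).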
Is there a chance that I've lost my sats?
Your sats are safe; just keep calm for now, they will be easily recoverable.
Thanks, I'm calm about it! Yes, I noticed that; I don't think the channels were opened properly before I used the force close command.
Could you say whether you switched lnd versions when the hardware failure occurred? Which version were/are you running?
I might have updated from 16 to 16.2 since the problem occurred.
The main problem I see is that your funds are timelocked (CSV-locked), but the pendingchannels command does not recognize this and reports: "maturity_height": 0, "blocks_til_maturity": 0,
That's not correct; it should be a number, either negative if the timelock has expired or positive otherwise.
It has shown 0 blocks from day one; I never understood why. I've waited over 2000 blocks and still nothing.
Have you tried rescanning the wallet, as positiveblue pointed out?
Yes, two times. I'll try one more time now.
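As a side note, one way to force a full wallet rescan on a reasonably recent lnd (the flag name is worth verifying against your version's lnd --help):

# Stop lnd, then start it once with:
lnd --reset-wallet-transactions
# (or set reset-wallet-transactions=true in lnd.conf for one restart)

After unlocking, lnd drops the wallet's transaction history and rescans from the wallet birthday; remove the flag again after one successful run.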
Could you increase the log level of the sweeper (SWPR) to debug and grep for the output you are missing, 70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518:1, or the funding tx?
Easiest way to recover funds would be chantools:
chantools sweeptimelockmanual
but make sure you understand what you are doing before you run any commands; sweeping funds while a node is still running is always a critical operation, meaning you must be sure you only sweep the correct channel, etc.
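A hedged sketch of what such an invocation might look like (flag names should be checked against chantools sweeptimelockmanual --help for your version; the address and key values are placeholders):

chantools sweeptimelockmanual \
  --sweepaddr bc1q..... \
  --timelockaddr bc1q..... \
  --remoterevbasepoint 03..... \
  --feerate 20 \
  --publish

Running it without --publish first lets you inspect the transaction it would build before anything hits the chain.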
Could you increase the log level of the sweeper (SWPR) to debug and grep for the output you are missing, 70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518:1, or the funding tx?
Ok, how do I do that?
Easiest way to recover funds would be chantools:
chantools sweeptimelockmanual
but make sure you understand what you are doing before you run any commands; sweeping funds while a node is still running is always a critical operation, meaning you must be sure you only sweep the correct channel, etc.
Ok, I cannot install chantools with the latest lnd in RaspiBlitz. I'll try to spin up a new node with a stable older lnd version first. I don't have any other channels atm; as long as I can somehow sweep the addresses, I'll be happy.
Ok, how do I do that?
lncli debuglevel --level SWPR=debug
Also, in general, check the logs for the mentioned channel points and see what's happening there.
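For example (the RaspiBlitz log path; adjust to your setup):

grep -E '70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518|3983c6566b58de69a8f2d30d5f4576db8fec72c9f8553de05548ea1e0c7bf779' /mnt/hdd/lnd/logs/bitcoin/mainnet/lnd.log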
When you say your hardware failed, were you able to start your node again with the normal ~/.lnd folder, or did you use the channel.backup file to restore your node?
No, I had to use the channel.backup.
Ahh ok, now it all makes sense. Normally the channel.backup would have also led to correct maturity numbers, but maybe you ran into an edge case here. So just switch off your node, then use chantools to recover the funds (you might be waiting until fees come down, though).
You will have to use chantools sweepremoteclosed.
Ahh ok, now it all makes sense. Normally the channel.backup would have also led to correct maturity numbers, but maybe you ran into an edge case here. So just switch off your node, then use chantools to recover the funds (you might be waiting until fees come down, though). You will have to use
chantools sweepremoteclosed
Ok, I tried this: --sweepaddr did not work (missing output file).
Do I need all of them?
chantools sweepremoteclosed \
  --recoverywindow 300 \
  --feerate 20 \
  --sweepaddr bc1q..... \
  --publish
Ok, I tried this: --sweepaddr did not work (missing output file).
Do I need all of them?
chantools sweepremoteclosed --recoverywindow 300 --feerate 20 --sweepaddr bc1q..... --publish
Please read the help message thoroughly, and don't run commands you don't fully understand.
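Every chantools subcommand prints its full flag list with --help, e.g.:

chantools sweepremoteclosed --help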
After sweepremoteclosed: found 0 sweep targets with total value of 0 satoshis, which is below the dust limit of 600.
Hmm, I think we need more logs to understand the situation your node is in; can you provide those?
No problem! What log do you want?
In your lnd.conf set:
#Debug Levels
debuglevel=info,SWPR=debug,LNWL=debug
and restart your node, then let it run for 3-5 minutes and provide the complete log, maybe via Slack => lnd.
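On a RaspiBlitz that would look roughly like this (the systemd service name is an assumption; adjust if yours differs):

sudo systemctl restart lnd
sudo tail -f /mnt/hdd/lnd/logs/bitcoin/mainnet/lnd.log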
Will follow the /mnt/hdd/lnd/logs/bitcoin/mainnet/lnd.log running 'sudo tail -n 30 -f /mnt/hdd/lnd/logs/bitcoin/mainnet/lnd.log'
00000020 94 cb d9 78 69 02 20 13 5a 14 d0 0e 83 76 aa 04 |...xi. .Z....v..|
00000030 4c fe 9e a4 8d 43 41 a8 26 e0 08 d9 12 3e 92 ee |L....CA.&....>..|
00000040 f1 ab 11 64 e3 28 4a 01 |...d.(J.|
},
([]uint8) (len=40 cap=40) {
00000000 21 02 30 77 52 97 ee f2 1c 2d 31 38 9c c5 e3 a9 |!.0wR....-18....|
00000010 b9 f4 97 6b 58 a7 3e 88 ee dd 71 7f e6 94 68 4a |...kX.>...q...hJ|
00000020 84 9f ac 73 64 60 b2 68 |...sd`.h|
}
},
Sequence: (uint32) 0
})
},
TxOut: ([]wire.TxOut) (len=1 cap=15) {
(wire.TxOut)(0x4002db0080)({
Value: (int64) 465,
PkScript: ([]uint8) (len=34 cap=500) {
00000000 51 20 99 7f 8f ff a6 a3 80 0c c5 97 2c b5 88 ff |Q ..........,...|
00000010 68 dd ee c8 83 06 1e 03 a0 25 94 4b 6c 2c 6f ea |h........%.Kl,o.|
00000020 ca 5e |.^|
}
})
},
LockTime: (uint32) 788674
})
2023-05-07 20:36:32.642 [DBG] SWPR: Rescheduling input 70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518:0 after 2 attempts at height 788676 (delta 2)
2023-05-07 20:36:32.643 [DBG] SWPR: Rescheduling input 3983c6566b58de69a8f2d30d5f4576db8fec72c9f8553de05548ea1e0c7bf779:0 after 2 attempts at height 788676 (delta 2)
2023-05-07 20:37:11.177 [INF] DISC: Broadcasting 170 new announcements in 17 sub batches
2023-05-07 20:37:20.576 [INF] CRTR: Processed channels=0 updates=98 nodes=6 in last 59.999853712s
2023-05-07 20:38:20.575 [INF] CRTR: Processed channels=0 updates=89 nodes=13 in last 59.999575093s
2023-05-07 20:38:41.177 [INF] DISC: Broadcasting 159 new announcements in 16 sub batches
2023-05-07 20:39:20.576 [INF] CRTR: Processed channels=0 updates=115 nodes=0 in last 1m0.000404082s
2023-05-07 20:40:07.951 [DBG] LNWL: Filtering block 788675 (00000000000000000002629e195518eb93bb89c3e7a6e17464d934757886aaf9) with 5009 transactions
2023-05-07 20:40:07.951 [DBG] LNWL: Filtering block 788675 (00000000000000000002629e195518eb93bb89c3e7a6e17464d934757886aaf9) with 5009 transactions
2023-05-07 20:40:07.952 [DBG] LNWL: Filtering block 788675 (00000000000000000002629e195518eb93bb89c3e7a6e17464d934757886aaf9) with 5009 transactions
2023-05-07 20:40:08.460 [INF] CRTR: Pruning channel graph using block 00000000000000000002629e195518eb93bb89c3e7a6e17464d934757886aaf9 (height=788675)
2023-05-07 20:40:08.632 [INF] CRTR: Block 00000000000000000002629e195518eb93bb89c3e7a6e17464d934757886aaf9 (height=788675) closed 0 channels
2023-05-07 20:40:09.083 [INF] NTFN: New block: height=788675, sha=00000000000000000002629e195518eb93bb89c3e7a6e17464d934757886aaf9
2023-05-07 20:40:09.084 [DBG] SWPR: New block: height=788675, sha=00000000000000000002629e195518eb93bb89c3e7a6e17464d934757886aaf9
2023-05-07 20:40:09.084 [DBG] SWPR: Sweep candidates at height=788675: total_num_pending=0, total_num_new=0
2023-05-07 20:40:09.084 [INF] SWPR: Sweep candidates at height=788675 with fee_rate=253 sat/kw, yield 0 distinct txns
2023-05-07 20:40:09.084 [INF] UTXN: Attempting to graduate height=788675: num_kids=0, num_babies=0
2023-05-07 20:40:11.178 [INF] DISC: Broadcasting 168 new announcements in 17 sub batches
2023-05-07 20:40:20.576 [INF] CRTR: Processed channels=0 updates=102 nodes=11 in last 59.999527878s
I need way more, basically from restart to 3-5 minutes into the process.
Will follow the /mnt/hdd/lnd/logs/bitcoin/mainnet/lnd.log running 'sudo tail -n 30 -f /mnt/hdd/lnd/logs/bitcoin/mainnet/lnd.log'
2023-05-08 07:19:07.981 [INF] RPCS: Stopping NeutrinoKitRPC Sub-RPC Server
2023-05-08 07:19:07.981 [INF] RPCS: Stopping VersionRPC Sub-RPC Server
2023-05-08 07:19:07.981 [INF] RPCS: Stopping WatchtowerRPC Sub-RPC Server
2023-05-08 07:19:07.981 [INF] RPCS: Stopping WatchtowerClientRPC Sub-RPC Server
2023-05-08 07:19:07.981 [INF] RPCS: Stopping RouterRPC Sub-RPC Server
2023-05-08 07:19:07.981 [INF] RPCS: Stopping AutopilotRPC Sub-RPC Server
2023-05-08 07:19:07.981 [INF] RPCS: Stopping ChainRPC Sub-RPC Server
2023-05-08 07:19:07.993 [INF] LTND: Shutdown complete
2023-05-08 07:19:08.553 [INF] LTND: Version: 0.15.4-beta commit=v0.15.4-beta, build=production, logging=default, debuglevel=info,SWPR=debug,LNWL=debug
2023-05-08 07:19:08.554 [INF] LTND: Active chain: Bitcoin (network=mainnet)
2023-05-08 07:19:08.558 [INF] RPCS: RPC server listening on 0.0.0.0:10009
2023-05-08 07:19:08.576 [INF] RPCS: gRPC proxy started at 0.0.0.0:8080
2023-05-08 07:19:08.577 [INF] LTND: Opening the main database, this might take a few minutes...
2023-05-08 07:19:08.577 [INF] LTND: Opening bbolt database, sync_freelist=true, auto_compact=true
2023-05-08 07:19:08.577 [INF] CHDB: Not compacting database file at /home/bitcoin/.lnd/data/graph/mainnet/channel.db, it was last compacted at 2023-05-07 11:35:03.859781935 +0100 BST (19h44m4s ago), min age is set to 672h0m0s
2023-05-08 07:19:08.579 [INF] CHDB: Not compacting database file at /home/bitcoin/.lnd/data/chain/bitcoin/mainnet/macaroons.db, it was last compacted at 2023-05-07 11:35:03.947138579 +0100 BST (19h44m4s ago), min age is set to 672h0m0s
2023-05-08 07:19:08.579 [INF] CHDB: Not compacting database file at /home/bitcoin/.lnd/data/graph/mainnet/sphinxreplay.db, it was last compacted at 2023-05-07 11:35:03.957977315 +0100 BST (19h44m4s ago), min age is set to 672h0m0s
2023-05-08 07:19:08.580 [INF] LTND: Creating local graph and channel state DB instances
2023-05-08 07:19:09.675 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: waiting to start, RPC services not available
2023-05-08 07:19:09.986 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: waiting to start, RPC services not available
2023-05-08 07:19:09.995 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: waiting to start, RPC services not available
2023-05-08 07:19:09.996 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: waiting to start, RPC services not available
2023-05-08 07:19:11.224 [INF] CHDB: Checking for schema update: latest_version=29, db_version=29
2023-05-08 07:19:11.225 [INF] CHDB: Checking for optional update: prune_revocation_log=false, db_version=empty
2023-05-08 07:19:11.225 [INF] LTND: Database(s) now open (time_to_open=2.647489129s)!
2023-05-08 07:19:11.225 [INF] LTND: Systemd was notified about our readiness
2023-05-08 07:19:11.225 [INF] LTND: Waiting for wallet encryption password. Use lncli create to create a wallet, lncli unlock to unlock an existing wallet, or lncli changepassword to change the password of an existing wallet and unlock it.
2023-05-08 07:19:13.839 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:20.902 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:29.257 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:29.984 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:29.986 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:29.987 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:37.335 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:44.574 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:50.024 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:50.026 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:50.028 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:52.171 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:19:59.182 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:06.479 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:10.000 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:10.001 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:10.002 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:14.737 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:21.759 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:28.913 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:29.990 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:29.992 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:29.995 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:37.297 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:45.188 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:49.987 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:49.989 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:49.992 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:20:53.377 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:00.369 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:07.853 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:09.992 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:09.993 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:09.994 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:16.563 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:24.642 [ERR] RPCS: [/lnrpc.Lightning/GetInfo]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:29.993 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:29.995 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:29.996 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: wallet locked, unlock it to enable full RPC access
2023-05-08 07:21:30.674 [INF] LNWL: Opened wallet
2023-05-08 07:21:31.058 [INF] CHRE: Primary chain is set to: bitcoin
2023-05-08 07:21:31.104 [INF] CHRE: Initializing bitcoind backed fee estimator in CONSERVATIVE mode
2023-05-08 07:21:31.105 [INF] LNWL: Started listening for bitcoind block notifications via ZMQ on 127.0.0.1:28332
2023-05-08 07:21:31.105 [INF] LNWL: Started listening for bitcoind transaction notifications via ZMQ on 127.0.0.1:28333
2023-05-08 07:21:33.509 [INF] LNWL: The wallet has been unlocked without a time limit
2023-05-08 07:21:33.616 [DBG] LNWL: Birthday block has already been verified: height=784239, hash=0000000000000000000339dfbb27584dc64c013cd912e9895b2c1d2999073cb3
2023-05-08 07:21:33.617 [DBG] LNWL: Waiting for chain backend to sync to tip
2023-05-08 07:21:33.620 [INF] CHRE: LightningWallet opened
2023-05-08 07:21:33.679 [INF] HSWC: Cleaning circuits from disk for closed channels
2023-05-08 07:21:33.683 [INF] HSWC: Finished cleaning: num_closed_channel=6, num_circuits=0, num_keystone=0
2023-05-08 07:21:33.684 [INF] HSWC: Restoring in-memory circuit state from disk
2023-05-08 07:21:33.685 [INF] HSWC: Payment circuits loaded: num_pending=0, num_open=0
2023-05-08 07:21:33.705 [INF] LTND: Channel backup proxy channel notifier starting
2023-05-08 07:21:33.706 [INF] ATPL: Instantiating autopilot with active=false, max_channels=5, allocation=0.600000, min_chan_size=20000, max_chan_size=16777215, private=false, min_confs=1, conf_target=3
2023-05-08 07:21:33.707 [INF] LTND: Systemd was notified about our readiness
2023-05-08 07:21:33.712 [INF] LTND: Waiting for chain backend to finish sync, start_height=788756
2023-05-08 07:21:34.680 [DBG] LNWL: Chain backend synced to tip!
2023-05-08 07:21:34.752 [INF] LNWL: Started rescan from block 000000000000000000031dfdca766c998f8544b55d65aafb259cd6a2c6b5a962 (height 788756) for 39 addresses
2023-05-08 07:21:34.761 [INF] LNWL: Catching up block hashes to height 788756, this might take a while
2023-05-08 07:21:34.763 [INF] LNWL: Done catching up block hashes
2023-05-08 07:21:34.764 [INF] LNWL: Finished rescan for 39 addresses (synced to block 000000000000000000031dfdca766c998f8544b55d65aafb259cd6a2c6b5a962, height 788756)
2023-05-08 07:21:35.728 [INF] LTND: Chain backend is fully synced (endheight=788756)!
2023-05-08 07:21:35.728 [WRN] HLCK: check: disk space configured with 0 attempts, skipping it
2023-05-08 07:21:35.728 [WRN] HLCK: check: tls configured with 0 attempts, skipping it
2023-05-08 07:21:35.729 [INF] LNWL: SigPool starting
2023-05-08 07:21:35.743 [INF] CHNF: ChannelNotifier starting
2023-05-08 07:21:35.743 [INF] PRNF: PeerNotifier starting
2023-05-08 07:21:35.744 [INF] HSWC: HtlcNotifier starting
2023-05-08 07:21:35.744 [INF] SWPR: Sweeper starting
2023-05-08 07:21:35.744 [DBG] SWPR: Publishing last tx 6041a74f2c4a474038732e952d1c8985b78b12f6d5dadd237a43ddc975012008
2023-05-08 07:21:35.745 [INF] LNWL: Inserting unconfirmed transaction 6041a74f2c4a474038732e952d1c8985b78b12f6d5dadd237a43ddc975012008
2023-05-08 07:21:35.746 [DBG] LNWL: Marked address bc1p4ank694a0m4g0lv4h2wrtjcfkrddr39ugw5zdmczxyynzz6lkrhsnmd2zr used
2023-05-08 07:21:35.850 [INF] LNWL: Removed invalid transaction: (wire.MsgTx)(0x4001500100)({
Version: (int32) 2,
TxIn: ([]wire.TxIn) (len=2 cap=2) {
(*wire.TxIn)(0x40001283c0)({
PreviousOutPoint: (wire.OutPoint) 70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518:0,
SignatureScript: ([]uint8) {
},
Witness: (wire.TxWitness) (len=2 cap=2) {
([]uint8) (len=72 cap=72) {
00000000 30 45 02 21 00 ba ad a7 64 9f 0f b0 6a 15 0b 15 |0E.!....d...j...|
00000010 aa 75 9b 34 d7 19 91 eb 02 e5 3e ce 71 ce 4d d7 |.u.4......>.q.M.|
00000020 1f 86 03 c4 c4 02 20 53 3f 49 e9 d3 79 c9 8a 6c |...... S?I..y..l|
00000030 cc 52 f2 69 86 2b ff 4b 09 bf 5f 40 8b 52 3a 49 |.R.i.+.K..@.R:I|
00000040 ce 19 91 7d 98 0b e2 01 |...}....|
},
([]uint8) (len=40 cap=40) {
00000000 21 02 3a 97 e1 37 ef bc 55 aa f0 f7 54 f5 f7 20 |!.:..7..U...T.. |
00000010 4a 0f 7b a8 a9 c0 b5 f9 82 27 23 14 15 36 a1 54 |J.{......'#..6.T|
00000020 a6 1f ac 73 64 60 b2 68 |...sd`.h|
}
},
Sequence: (uint32) 0
}),
(wire.TxIn)(0x4000128420)({
PreviousOutPoint: (wire.OutPoint) c56c1bc37980264e13d53aead959836720f6b6fe7fe245b9377c8e02917af6a0:0,
SignatureScript: ([]uint8) {
},
Witness: (wire.TxWitness) (len=1 cap=1) {
([]uint8) (len=64 cap=64) {
00000000 5b 40 46 a8 c0 26 07 18 7f b9 70 24 6b d9 ca a5 |[@F..&....p$k...|
00000010 33 49 98 c6 d8 cf 45 a1 0a fd 39 a3 d0 5b 22 cf |3I....E...9..[".|
00000020 a7 46 f0 d4 79 2f 65 85 ec 51 2f e5 86 31 07 75 |.F..y/e..Q/..1.u|
00000030 c5 27 43 d6 a5 e2 5a e6 7d 94 be 36 76 7d 49 72 |.'C...Z.}..6v}Ir|
}
},
Sequence: (uint32) 0
})
},
TxOut: ([]wire.TxOut) (len=1 cap=1) {
(*wire.TxOut)(0x400004d6e0)({
Value: (int64) 612,
PkScript: ([]uint8) (len=34 cap=34) {
00000000 51 20 af 67 6d 16 bd 7e ea 87 fd 95 ba 9c 35 cb |Q .gm..~......5.|
00000010 09 b0 da d1 c4 bc 43 a8 26 ef 02 31 09 31 0b 5f |......C.&..1.1._|
00000020 b0 ef |..|
}
})
},
LockTime: (uint32) 788755
})
2023-05-08 07:21:35.851 [INF] UTXN: UTXO nursery starting
2023-05-08 07:21:35.851 [INF] NTFN: New block epoch subscription
2023-05-08 07:21:35.856 [INF] NTFN: New block epoch subscription
2023-05-08 07:21:35.859 [INF] BRAR: Breach arbiter starting
2023-05-08 07:21:35.863 [INF] FNDG: Funding manager starting
2023-05-08 07:21:35.863 [INF] BRAR: Starting contract observer, watching for breaches.
2023-05-08 07:21:35.864 [INF] HSWC: HTLC Switch starting
2023-05-08 07:21:35.864 [INF] NTFN: New block epoch subscription
2023-05-08 07:21:35.864 [INF] CNCT: ChainArbitrator starting
2023-05-08 07:21:35.867 [INF] CNCT: Creating ChannelArbitrators for 2 closing channels
2023-05-08 07:21:35.871 [INF] CNCT: ChannelArbitrator(deded54c5e2227a23325de94e4031cf5f86a0784d9b2322a9830c200d69dc685:0): starting state=StateWaitingFullResolution, trigger=chainTrigger, triggerHeight=786092
2023-05-08 07:21:35.871 [INF] CNCT: ChannelArbitrator(deded54c5e2227a23325de94e4031cf5f86a0784d9b2322a9830c200d69dc685:0): still awaiting contract resolution
2023-05-08 07:21:35.872 [INF] CNCT: ChannelArbitrator(deded54c5e2227a23325de94e4031cf5f86a0784d9b2322a9830c200d69dc685:0): relaunching 1 contract resolvers
2023-05-08 07:21:35.874 [INF] NTFN: New confirmation subscription: conf_id=1, txid=3983c6566b58de69a8f2d30d5f4576db8fec72c9f8553de05548ea1e0c7bf779, num_confs=1 height_hint=788756
2023-05-08 07:21:35.874 [INF] SWPR: Sweep request received: out_point=3983c6566b58de69a8f2d30d5f4576db8fec72c9f8553de05548ea1e0c7bf779:0, witness_type=CommitmentAnchor, relative_time_lock=0, absolute_time_lock=0, amount=0.0000033 BTC, params=(fee=253 sat/kw, force=false, exclusive_group=
2023-05-08 07:22:06.008 [DBG] SWPR: Rescheduling input 70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518:0 after 1 attempts at height 788757 (delta 1)
2023-05-08 07:22:06.010 [DBG] SWPR: Rescheduling input 3983c6566b58de69a8f2d30d5f4576db8fec72c9f8553de05548ea1e0c7bf779:0 after 1 attempts at height 788757 (delta 1)
2023-05-08 07:22:42.403 [INF] CRTR: Processed channels=4 updates=7 nodes=2 in last 1m4.360594432s
2023-05-08 07:23:05.891 [INF] DISC: Broadcasting 3 new announcements in 1 sub batches
2023-05-08 07:23:42.415 [INF] CRTR: Processed channels=0 updates=4 nodes=0 in last 1m0.012260924s
2023-05-08 07:24:35.895 [INF] DISC: Broadcasting 19 new announcements in 2 sub batches
2023-05-08 07:24:42.402 [INF] CRTR: Processed channels=0 updates=26 nodes=0 in last 59.986913211s
2023-05-08 07:25:42.403 [INF] CRTR: Processed channels=0 updates=62 nodes=0 in last 1m0.000314047s
2023-05-08 07:26:05.884 [INF] DISC: Broadcasting 101 new announcements in 11 sub batches
2023-05-08 07:26:42.403 [INF] CRTR: Processed channels=0 updates=44 nodes=2 in last 1m0.000100649s
2023-05-08 07:27:35.884 [INF] DISC: Broadcasting 101 new announcements in 11 sub batches
2023-05-08 07:27:42.403 [INF] CRTR: Processed channels=0 updates=95 nodes=1 in last 59.99981524s
2023-05-08 07:28:42.403 [INF] CRTR: Processed channels=0 updates=89 nodes=1 in last 59.99942315s
2023-05-08 07:29:05.884 [INF] DISC: Broadcasting 149 new announcements in 15 sub batches
Interesting, no sign of the missing outputs in your log file. I would suggest restoring your channel.backup file once again, either using your running lnd instance or a completely new instance; something went wrong during the recovery, IMO.
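For the running-instance route, lncli has a dedicated command for this; a sketch (the backup path is illustrative, point it at your actual channel.backup):

lncli restorechanbackup --multi_file /mnt/hdd/lnd/data/chain/bitcoin/mainnet/channel.backup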
Interesting, no sign of the missing outputs in your log file. I would suggest restoring your channel.backup file once again, either using your running lnd instance or a completely new instance; something went wrong during the recovery, IMO.
Did a seed and channel.backup recovery: signature mismatch?
log says:
2023-05-08 10:23:03.710 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: verification failed: signature mismatch after caveat verification
2023-05-08 10:23:03.711 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: verification failed: signature mismatch after caveat verification
2023-05-08 10:23:03.713 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: verification failed: signature mismatch after caveat verification
2023-05-08 10:23:23.703 [ERR] RPCS: [/lnrpc.Lightning/ListInvoices]: verification failed: signature mismatch after caveat verification
2023-05-08 10:23:23.705 [ERR] RPCS: [/lnrpc.Lightning/GetTransactions]: verification failed: signature mismatch after caveat verification
2023-05-08 10:23:23.707 [ERR] RPCS: [/lnrpc.Lightning/ListPayments]: verification failed: signature mismatch after caveat verification
2023-05-08 10:23:32.182 [DBG] RPCS: [getrecoveryinfo] is recovery mode=true, progress=0.9993378945045244
2023-05-08 10:23:33.883 [DBG] RPCS: [walletbalance] Total balance=0.00020465 BTC (confirmed=0.00020465 BTC, unconfirmed=0 BTC)
2023-05-08 10:23:34.176 [DBG] RPCS: [channelbalance] local_balance=0 mSAT remote_balance=0 mSAT unsettled_local_balance=0 mSAT unsettled_remote_balance=0 mSAT pending_open_local_balance=0 mSAT pending_open_remote_balance=0 mSAT
You're using the wrong macaroon. Maybe from a previous installation?
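A quick way to test that theory is to point lncli explicitly at the macaroon the current installation generated (default mainnet path shown; adjust for your setup):

lncli --macaroonpath ~/.lnd/data/chain/bitcoin/mainnet/admin.macaroon getinfo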
Ok, I'll try a fresh install and recovery.
Did a check with lncli pendingchannels:
"total_limbo_balance": "0", "pending_open_channels": [ ], "pending_closing_channels": [ ], "pending_force_closing_channels": [ ], "waiting_close_channels": [ { "channel": { "remote_node_pub": "0217890e3aad8d35bc054f43acc00084b25229ecff0ab68debd82883ad65ee8266", "channel_point": "deded54c5e2227a23325de94e4031cf5f86a0784d9b2322a9830c200d69dc685:0", "capacity": "468040", "local_balance": "0", "remote_balance": "0", "local_chan_reserve_sat": "0", "remote_chan_reserve_sat": "0", "initiator": "INITIATOR_LOCAL", "commitment_type": "ANCHORS", "num_forwarding_packages": "0", "chan_status_flags": "ChanStatusRestored", "private": true }, "limbo_balance": "0", "commitments": { "local_txid": "", "remote_txid": "", "remote_pending_txid": "", "local_commit_fee_sat": "0", "remote_commit_fee_sat": "0", "remote_pending_commit_fee_sat": "0" }, "closing_txid": "" }, { "channel": { "remote_node_pub": "0290cc884704073b2b633f69f852e8ca2a37660bb359a1e861f2b48760c298ac53", "channel_point": "629cac110c56c635c9a45f3b8cd21fef6ec780654933680293540c0385c2d9a2:0", "capacity": "3636958", "local_balance": "0", "remote_balance": "0", "local_chan_reserve_sat": "0", "remote_chan_reserve_sat": "0", "initiator": "INITIATOR_LOCAL", "commitment_type": "ANCHORS", "num_forwarding_packages": "0", "chan_status_flags": "ChanStatusRestored", "private": true }, "limbo_balance": "0", "commitments": { "local_txid": "", "remote_txid": "", "remote_pending_txid": "", "local_commit_fee_sat": "0", "remote_commit_fee_sat": "0", "remote_pending_commit_fee_sat": "0" }, "closing_txid": "" }, { "channel": { "remote_node_pub": "03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f", "channel_point": "f14cdb59387c153a0472db9e9b174d9a9a84df8314af289872e326329217aa8b:0", "capacity": "1500000", "local_balance": "0", "remote_balance": "0", "local_chan_reserve_sat": "0", "remote_chan_reserve_sat": "0", "initiator": "INITIATOR_LOCAL", "commitment_type": "ANCHORS", "num_forwarding_packages": "0", "chan_status_flags": "ChanStatusRestored", "private": true }, "limbo_balance": "0", "commitments": { "local_txid": "", "remote_txid": "", "remote_pending_txid": "", "local_commit_fee_sat": "0", "remote_commit_fee_sat": "0", "remote_pending_commit_fee_sat": "0" }, "closing_txid": "" } ] }
Looks good. Now you just need to let it run, re-scan the chain, and then sweep the outputs. With the current mempool going crazy this might literally take days or weeks, so patience is key.
Ok, the rescan is done. I have a concern about one of the channels: I think it was already closed earlier. But the first two are the ones that didn't recover. Should I go ahead with chantools, or debug first?
Did your channels all move from waiting_close_channels => pending_force_closing_channels?
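A quick way to check, assuming jq is installed:

lncli pendingchannels | jq '{waiting: (.waiting_close_channels | length), force_closing: (.pending_force_closing_channels | length)}'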
lncli pendingchannels { "total_limbo_balance": "4098718", "pending_open_channels": [ ], "pending_closing_channels": [ ], "pending_force_closing_channels": [ { "channel": { "remote_node_pub": "0217890e3aad8d35bc054f43acc00084b25229ecff0ab68debd82883ad65ee8266", "channel_point": "deded54c5e2227a23325de94e4031cf5f86a0784d9b2322a9830c200d69dc685:0", "capacity": "468040", "local_balance": "0", "remote_balance": "0", "local_chan_reserve_sat": "0", "remote_chan_reserve_sat": "0", "initiator": "INITIATOR_LOCAL", "commitment_type": "ANCHORS", "num_forwarding_packages": "0", "chan_status_flags": "", "private": true }, "closing_txid": "3983c6566b58de69a8f2d30d5f4576db8fec72c9f8553de05548ea1e0c7bf779", "limbo_balance": "464900", "maturity_height": 0, "blocks_til_maturity": 0, "recovered_balance": "0", "pending_htlcs": [ ], "anchor": "LIMBO" }, { "channel": { "remote_node_pub": "0290cc884704073b2b633f69f852e8ca2a37660bb359a1e861f2b48760c298ac53", "channel_point": "629cac110c56c635c9a45f3b8cd21fef6ec780654933680293540c0385c2d9a2:0", "capacity": "3636958", "local_balance": "0", "remote_balance": "0", "local_chan_reserve_sat": "0", "remote_chan_reserve_sat": "0", "initiator": "INITIATOR_LOCAL", "commitment_type": "ANCHORS", "num_forwarding_packages": "0", "chan_status_flags": "", "private": true }, "closing_txid": "70adc5ee10e7659fd1f4e3fc3fa7a90bd81d2a7221c972d82e3f180658b2d518", "limbo_balance": "3633818", "maturity_height": 0, "blocks_til_maturity": 0, "recovered_balance": "0", "pending_htlcs": [ ], "anchor": "LIMBO" } ], "waiting_close_channels": [ ] }
Looks good, your funds were swept!
I did see that! Thanks for the help! So running a recovery with the seed and channel.backup in the RaspiBlitz recovery menu was the solution! Now I just have to wait for the mempool to clear up! :) Thank you so much for your time! :zap: