ElementsProject / lightning

Core Lightning — Lightning Network implementation focusing on spec compliance and performance

Unable to network-connect peer after peer force-closed channel but tx unconfirmed #6255

Closed: G8XSU closed this issue 1 year ago

G8XSU commented 1 year ago

Issue and Steps to Reproduce

Peer is an LDK node. MyNode is a CLN node.

MyNode tried updating the feerate; LDK refused with a "feerate much too low" error and force-closed the channel. (The close tx isn't confirmed yet.)

Error: "Peer's feerate much too low. Actual: 3258. Our expected lower limit: 3537 (- 250)"

Now if I try to just network-connect to the peer using lightning-cli connect, it gets stuck and my node is unable to connect.

If the peer tries to connect to me, my node replies with the peer's original error msg: "channeld: received ERROR error channel {channelId}: Peer's feerate much too low. Actual: 3258. Our expected lower limit: 3537 (- 250)"

And the peer is unable to connect.

Expected behaviour: both nodes should be able to connect; channel opening would be the next step.
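For reference, the stuck connect attempt corresponds to a call like this, shown as a minimal sketch using pyln-client (Core Lightning's Python RPC wrapper); the RPC socket path and the peer's host/port are placeholders:

```python
# Minimal sketch of the connect attempt via pyln-client; the socket path
# and the peer's host/port below are placeholders, not real values.
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")  # placeholder socket path

# Equivalent to: lightning-cli connect <id>@<host>:<port>
# In the state reported here, this call hangs instead of returning the id.
peer_id = "03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8"
print(rpc.connect(peer_id, host="peer.example.com", port=9735))
```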

TheBlueMatt commented 1 year ago

Errr, sorry, I didn't actually ever try to connect to the CLN node; the CLN node sent that error when it tried to connect to me, just repeating my (now very old) error message back to me.

vincenzopalazzo commented 1 year ago

I did not understand the workflow here, sorry!

So the first time it's cln --- connect --> ldk, and you fund a channel;

at some point ldk force-closes the channel with cln, because there is an error message Peer's feerate much too low. Actual: 3258. Our expected lower limit: 3537 (- 250);

so now you cannot connect cln --- connect --> ldk again.

Do I have the workflow right?

In addition, could you post the Core Lightning log? We should log everything, so if something strange happens, the answer is there.
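For anyone gathering the requested logs, they can also be pulled over RPC. A small sketch with pyln-client, assuming the node runs with log-level=debug and using a placeholder socket path:

```python
# Sketch: fetch debug-level log entries over the node's RPC socket with
# pyln-client; assumes log-level=debug and a placeholder socket path.
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")
log = rpc.getlog(level="debug")        # same as: lightning-cli getlog debug
for entry in log["log"]:
    if entry["type"] != "SKIPPED":     # skipped markers carry no log text
        print(entry.get("time"), entry.get("source"), entry.get("log"))
```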

TheBlueMatt commented 1 year ago

Yes, correct: after a force-close, CLN refuses to connect at all.

G8XSU commented 1 year ago

Yes, that's the correct workflow. Relevant CLN logs below; let me know if you need more:

03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: Connected out, starting crypto
2023-05-11T18:38:05.487Z DEBUG   020b1d32cf2d8b9001f9eb183379417d8d136c86b45bee59e2bf2b83e3fba9aa65-hsmd: Got WIRE_HSMD_ECDH_REQ
2023-05-11T18:38:05.487Z DEBUG   hsmd: Client: Received message 1 from client
2023-05-11T18:38:05.488Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: Connect OUT
2023-05-11T18:38:05.488Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: peer_out WIRE_INIT
2023-05-11T18:38:06.051Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: peer_in WIRE_INIT
2023-05-11T18:38:06.051Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-gossipd: seeker: disabling gossip
2023-05-11T18:38:06.051Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-chan#14: Peer has reconnected, state AWAITING_UNILATERAL: telling connectd to make active
2023-05-11T18:38:06.052Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: Handed peer, entering loop
2023-05-11T18:38:06.053Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: peer_out WIRE_GOSSIP_TIMESTAMP_FILTER
2023-05-11T18:38:06.053Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-lightningd: Telling connectd to send error 0011145bdaf576a9b8aed1442c5f5ee97d84b7b49c5c05a3ef3e53cb9b10d15a696600ba6368616e6e656c643a207265636569766564204552524f52206572726f72206368616e6e656c20313435626461663537366139623861656431343432633566356565393764383462376234396335633035613365663365353363623962313064313561363936363a205065657227732066656572617465206d75636820746f6f206c6f772e2041637475616c3a20333235382e204f7572206578706563746564206c6f776572206c696d69743a203335333720282d2032353029
2023-05-11T18:38:06.057Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: destroy_subd: 1 subds, to_peer conn 0x557e110408, read_to_die = 0
2023-05-11T18:38:06.057Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: peer_out WIRE_ERROR
2023-05-11T18:38:06.057Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: peer_out WIRE_ERROR
2023-05-11T18:38:06.058Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-connectd: peer_conn_closed
2023-05-11T18:38:06.058Z DEBUG   03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-lightningd: peer_disconnect_done
2023-05-11T18:38:06.086Z DEBUG   plugin-funder: Cleaning up inflights for peer id 03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8
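As a side note, the long hex blob in the "Telling connectd to send error" line above is just the raw BOLT #1 error message that CLN re-sends. A small standalone Python sketch (not CLN tooling) decodes it back to the original error text:

```python
# Decode the hex blob from the "Telling connectd to send error" log line.
# Per BOLT #1, a wire `error` message is laid out as:
#   type (u16, 0x0011 = 17) | channel_id (32 bytes) | len (u16) | data (len bytes)
import struct

hex_blob = (
    "0011"                                                              # type: 17 (error)
    "145bdaf576a9b8aed1442c5f5ee97d84b7b49c5c05a3ef3e53cb9b10d15a6966"  # channel_id
    "00ba"                                                              # data length: 186
    # data (ASCII), copied verbatim from the log line above:
    "6368616e6e656c643a207265636569766564204552524f52206572726f7220"
    "6368616e6e656c203134356264616635373661396238616564313434326335"
    "66356565393764383462376234396335633035613365663365353363623962"
    "313064313561363936363a205065657227732066656572617465206d756368"
    "20746f6f206c6f772e2041637475616c3a20333235382e204f757220657870"
    "6563746564206c6f776572206c696d69743a203335333720282d2032353029"
)

raw = bytes.fromhex(hex_blob)
(msg_type,) = struct.unpack(">H", raw[0:2])
channel_id = raw[2:34].hex()
(data_len,) = struct.unpack(">H", raw[34:36])
print(msg_type)                               # 17
print(channel_id)                             # 145bdaf5...d15a6966
print(raw[36:36 + data_len].decode("ascii"))  # "channeld: received ERROR ... (- 250)"
```

Running it prints the same "Peer's feerate much too low" text quoted earlier in the thread, confirming that CLN is replaying the peer's stored error on reconnect.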
vincenzopalazzo commented 1 year ago

2023-05-11T18:38:06.051Z DEBUG 03db10aa09ff04d3568b0621750794063df401e6853c79a21a83e1a3f3b5bfb0c8-chan#14: Peer has reconnected, state AWAITING_UNILATERAL: telling connectd to make active

From this, core lightning sees the reconnect, but maybe we need to handle something better! I will take a look, thanks.

vincenzopalazzo commented 1 year ago

Mh! I forgot to ask what kind of cln version this is; from the error it looks like a very old one.

I found an old issue, https://github.com/ElementsProject/lightning/issues/5255, that looks like what you reported too.

So I wonder if you are using an old version of core lightning?

G8XSU commented 1 year ago

Hi, CLN Version was v0.11.2.

vincenzopalazzo commented 1 year ago

OK, the version matches the one from that bug; now we should see if you are able to see this bug with the current version.

Are you able to upgrade cln to the latest version?

G8XSU commented 1 year ago

I did upgrade, but did not run into a situation where this would be reproduced. AFAIU, at the time of reporting, that version was considered a stable version (but 2 behind the latest stable). Great to hear that this is addressed already. If you can confirm that, then feel free to close this :)

vincenzopalazzo commented 1 year ago

Let's do it this way: we close this, and if the issue reproduces again, feel free to reopen it and ping me.

Sorry this took me so long.