ACINQ / eclair

A scala implementation of the Lightning Network.
Apache License 2.0

Channel Open from LDK to Acinq results in error #2394

Closed TonyGiorgio closed 2 years ago

TonyGiorgio commented 2 years ago

Describe the issue

Have not tested this with a local eclair yet but when opening a channel from an LDK node to the Acinq node, I got this error on the LDK side:

EVENT: Channel feb7300a9210b8087834fc6a0f3f348b3561f56613f05ddb4ff917cd88bed418 closed due to: ProcessingError { err: "Channel Type in accept_channel didn't match the one sent in open_channel." } 

One thing I did try though was using the SCID_ALIAS flag, so I don't know if that had anything to do with it. But chatting with Matt, apparently it's a protocol rule that the channel type should always match when sending an accept_channel? Could it also be because the channel type was not anchor output?
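The protocol rule Matt mentions can be sketched as follows. This is a minimal, hypothetical model (not LDK's actual code): per BOLT 2, the channel_type echoed back in accept_channel must be identical to the one the funder sent in open_channel, and on a mismatch the funder must fail the channel, which is exactly the ProcessingError reported above.

```rust
/// Illustrative stand-in for the channel_type feature-bit vector
/// carried in open_channel / accept_channel.
#[derive(Debug, Clone, PartialEq, Eq)]
struct ChannelType(Vec<u8>);

/// Funder-side check: the fundee must echo the requested channel_type
/// exactly; any difference means the channel open must be aborted.
fn validate_accept_channel(
    sent_in_open: &ChannelType,
    received_in_accept: &ChannelType,
) -> Result<(), String> {
    if sent_in_open != received_in_accept {
        // Mirrors the error string LDK surfaced in the event above.
        Err("Channel Type in accept_channel didn't match the one sent in open_channel.".into())
    } else {
        Ok(())
    }
}

fn main() {
    // Illustrative feature-bit numbers, not real BOLT 9 encodings.
    let requested = ChannelType(vec![12, 46]); // e.g. static_remote_key + scid_alias
    let echoed = ChannelType(vec![12, 20, 22]); // peer answered with a different type
    assert!(validate_accept_channel(&requested, &echoed).is_err());
    assert!(validate_accept_channel(&requested, &requested).is_ok());
    println!("mismatch rejected, exact echo accepted");
}
```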

What behaviour did you expect?

The channel opens.

What was the actual behaviour?

The channel does not open.

How reliably can you reproduce the issue, what are the steps to do so?

Use Sensei to open a channel from it to an Eclair node. I could maybe do more testing locally to try to reproduce it.

What version of Eclair are you using, where did you get it (website, self-compiled, etc)?

The version that Acinq is currently using on Mainnet.

Any extra information that might be useful in the debugging process.

I don't have many more logs on my side, but happy to try to reproduce with a regtest eclair node at some point if it would be useful.

t-bast commented 2 years ago

One thing I did try though was using the SCID_ALIAS flag

Could it also be because the channel type was not anchor output?

That is indeed the issue, eclair currently only supports scid_alias and/or zero_conf with anchor output channels, not with previous commitment types.
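The constraint described here can be modeled like this. This is a hedged sketch of the policy, not eclair's actual Scala code: the scid_alias and zero_conf channel-type bits are only accepted when the commitment format is one of the anchor-outputs variants.

```rust
/// Commitment formats defined across BOLT revisions (names illustrative).
#[derive(Debug, Clone, Copy, PartialEq)]
enum CommitmentFormat {
    Legacy,
    StaticRemoteKey,
    AnchorOutputs,
    AnchorOutputsZeroFeeHtlcTx,
}

/// Policy sketch: scid_alias / zero_conf are only allowed on channels
/// using an anchor-outputs commitment format.
fn channel_type_allowed(format: CommitmentFormat, scid_alias: bool, zero_conf: bool) -> bool {
    let anchors = matches!(
        format,
        CommitmentFormat::AnchorOutputs | CommitmentFormat::AnchorOutputsZeroFeeHtlcTx
    );
    // If neither optional bit is requested, any commitment format is fine;
    // otherwise anchors are required.
    !(scid_alias || zero_conf) || anchors
}

fn main() {
    // The combination LDK requested here: scid_alias without anchors -> rejected.
    assert!(!channel_type_allowed(CommitmentFormat::StaticRemoteKey, true, false));
    // scid_alias + zero_conf on zero-fee anchors -> accepted.
    assert!(channel_type_allowed(CommitmentFormat::AnchorOutputsZeroFeeHtlcTx, true, true));
    println!("policy check ok");
}
```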

Our understanding was that every implementation was close to shipping support for anchor outputs (or had already shipped such support), that's why we thought it was an acceptable trade-off.

TheBlueMatt commented 2 years ago

Two notes:

a) why connect them at all? I don't see how they're related or even touch similar parts of code.

b) That doesn't explain the issue - if you don't accept a channel_open channel_type the response should be an error, not an accept_channel with a different channel_type.

t-bast commented 2 years ago

I don't see how they're related or even touch similar parts of code.

It does require more code, at least in our implementation, and it requires expanding the test matrix. Granted, it's not very complex, but it's also code we don't need if everyone supports anchor outputs, so why have it at all?

if you don't accept a channel_open channel_type the response should be an error, not an accept_channel with a different channel_type

Yes, that should be our behavior. This issue indicates that we may have a bug in this scenario, but looking at our code I don't immediately see why. Can you share the contents of the open_channel message that was sent? Or at least the approximate timestamp when the channel open was attempted, so that I can check our logs?
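The expected fundee-side behavior both parties agree on can be sketched as follows (illustrative names, not either implementation's API): when the requested channel_type is unsupported, the only valid responses are an error or echoing the type back unchanged, never an accept_channel carrying a different channel_type.

```rust
/// Illustrative stand-in for a channel_type value.
#[derive(Debug, Clone, PartialEq, Eq)]
struct ChannelType(String);

/// The two valid responses to an open_channel with an explicit channel_type.
#[derive(Debug, PartialEq)]
enum Response {
    /// Must carry exactly the requested channel_type.
    AcceptChannel { channel_type: ChannelType },
    /// Sent when the fundee does not support the requested type.
    Error { message: String },
}

fn handle_open_channel(requested: ChannelType, supported: &[ChannelType]) -> Response {
    if supported.contains(&requested) {
        // Echo the requested type back verbatim.
        Response::AcceptChannel { channel_type: requested }
    } else {
        // Never substitute a different channel_type; fail the open instead.
        Response::Error { message: format!("unsupported channel_type {:?}", requested.0) }
    }
}

fn main() {
    let supported = vec![ChannelType("anchor_outputs_zero_fee_htlc_tx".into())];
    let resp = handle_open_channel(
        ChannelType("static_remote_key+scid_alias".into()),
        &supported,
    );
    assert!(matches!(resp, Response::Error { .. }));
    println!("unsupported type answered with an error");
}
```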

moneyball commented 2 years ago

What's the status of anchor outputs in various implementations? Eclair and LND done yeah? CLN I think Rusty mentioned in Oakland that they actually have more work to do? LDK doesn't have it yet but Wilmer is actively working on it so Hopefully Soon

Where is the line drawn for implementations? For example, does Electrum's implementation support them? Does Muun's implementation support them?

t-bast commented 2 years ago

Eclair and LND done yeah?

Yes, it has shipped in lnd and eclair a while ago. As for cln, they need to do some work on the fee-bumping utilities, but apart from that they already support the initial version of anchor outputs (without zero-fee htlc txs).

LDK doesn't have it yet but Wilmer is actively working on it so Hopefully Soon

Great news!

Where is the line drawn for implementations? For example, does Electrum's implementation support them? Does Muun's implementation support them?

Muun is simply using lnd, so they do support it (the part between their server and the wallet isn't using lightning at all, so it's not applicable there). Electrum told me they were working on it, but I don't think they have shipped it yet.

TheBlueMatt commented 2 years ago

It does require more code, at least in our implementation, and it requires expanding the test matrix.

Huh, that's surprising. At least on our end (and lnd) the only interaction here is if you add code to reject based on the bit combination 😂.

More generally, even once anchors are broadly shipped, I assume there will be many nodes which don't support it - as y'all noted the requirement that you keep an on-chain balance is quite burdensome, and while lightning is particularly insecure without it, those more willing to partially trust LSPs may opt for no anchors. I think, especially for mobile/0conf/SCID aliases, we'll probably always need to support no-anchors. Of course that doesn't mean eclair needs to support it if you don't have LSP clients that will avoid it, but at least we will.

Can you share the contents of the open_channel message that was sent? Or at least the approximate timestamp when the channel open was attempted, so that I can check our logs?

@TonyGiorgio can you chime in here? The LDK TRACE-level logs should include every message as sent/received.

TonyGiorgio commented 2 years ago

For some reason I don't really see much in terms of logs with the way I'm running things (using Sensei as a library, which uses LDK).

Here's when it tried to connect to Acinq, and I just see a ton of these error messages:

2022-08-23 18:19:10 DEBUG [lightning::ln::peer_handler:961] Finished noise handshake for connection with 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:10 TRACE [lightning::ln::peer_handler:851] Enqueueing message Init { features: [170, 81, 10, 8, 0, 160, 8], remote_network_address: Some(IPv4 { addr: [3, 33, 236, 230], port: 9735 }) } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:10 INFO  [lightning::ln::peer_handler:1096] Received peer Init message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f: DataLossProtect: supported, InitialRoutingSync: not supported, UpfrontShutdownScript: supported, GossipQueries: supported, VariableLengthOnion: required, StaticRemoteKey: supported, PaymentSecret: required, BasicMPP: supported, Wumbo: supported, ShutdownAnySegwit: supported, ChannelType: supported, SCIDPrivacy: not supported, ZeroConf: not supported, unknown flags: supported
2022-08-23 18:19:10 TRACE [lightning::ln::peer_handler:851] Enqueueing message GossipTimestampFilter { chain_hash: 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f, first_timestamp: 1660069150, timestamp_range: 4294967295 } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:14 TRACE [lightning::ln::peer_handler:851] Enqueueing message Ping { ponglen: 0, byteslen: 64 } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:14 TRACE [lightning::ln::peer_handler:1135] Received message Pong(Pong { byteslen: 0 }) from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:25 TRACE [lightning::ln::peer_handler:851] Enqueueing message Ping { ponglen: 0, byteslen: 64 } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:25 TRACE [lightning::ln::peer_handler:1135] Received message Pong(Pong { byteslen: 0 }) from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:36 TRACE [lightning::ln::peer_handler:851] Enqueueing message Ping { ponglen: 0, byteslen: 64 } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:36 TRACE [lightning::ln::peer_handler:1135] Received message Pong(Pong { byteslen: 0 }) from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:47 TRACE [lightning::ln::peer_handler:851] Enqueueing message Ping { ponglen: 0, byteslen: 64 } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:47 TRACE [lightning::ln::peer_handler:1135] Received message Pong(Pong { byteslen: 0 }) from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:47 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: Couldn't find channel for update
2022-08-23 18:19:47 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: Couldn't find channel for update
2022-08-23 18:19:47 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: Couldn't find channel for update
2022-08-23 18:19:47 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: Couldn't find channel for update
2022-08-23 18:19:47 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: Couldn't find channel for update
2022-08-23 18:19:47 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: Couldn't find channel for update
2022-08-23 18:19:47 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: Couldn't find channel for update
2022-08-23 18:19:47 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: Couldn't find channel for update

Filtering out those and the ping/pongs, these are all the logs I see:

2022-08-23 18:19:10 DEBUG [lightning::ln::peer_handler:961] Finished noise handshake for connection with 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:10 TRACE [lightning::ln::peer_handler:851] Enqueueing message Init { features: [170, 81, 10, 8, 0, 160, 8], remote_network_address: Some(IPv4 { addr: [3, 33, 236, 230], port: 9735 }) } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:19:10 INFO  [lightning::ln::peer_handler:1096] Received peer Init message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f: DataLossProtect: supported, InitialRoutingSync: not supported, UpfrontShutdownScript: supported, GossipQueries: supported, VariableLengthOnion: required, StaticRemoteKey: supported, PaymentSecret: required, BasicMPP: supported, Wumbo: supported, ShutdownAnySegwit: supported, ChannelType: supported, SCIDPrivacy: not supported, ZeroConf: not supported, unknown flags: supported
2022-08-23 18:19:10 TRACE [lightning::ln::peer_handler:851] Enqueueing message GossipTimestampFilter { chain_hash: 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f, first_timestamp: 1660069150, timestamp_range: 4294967295 } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:24:48 TRACE [lightning::ln::peer_handler:851] Enqueueing message WarningMessage { channel_id: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], data: "Unreadable/bogus gossip message" } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:26:48 TRACE [lightning::ln::peer_handler:851] Enqueueing message WarningMessage { channel_id: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], data: "Unreadable/bogus gossip message" } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:30:48 TRACE [lightning::ln::peer_handler:851] Enqueueing message WarningMessage { channel_id: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], data: "Unreadable/bogus gossip message" } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:39:48 DEBUG [lightning::ln::peer_handler:1055] Error handling message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f; ignoring: No existing channels for node_announcement
2022-08-23 18:49:48 TRACE [lightning::ln::peer_handler:851] Enqueueing message WarningMessage { channel_id: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], data: "Unreadable/bogus gossip message" } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:53:48 TRACE [lightning::ln::peer_handler:851] Enqueueing message WarningMessage { channel_id: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], data: "Unreadable/bogus gossip message" } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f
2022-08-23 18:54:48 TRACE [lightning::ln::peer_handler:851] Enqueueing message WarningMessage { channel_id: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], data: "Unreadable/bogus gossip message" } to 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f

The channel would have tried opening just after the peer connection at the top, if that timestamp helps.

TheBlueMatt commented 2 years ago

Hmm, all of those are gossip-based messages, you should have something with the OpenChannel/AcceptChannel a ways after that, maybe grep for it?

t-bast commented 2 years ago

I think, especially for mobile/0conf/SCID aliases, we'll probably always need to support no-anchors.

Right, it could make sense to add support for other commitment types than anchor_outputs_zero_fee_htlc_txs, I'll give it a try soon. Note that I'm not sure we've deployed support for scid_alias on our node yet (even though the master branch has had it for a while).

TonyGiorgio commented 2 years ago

Hmm, all of those are gossip-based messages, you should have something with the OpenChannel/AcceptChannel a ways after that, maybe grep for it?

I remember there typically being lots of LDK logs, but for whatever reason the way I'm running it via Sensei isn't producing anything besides some basic logs.

$ cat logs.txt | grep Open
$ cat logs.txt | grep Accept
$ cat logs.txt | grep Channel
2022-08-23 18:19:10 INFO  [lightning::ln::peer_handler:1096] Received peer Init message from 03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f: DataLossProtect: supported, InitialRoutingSync: not supported, UpfrontShutdownScript: supported, GossipQueries: supported, VariableLengthOnion: required, StaticRemoteKey: supported, PaymentSecret: required, BasicMPP: supported, Wumbo: supported, ShutdownAnySegwit: supported, ChannelType: supported, SCIDPrivacy: not supported, ZeroConf: not supported, unknown flags: supported
2022-08-23 20:17:52 INFO  [lightning::ln::peer_handler:1096] Received peer Init message from 02aace31b8120e29cfc29d991b63fe8614cddd3fbf6148431cc3a68932c363ed29: DataLossProtect: required, InitialRoutingSync: not supported, UpfrontShutdownScript: supported, GossipQueries: supported, VariableLengthOnion: supported, StaticRemoteKey: required, PaymentSecret: required, BasicMPP: supported, Wumbo: supported, ShutdownAnySegwit: not supported, ChannelType: supported, SCIDPrivacy: not supported, ZeroConf: not supported, unknown flags: supported

TheBlueMatt commented 2 years ago

Hmm, kinda annoyed we didn't figure out the failure to fall back to a different channel type thing, but I guess not enough to actually go test and figure out if I can repro :/

t-bast commented 2 years ago

IIRC there was a lack of logs on the LDK side to properly figure out what happened, right? If anyone cares enough, it should be easy to run the same experiment against an older version of eclair and activate more LDK logs to understand what happened.

TheBlueMatt commented 2 years ago

No there should be more than enough logs (at trace level we log all messages as sent and received), it just wasn't reproduced (and I forgot about this issue until yesterday).