lightninglabs / pool

Lightning Pool: a non-custodial batched uniform clearing-price auction for Lightning Channel Leases (LCL). An LCL packages up inbound channel liquidity (ability to receive funds) as a fixed income asset with a maturity date expressed in blocks.
MIT License

Trying to FUND ACCOUNT and receive the error "UNABLE TO CREATE THE ACCOUNT no_route" #464

Closed machadommm closed 1 year ago

machadommm commented 1 year ago

I am trying to open an account and fund it, but I receive the error described in the subject.

Actual behavior

(screenshot of the error)

System information: Umbrel 0.5.4, with all apps updated as of Aug 07, 2023.

Liongrass commented 1 year ago

Do you have a channel open with some funds in it? To open a Pool account you need to be able to buy the L402 token, which requires a Lightning payment.

machadommm commented 1 year ago

Yes, I have several open channels. I'm even T1 as you can see in the picture.

guggero commented 1 year ago

Weird, there should be enough liquidity in public channels to be able to pay the invoice. Can you please check your logs? And what do you get if you run the following?

lncli queryroutes --dest 0259536c60a5efebce8d153bf32afd743cfe92c3f8f22f53a9e4586c516c9a538f --amt 1000 --fee_limit 50

machadommm commented 1 year ago

Hi Guggero, the result:

lndcliente queryroutes --dest 0259536c60a5efebce8d153bf32afd743cfe92c3f8f22f53a9e4586c516c9a538f --amt 1000 --fee_limit 50
[lncli] rpc error: code = Unknown desc = unable to find a path to destination

guggero commented 1 year ago

Okay, that's weird. I assume nothing changes in the result if you increase --fee_limit to 1000? Sounds like an issue with your local graph. Does synced_to_graph say true in lncli getinfo?

machadommm commented 1 year ago

Yes, I increased --fee_limit to 1000 and got the same error.

The getinfo output shows this:

{
    "version": "0.16.4-beta commit=v0.16.4-beta",
    "commit_hash": "6bd30047c1b1188029e8af6ee8a135cf86e7dc4b",
    "identity_pubkey": "02d6f4112fdb133ed2c26a5617d6c996526d3eed45dc0711da4dd71e56d4641d8c",
    "alias": "T4m010ORG🗿",
    "color": "#3399ff",
    "num_pending_channels": 0,
    "num_active_channels": 20,
    "num_inactive_channels": 0,
    "num_peers": 22,
    "block_height": 802247,
    "block_hash": "00000000000000000000bed993a5324e69d1b01a27ef20953193ff78c1eb861a",
    "best_header_timestamp": "1691512243",
    "synced_to_chain": true,
    "synced_to_graph": true,
    "testnet": false,
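For anyone scripting this check, here is a minimal Python sketch (a hypothetical helper, not part of lnd) that parses the JSON printed by lncli getinfo and reports the two sync flags the discussion hinges on:

```python
import json

def check_sync(getinfo_json: str) -> dict:
    """Parse `lncli getinfo` output and return the two sync flags."""
    info = json.loads(getinfo_json)
    return {
        "synced_to_chain": bool(info.get("synced_to_chain", False)),
        "synced_to_graph": bool(info.get("synced_to_graph", False)),
    }

# Trimmed-down sample with only the fields relevant to this thread.
sample = '{"synced_to_chain": true, "synced_to_graph": true, "num_active_channels": 20}'
print(check_sync(sample))  # both flags true, matching the output above
```

Note that, as this thread shows, synced_to_graph being true does not guarantee the graph is complete; it only means the node believes it has caught up with its peers' gossip.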

guggero commented 1 year ago

That looks good. Very weird. What do you get when you run lncli getnodeinfo --pub_key 0259536c60a5efebce8d153bf32afd743cfe92c3f8f22f53a9e4586c516c9a538f --include_channels?

machadommm commented 1 year ago

Hi, this one gave this response:

machado@jginyueadm:~$ lndcliente getnodeinfo --pub_key 0259536c60a5efebce8d153bf32afd743cfe92c3f8f22f53a9e4586c516c9a538f --include_channels
[lncli] rpc error: code = NotFound desc = unable to find node

I tried another one and received info:

machado@jginyueadm:~$ lndcliente getnodeinfo --pub_key 0229aafd5cb0e9400a37e8cb9245423011aedfa12e9d9339cd92c12c606f39678a --include_channels
{
    "node": {
        "last_update": 1691496153,
        "pub_key": "0229aafd5cb0e9400a37e8cb9245423011aedfa12e9d9339cd92c12c606f39678a",
        "alias": "Noderunner ⚡️⛏️13%☣️",
        "addresses": [
            {
                "network": "tcp",
                "addr": "famienubbheqv3fb6233o3ixiabgafmlkp3yrrtldi32ae5env3fknid.onion:9735"
            }
        ],
        "color": "#e3973e",

guggero commented 1 year ago

I'm not sure what's going on, but it looks like your node is missing a good chunk of the public network graph. Do you have any custom configuration options set? What do you get for lncli getnetworkinfo?

machadommm commented 1 year ago

Hi Guggero, my node really is not seeing the whole network. I have another node in a different location, and much more information appears there. Look at the tests I did on each node:

1 - The one with the problem:

machado@jginyueadm:$ lndcliente getnetworkinfo
{
    "graph_diameter": 15,
    "avg_out_degree": 3.569979381443299,
    "max_out_degree": 76,
    "num_nodes": 12125,
    "num_channels": 21643,
    "total_network_capacity": "86305666923",
    "avg_channel_size": 3987694.2624867163,
    "min_channel_size": "1050",
    "max_channel_size": "1000000000",
    "median_channel_size_sat": "1000000",
    "num_zombie_chans": "182685"
}

2 - The one that is on another network:

machado@t4b4node:$ lndcliente getnetworkinfo
{
    "graph_diameter": 13,
    "avg_out_degree": 4.831332310784366,
    "max_out_degree": 113,
    "num_nodes": 18907,
    "num_channels": 45673,
    "total_network_capacity": "151461970126",
    "avg_channel_size": 3316225.5627175793,
    "min_channel_size": "1050",
    "max_channel_size": "1000000000",
    "median_channel_size_sat": "1000000",
    "num_zombie_chans": "142202"
}

How can we fix this? The problematic node has two different Internet providers. I already switched its uplink, but the result is the same. See the connectivity tests: both nodes show the same result (I used MTR), so it doesn't seem to be an ISP issue.

1 - Problematic:

My traceroute [v0.95]
jginyueadm (192.168.1.88) -> 44.239.22.138                    2023-08-09T11:23:59-0300

     Host                          Loss%  Snt   Last    Avg   Best   Wrst  StDev
  1. rbjg.m3.srv.br                 0.0%    7    0.2    0.2    0.2    0.2    0.0
  2. (waiting for reply)
  3. 201-1-224-5.dsl.telesp.net     0.0%    7    3.2    2.7    1.5    3.6    0.8
  4. 152-255-194-22.user.vivoza    14.3%    7    4.6    3.0    2.3    4.6    0.9
  5. (waiting for reply)
  6. 213.140.39.14                 14.3%    7    2.8    3.5    2.6    5.2    1.0
  7. 5.53.6.111                     0.0%    6  108.4  107.5  106.3  108.4    0.9
  8. 94.142.117.132                 0.0%    6  133.3  137.6  131.9  162.5   12.2
  9. 84.16.15.76                    0.0%    6  153.2  152.4  150.9  155.1    1.6
 10. 84.16.11.49                    0.0%    6  161.5  171.5  161.5  188.8    9.8
 11. (waiting for reply)
 12. (waiting for reply)
 13. (waiting for reply)
 14. (waiting for reply)
 15. (waiting for reply)
 16. (waiting for reply)
 17. (waiting for reply)
 18. (waiting for reply)
 19. 108.166.236.6                  0.0%    6  185.6  187.2  184.7  192.4    3.1
 20. 108.166.236.22                 0.0%    6  188.2  190.3  187.4  201.4    5.5
 21. (waiting for reply)
 22. 108.166.236.54                 0.0%    6  186.1  186.4  185.2  188.6    1.2
 23. 108.166.236.41                 0.0%    6  186.7  186.1  184.9  187.4    0.9
 24. (waiting for reply)
2 - Good node:

My traceroute [v0.95]
t4b4node (10.0.0.20) -> 44.239.22.138                         2023-08-09T11:24:10-0300

     Host                          Loss%  Snt   Last    Avg   Best   Wrst  StDev
  1. _gateway                       0.0%    5    0.1    0.1    0.1    0.2    0.0
  2. 45.58.126.1                    0.0%    5    6.6    1.8    0.3    6.6    2.7
  3. ae0-200.cr8-mia1.ip4.gtt.n     0.0%    5    1.2    2.6    0.8    8.4    3.2
  4. ae6.cr8-lax2.ip4.gtt.net       0.0%    5   55.9   55.2   54.9   55.9    0.4
  5. ip4.gtt.net                    0.0%    5   54.5   56.6   54.5   63.4    3.8
  6. (waiting for reply)
  7. (waiting for reply)
  8. (waiting for reply)
  9. (waiting for reply)
 10. (waiting for reply)
 11. (waiting for reply)
 12. (waiting for reply)
 13. (waiting for reply)
 14. 108.166.236.7                  0.0%    4   76.3   76.2   76.2   76.3    0.1
 15. 108.166.236.21                 0.0%    4   77.2   77.2   77.2   77.2    0.0
 16. (waiting for reply)
 17. 108.166.236.53                 0.0%    4   75.8   75.8   75.7   75.8    0.0
 18. (waiting for reply)

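To put numbers on the gap between the two nodes' views of the graph, here is a quick sketch (an editorial illustration; the figures are copied from the two getnetworkinfo outputs above):

```python
# Node/channel counts taken from the two getnetworkinfo outputs above.
bad_nodes, bad_channels = 12125, 21643      # problematic node
good_nodes, good_channels = 18907, 45673    # healthy node

node_coverage = bad_nodes / good_nodes
channel_coverage = bad_channels / good_channels

print(f"node coverage:    {node_coverage:.0%}")     # ~64%
print(f"channel coverage: {channel_coverage:.0%}")  # ~47%
```

So the problematic node sees roughly two thirds of the nodes but less than half of the channels, which points to missing gossip rather than an ISP problem.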
guggero commented 1 year ago

I'm not sure what the issue here could be. The latency or speed of the connection shouldn't matter. You just need enough peer connections to be able to get all the gossip.

Just to confirm a few things:

  • The "bad" node is the one on version lnd v0.16.4-beta with 20 channels? Are those channels with more or less random peers, or all nodes you operate?
  • On the "good" node, how many channels/peers do you have?
  • Any settings in lnd.conf that are non-default (e.g. that you set yourself and aren't provisioned by Umbrel)?

It might make sense to open an issue in lnd instead. I'm not sure why the graphs would be so different on two nodes that are well connected to the network.

machadommm commented 1 year ago

> The "bad" node is the one on version lnd v0.16.4-beta with 20 channels? Are those channels with more or less random peers, or all nodes you operate?

Correct, lnd v0.16.4-beta, and the channels are with random peers.

> On the "good" node, how many channels/peers do you have?

Active channels: 27 / 27, peers: 30.

> Any settings in lnd.conf that are non-default (e.g. that you set yourself and aren't provisioned by Umbrel)?

Both nodes use the same lnd.conf.

> It might make sense to open an issue in lnd instead.

If you want, I can open a ticket with lnd.

Another thing: I ran the command below again and started receiving info about that node. That's weird...

machado@jginyueadm:~$ lndcliente getnodeinfo --pub_key 0229aafd5cb0e9400a37e8cb9245423011aedfa12e9d9339cd92c12c606f39678a --include_channels
{
    "node": {
        "last_update": 1691582555,
        "pub_key": "0229aafd5cb0e9400a37e8cb9245423011aedfa12e9d9339cd92c12c606f39678a",
        "alias": "Noderunner ⚡️⛏️13%☣️",
        "addresses": [
            {
                "network": "tcp",
                "addr": "famienubbheqv3fb6233o3ixiabgafmlkp3yrrtldi32ae5env3fknid.onion:9735"
            }
        ],
        "color": "#e3973e",
        "features": {
            "0": {
                "name": "data-loss-protect",
                "is_required": true,
                "is_known": true

guggero commented 1 year ago

Hmm, so maybe your node is still catching up with the graph? If you see channels now, can you open a Pool account (i.e. pay for the L402 token)?

machadommm commented 1 year ago

Hi Guggero, I tried again and it still doesn't work. I opened two more channels to see if that improved my reach on the network. Which specific node does this transaction need to reach? Maybe opening a direct connection between the nodes would work?

guggero commented 1 year ago

Very weird. Sure, a direct channel should work, though that probably won't solve similar problems in the future. The node you need to reach is the one from this comment: https://github.com/lightninglabs/pool/issues/464#issuecomment-1669986224

machadommm commented 1 year ago

Hi Guggero, I did the following:

1 - I opened a channel with the FEWSATS node that has an open channel to the node that receives the fees:

Channel open with my node and fewsats: https://amboss.space/edge/883041977562365953

Channel open with fewsats and POOL fee node: https://amboss.space/edge/854849400001724416

2 - I tested routing the fee amount to FEWSATS, which worked:

machado@jginyueadm:~$ lndcliente queryroutes --dest 03676f530adb4df9f7f4981a8fb216571f2ce36c34cbefe77815c33d5aec4f2638 --amt 1000 --fee_limit 50
{
    "routes": [
        {
            "total_time_lock": 803578,
            "total_fees": "0",
            "total_amt": "1000",
            "hops": [
                {
                    "chan_id": "883041977562365953",
                    "chan_capacity": "1000000",
                    "amt_to_forward": "1000",
                    "fee": "0",
                    "expiry": 803578,
                    "amt_to_forward_msat": "1000000",
                    "fee_msat": "0",
                    "pub_key": "03676f530adb4df9f7f4981a8fb216571f2ce36c34cbefe77815c33d5aec4f2638",
                    "tlv_payload": true,
                    "mpp_record": null,
                    "amp_record": null,
                    "custom_records": {},
                    "metadata": null
                }
            ],
            "total_fees_msat": "0",
            "total_amt_msat": "1000000"
        }
    ],
    "success_prob": 1
}

3 - However, when I query a route to the Pool node that receives the fees, the error still persists.

machado@jginyueadm:~$ lndcliente queryroutes --dest 0259536c60a5efebce8d153bf32afd743cfe92c3f8f22f53a9e4586c516c9a538f --amt 1000 --fee_limit 50
[lncli] rpc error: code = Unknown desc = unable to find a path to destination

What can we do to fix the error?
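As a sanity check on the working route above, here is a small sketch (a hypothetical helper, not part of lnd) that parses a queryroutes response of the shape shown and summarizes the first route, confirming the single direct hop with zero fee:

```python
import json

def summarize_route(resp_json: str):
    """Return (num_hops, total_fees_sat, final_pub_key) of the first route."""
    resp = json.loads(resp_json)
    route = resp["routes"][0]
    hops = route["hops"]
    return len(hops), int(route["total_fees"]), hops[-1]["pub_key"]

# Trimmed version of the successful queryroutes output quoted above.
sample = json.dumps({
    "routes": [{
        "total_fees": "0",
        "total_amt": "1000",
        "hops": [{
            "chan_id": "883041977562365953",
            "amt_to_forward": "1000",
            "fee": "0",
            "pub_key": "03676f530adb4df9f7f4981a8fb216571f2ce36c34cbefe77815c33d5aec4f2638",
        }],
    }],
})
print(summarize_route(sample))  # one hop, zero fee, destination pub_key
```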

guggero commented 1 year ago

I can only think of two things. First, try to clear mission control (which keeps track of payment successes/failures): lncli resetmc. Second, increase the log level to "debug" and see what is logged when you run the queryroutes command, to get some indication of what's missing.

machadommm commented 1 year ago

Hi Guggero, the lncli resetmc command didn't help. See below the debug log for queryroutes:

2023-08-18 00:13:41.392 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.450 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.477 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.497 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.518 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.535 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.609 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.641 [DBG] RPCS: [/lnrpc.Lightning/ListPayments] requested
2023-08-18 00:13:41.677 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.684 [DBG] RPCS: [/lnrpc.Lightning/QueryRoutes] requested
2023-08-18 00:13:41.685 [DBG] CRTR: Searching for path to 0259536c60a5efebce8d153bf32afd743cfe92c3f8f22f53a9e4586c516c9a538f, sending 1000000 mSAT
2023-08-18 00:13:41.686 [WRN] CRTR: ShortChannelID=791679:3848:0: link not found: channel link not found
2023-08-18 00:13:41.686 [DBG] LNWL: ChannelPoint(695d6166887c49de17fabd0340d87d8b6bec9bbf62699e7dc6b0863b02a69866:0): May add outgoing htlc rejected: commitment transaction dips peer below chan reserve: negative local balance
2023-08-18 00:13:41.686 [WRN] CRTR: ShortChannelID=800555:2076:0: cannot add outgoing htlc: commitment transaction dips peer below chan reserve: negative local balance
2023-08-18 00:13:41.686 [DBG] CRTR: Pathfinding absolute attempt cost: 10.006 sats
2023-08-18 00:13:41.686 [DBG] CRTR: Pathfinding perf metrics: nodes=1, edges=0, time=460.503µs
2023-08-18 00:13:41.686 [ERR] RPCS: [/lnrpc.Lightning/QueryRoutes]: unable to find a path to destination
2023-08-18 00:13:41.727 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.750 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.768 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested
2023-08-18 00:13:41.793 [DBG] RPCS: [/lnrpc.Lightning/GetNodeInfo] requested

Thanks!
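The telling line in the log above is the pathfinding metric. Here is a tiny sketch (an editorial illustration, not an lnd tool) that pulls nodes/edges out of such a line; nodes=1, edges=0 means the router effectively sees an empty graph from its own position:

```python
import re

def perf_metrics(log_line: str):
    """Extract (nodes, edges) from a CRTR 'Pathfinding perf metrics' line."""
    m = re.search(r"Pathfinding perf metrics: nodes=(\d+), edges=(\d+)", log_line)
    return (int(m.group(1)), int(m.group(2))) if m else None

line = ("2023-08-18 00:13:41.686 [DBG] CRTR: Pathfinding perf metrics: "
        "nodes=1, edges=0, time=460.503µs")
print(perf_metrics(line))  # (1, 0): only one node considered, no usable edges
```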

guggero commented 1 year ago

Thanks. This looks very weird, as if the router only considers a single node: 2023-08-18 00:13:41.686 [DBG] CRTR: Pathfinding perf metrics: nodes=1, edges=0, time=460.503µs. I think it's best to open an issue in lnd and reference this one.

Closing it here as it's not a Pool issue.