Hello again,
I believe you are trying to connect to the REST API server port, which is different from the gRPC port.
I think the log you read (gRPC proxy started at 0.0.0.0:8080) is misleading. If you look at the lnd source code, that line is actually logged when the REST server starts:
https://github.com/lightningnetwork/lnd/blob/5b354c659894c9120660530f691005cc6cf373a0/lnd.go#L880
My guess is the log means the REST server is acting as a "proxy" to the gRPC server, but it is not the gRPC server itself.
The log above it,
RPCS: RPC server listening on 127.0.0.1:10009
is, I believe, referring to the gRPC server:
https://github.com/lightningnetwork/lnd/blob/5b354c659894c9120660530f691005cc6cf373a0/lnd.go#L750
which, at the moment, is bound to 127.0.0.1:10009.
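If you want to double-check which protocol each port speaks, here's a rough sketch (assuming curl is available on the LND host and the default ports):

# The REST proxy on 8080 speaks HTTPS; without a macaroon header,
# lnd should reply with a JSON error rather than refusing outright
curl -k https://127.0.0.1:8080/v1/getinfo

# The same request against 10009 will not return that JSON,
# since the gRPC server doesn't serve the REST routes
curl -k https://127.0.0.1:10009/v1/getinfo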
So, I think you need to adjust your docker-compose.yml to use LND_SOCKET: host.docker.internal:10009 instead of LND_SOCKET: host.docker.internal:8080.
I think you probably don't need to change the binding address, but if that doesn't work you may need to add rpclisten=0.0.0.0:10009 to your lnd.conf, which will allow any IP address to connect to your gRPC port. Like I said, I don't think you should need to do this, but I could be wrong.
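To verify which address the gRPC listener is actually bound to after restarting lnd, something like this should work (a sketch, assuming a Linux host with ss installed):

# Shows 127.0.0.1:10009 before the rpclisten change,
# and 0.0.0.0:10009 after it takes effect
sudo ss -tlnp | grep 10009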
Really appreciate all that detail. So I ended up doing both things you mentioned (changing the LND socket port and also adding rpclisten=0.0.0.0:10009 to lnd.conf) -- but no dice. I'm still getting the same issue with the visualizer failing to load.
I'm getting frequent errors in the web console that look like this, which might be a clue:
2023/03/08 01:14:54 [error] 7#7: *67 connect() failed (113: No route to host) while connecting to upstream, client: 192.168.50.129, server: localhost, request: "GET /api/ HTTP/1.1", upstream: "http://172.19.0.3:5647/api/", host: "192.168.50.11:5646"
In the above: 192.168.50.129 is the IP of the computer I'm trying to access LN-Visualizer from; 172.19.0.3 is the IP of the ln-visualizer-web container; 192.168.50.11 is the IP of the host machine.
It seems like there's an issue trying to resolve http://lnvisapi:5647 as defined in docker-compose.yml ... does that make any sense?
Edited to add: I just realized that the API container isn't getting an IP address. Is that expected?
It's been a while since I've tested this, so I double checked to make sure it worked on my end.
This config does work for me on my machine:
version: "3.7"
services:
  lnvisweb:
    image: maxkotlan/ln-visualizer-web:v0.0.27
    init: true
    restart: on-failure
    stop_grace_period: 1m
    ports:
      - '5646:80'
    environment:
      LN_VISUALIZER_API_URL: "http://lnvisapi:5647"
  lnvisapi:
    image: maxkotlan/ln-visualizer-api:v0.0.27
    init: true
    restart: on-failure
    stop_grace_period: 1m
    user: 1000:1000
    volumes:
      - "S:\\LNVISTEST\\lnd:/lnd:ro"
    environment:
      LND_CERT_FILE: "/lnd/tls.cert"
      LND_MACAROON_FILE: "/lnd/data/chain/bitcoin/mainnet/readonly.macaroon"
      LND_SOCKET: "umbrel.local:10009"
However, maybe my Docker network setup is different from yours.
When I inspected my api container using docker inspect, I saw:
"Networks": {
"mocklnd_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"mocklnd-lnvisapi-1",
"lnvisapi",
"d2704aaa0b7a"
],
"NetworkID": "34a91066944c118d7e561cf0b6d623ca3a5bf16a83f6fb722deb3d30a1e6bf31",
"EndpointID": "cbd303186605607a94ba875075bc1f999f3fe1225f49392b6b381786b6630fac",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02",
"DriverOpts": null
}
}
It looks like my docker-compose is assigning an IP address to the api container, whereas you pointed out yours is not.
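The exact inspect invocation isn't shown above, but a sketch that prints just this Networks block (assuming the container name mocklnd-lnvisapi-1 from the aliases above):

# Dump only the Networks section of the container's settings
docker inspect -f '{{json .NetworkSettings.Networks}}' mocklnd-lnvisapi-1

# Or look it up by compose service name instead of container name
docker compose ps -q lnvisapi | xargs docker inspect -f '{{json .NetworkSettings.Networks}}'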
The error you're getting is interesting.
2023/03/08 01:14:54 [error] 7#7: *67 connect() failed (113: No route to host) while connecting to upstream, client: 192.168.50.129, server: localhost, request: "GET /api/ HTTP/1.1", upstream: "http://172.19.0.3:5647/api/", host: "192.168.50.11:5646"
It's failing because the web container cannot connect to the api container.
I changed the port in my docker config to try to replicate this error, and I noticed a few differences in my error.
mocklnd-lnvisweb-1 | 2023/03/08 23:02:15 [error] 9#9: *33 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: localhost, request: "GET /api/ HTTP/1.1", upstream: "http://172.18.0.2:5648/api/", host: "localhost:5646"
The most interesting difference is that my upstream URL looks like http://172.18.0.2:5648/api/, which is correctly pointing to the api container, not the web container.
But for some reason, in your setup the upstream URL is http://172.19.0.3:5647/api/, which, based on your screenshot, is the web container's IP address. That means the web container is trying to pull the data from itself, not from the api container.
I am unsure why that is happening, because in the docker-compose you listed you have the address as lnvisapi (LN_VISUALIZER_API_URL: 'http://lnvisapi:5647'), so Docker should automatically handle the networking, assign the api container an IP address, and map any http://lnvisapi:5647 requests to the IP address of the api container.
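One way to test that name resolution in isolation (a sketch, assuming the compose network is named mocklnd_default as in the inspect output above) is to resolve the service name from a throwaway container on the same network:

# If Docker's embedded DNS is healthy, this resolves lnvisapi
# to the api container's IP on the compose network
docker run --rm --network mocklnd_default busybox nslookup lnvisapi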
Currently my only guess is that it could be because you have this extra_hosts option for your adapter container in your docker-compose.yml:
extra_hosts:
  - 'host.docker.internal:host-gateway'
I would try removing that and see if you still get the issue, or whether that at least assigns an IP address to the container.
I really appreciate the help, but it seems like I won't be able to get this running. I removed the extra_hosts and that surprisingly didn't change anything.
As a reminder for context, I'm trying to deploy LN-Visualizer in a container but have it connect to LND on my host machine (not in Docker), which is where a lot of complexity is coming in for my smooth brain.
For whatever reason, my API container just isn't getting an IP:
"Networks": {
"ln-visualizer_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"ln-visualizer-lnvisapi-1",
"lnvisapi",
"2bb1fc1cc611"
],
"NetworkID": "a30f9c6d490c25bd756c1f33619590df1168760c392df7c24fa7466837626ca5",
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": null
}
That's probably because the API container never finishes starting up due to these errors in that container's console that repeat periodically:
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Initializing Websocket Server on ws://0.0.0.0:5647
Starting Graph Sync
Pod status server on port 3000
[
  503,
  'GetNetworkGraphError',
  {
    err: Error: 14 UNAVAILABLE: No connection established
        at Object.callErrorFromStatus (/usr/local/app/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
        at Object.onReceiveStatus (/usr/local/app/node_modules/@grpc/grpc-js/build/src/client.js:180:52)
        at Object.onReceiveStatus (/usr/local/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:336:141)
        at Object.onReceiveStatus (/usr/local/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:299:181)
        at /usr/local/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:160:78
        at processTicksAndRejections (internal/process/task_queues.js:77:11) {
      code: 14,
      details: 'No connection established',
      metadata: [Metadata]
    }
  }
]
In the web container, I'm getting a lot of these as before:
2023/03/09 00:20:41 [error] 9#9: *75 connect() failed (113: No route to host) while connecting to upstream, client: 192.168.50.129, server: localhost, request: "GET /api/ HTTP/1.1", upstream: "http://172.24.0.3:5647/api/", host: "192.168.50.11:5646"
So all signs point to the API container not being able to connect to the back end, and therefore not being able to start, causing the web container to fail to connect.
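To separate the Docker-networking question from the LND question, it might help to test raw TCP reachability of the host's gRPC port from inside a container (a sketch, assuming busybox's nc and the host IP from your logs):

# Exit status 0 means the TCP connection to lnd succeeded;
# "no route to host" here points at firewall/routing, not lnd itself
docker run --rm busybox nc -w 3 192.168.50.11 10009 </dev/null && echo reachable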
Here's my current lnd.conf, showing that I'm listening for RPC calls on 0.0.0.0:10009:
[Application Options]
alias=CIA Surveillance Van
debuglevel=info
maxpendingchannels=5
listen=0.0.0.0:9735
# Password: automatically unlock wallet with the password in this file
# -- comment out to manually unlock wallet, and see RaspiBolt guide for more secure options
wallet-unlock-password-file=/data/lnd/password.txt
wallet-unlock-allow-create=true
# Automatically regenerate certificate when near expiration
tlsautorefresh=true
# Do not include the interface IPs or the system hostname in TLS certificate.
tlsdisableautofill=true
# Enabling hybrid mode -- make sure to also look at [tor] settings below
externalip=45.33.72.139:9735
nat=false
tlsextraip=10.0.0.3
#listening for REST calls
restlisten=0.0.0.0:8080
rpclisten=0.0.0.0:10009
# Channel settings
bitcoin.basefee=1000
bitcoin.feerate=1
minchansize=100000
accept-keysend=true
accept-amp=true
protocol.wumbo-channels=true
protocol.no-anchors=false
coop-close-target-confs=24
# Watchtower
wtclient.active=true
# Performance
gc-canceled-invoices-on-startup=true
gc-canceled-invoices-on-the-fly=true
ignore-historical-gossip-filters=1
stagger-initial-reconnect=true
routing.strictgraphpruning=true
# Database
[bolt]
db.bolt.auto-compact=true
db.bolt.auto-compact-min-age=168h
[Bitcoin]
bitcoin.active=1
bitcoin.mainnet=1
bitcoin.node=bitcoind
[tor]
tor.active=true
tor.v3=true
# Change tor.streamisolation to true if deactivating hybrid mode
tor.streamisolation=false
# Activate hybrid mode
tor.skip-proxy-for-clearnet-targets=true
And here's my docker-compose.yml. Note that since I'm running LND on my host machine, not in another Docker container, I'm unsure how to resolve port 10009 on my host machine without the use of the extra_hosts: stanza to define host.docker.internal:host-gateway. I've tried defining LND_SOCKET as the IP of the Docker network's gateway, the IP of the host machine, as 127.0.0.1:9000, 0.0.0.0:100009, etc. For whatever reason I just can't connect to the API container (and I have confirmed that all the ports involved -- 10009, 5646, 5647, etc. -- are open on my firewall).
version: "3.7"
services:
  lnvisweb:
    image: maxkotlan/ln-visualizer-web:v0.0.27
    init: true
    restart: on-failure
    stop_grace_period: 1m
    ports:
      - '5646:80'
    environment:
      LN_VISUALIZER_API_URL: "http://lnvisapi:5647"
  lnvisapi:
    image: maxkotlan/ln-visualizer-api:v0.0.27
    init: true
    restart: on-failure
    stop_grace_period: 1m
    user: 1000:1000
    volumes:
      - "/home/lnd/.lnd:/lnd:ro"
    environment:
      LND_CERT: "/lnd/tls.cert"
      LND_MACAROON: "/lnd/data/chain/bitcoin/mainnet/readonly.macaroon"
      LND_SOCKET: "192.168.50.11:10009"
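If you want the gateway address of the compose network (the IP the host answers on from inside that network), here's a sketch, assuming the network name from the inspect output above:

# Prints something like 172.24.0.1; containers on this network can
# reach host services via this address if the firewall permits it
docker network inspect ln-visualizer_default -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}'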
One final detail is that I've got my host machine talking to the internet via a WireGuard tunnel out to a VPS. I don't think this is the cause of my issue, because this happens even when my tunnel is down... but I'm posting wg0.conf here just in case:
[Interface]
PrivateKey = [privatekey]
Address = 10.0.0.3/24
PostUp = ip rule add from 192.168.50.11 table main
PostUp = ip route add default via 192.168.50.1 table main
PreDown = ip rule delete from 192.168.50.11 table main
PreDown = ip route delete default via 192.168.50.1 table main
DNS = 173.255.225.5 50.116.53.5
[Peer]
PublicKey = [publickey]
AllowedIPs = 0.0.0.0/0
Endpoint = 45.33.72.139:51820
PersistentKeepalive = 25
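One way to rule the tunnel in or out (a sketch, assuming iproute2 on the host) is to ask the kernel which route it would pick for the client and container addresses seen in the nginx errors:

# If either of these prints "dev wg0", the AllowedIPs = 0.0.0.0/0
# catch-all route is capturing that traffic
ip route get 192.168.50.129
ip route get 172.24.0.3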
Thanks again for all your help! I'll keep toying around with it but unless I'm missing something obvious I feel like this might be beyond my ability to debug!
One last thought @MaxKotlan ... I can try running the container in host mode like below. That should get around whatever silly connection/networking issue I'm seeing. The problem is that port 3000, which the pod server uses in the api container, is already in use by RTL on my host.
Is there an easy way to tell the api container to use a different port for the pod server? I can't use the container's console to modify 'api/src/services/config.service.ts' because the container is constantly restarting itself. I'm hoping there's a way to set an environment variable instead. Any chance?
version: "3.7"
services:
  lnvisweb:
    image: maxkotlan/ln-visualizer-web:v0.0.27
    init: true
    restart: on-failure
    stop_grace_period: 1m
    ports:
      - '5646:80'
    environment:
      LN_VISUALIZER_API_URL: "http://lnvisapi:5647"
  lnvisapi:
    image: maxkotlan/ln-visualizer-api:v0.0.27
    init: true
    network_mode: host
    restart: unless-stopped
    stop_grace_period: 1m
    user: 1005:1005
    volumes:
      - "/home/lnd/.lnd:/lnd:ro"
    environment:
      LND_CERT: "/lnd/tls.cert"
      LND_MACAROON: "/lnd/data/chain/bitcoin/mainnet/readonly.macaroon"
      LND_SOCKET: "127.0.0.1:10009"
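For the port 3000 conflict specifically, it's easy to confirm which process is holding the port before trying host mode (a sketch, assuming ss on the host):

# Expect to see RTL's node process bound to 3000, which is what
# would collide with the api container's pod status server
sudo ss -tlnp | grep ':3000'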
The pod status server was something I was testing with a while ago for making sure the container was up and running. I originally intended to make it configurable but currently it is not.
I pushed an update that disables it, since it's not being used and was more of an experimental feature.
Use v0.0.28 for both the web and the api, and it should not bind to port 3000.
Another thing I think you could try is adding
extra_hosts:
  - 'host.docker.internal:host-gateway'
back into the api and making the connection URL LND_SOCKET: "host.docker.internal:10009" instead of LND_SOCKET: "127.0.0.1:10009".
Dude! It works!! Thanks again for all the help here. Below are some notes for posterity's sake. Super happy to have this up and running!
My updated docker-compose.yml is at the bottom of this comment for reference.
1. Added rpclisten=0.0.0.0:10009 to my lnd.conf.
2. Opened the ports in ufw used by this app -- 5647, the web port, as well as 10009 for the RPC calls.
3. Ran the API container in host network mode.
4. Added host.docker.internal:host-gateway to the web container (see docker-compose.yml below).
If I did not do 3 and 4, then I would get the following in the logs for the API container:
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Initializing Websocket Server on ws://0.0.0.0:5647
Starting Graph Sync
[
  503,
  'GetNetworkGraphError',
  {
    err: Error: 14 UNAVAILABLE: No connection established
        at Object.callErrorFromStatus (/usr/local/app/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
        at Object.onReceiveStatus (/usr/local/app/node_modules/@grpc/grpc-js/build/src/client.js:180:52)
        at Object.onReceiveStatus (/usr/local/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:336:141)
        at Object.onReceiveStatus (/usr/local/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:299:181)
        at /usr/local/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:160:78
        at processTicksAndRejections (internal/process/task_queues.js:77:11) {
      code: 14,
      details: 'No connection established',
      metadata: [Metadata]
    }
  }
]
Ultimately, because I seem to have to run the API container in host mode, the fix you just pushed allowed me to run a more or less vanilla deployment of LNVisualizer without needing to do funny business like mounting and passing a config file that had the pod status server listen on a different port or something. So that was much appreciated!
Updated, working docker-compose.yml:
version: "3.7"
services:
  lnvisweb:
    image: maxkotlan/ln-visualizer-web:v0.0.28
    init: true
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    restart: on-failure
    stop_grace_period: 1m
    ports:
      - '5646:80'
    environment:
      LN_VISUALIZER_API_URL: "http://host.docker.internal:5647"
  lnvisapi:
    image: maxkotlan/ln-visualizer-api:v0.0.28
    init: true
    network_mode: host
    restart: unless-stopped
    stop_grace_period: 1m
    user: 1000:1000
    environment:
      LND_CERT: "[base64 encoded cert as per readme]"
      LND_MACAROON: "[base64 encoded access.macaroon as per readme]"
      LND_SOCKET: "0.0.0.0:10009"
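The bracketed values above are placeholders for the base64 strings the readme asks for; producing them looks something like this (a sketch, assuming GNU base64 and the paths from my earlier compose file -- use whichever macaroon the readme calls for):

# -w0 disables line wrapping so each value fits in a single env var
base64 -w0 /home/lnd/.lnd/tls.cert
base64 -w0 /home/lnd/.lnd/data/chain/bitcoin/mainnet/readonly.macaroon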
That's awesome! Glad it is working.
Hi there,
I'm attempting to deploy LN-Visualizer in Docker but have it communicate with LND running on the host machine. There must be something wrong with my configuration but I can't wrap my head around what it is.
I get the following output repeatedly in the console for the "api" service after I start the container:
It seems like I can't establish communication with the LND back end. However, LND logs indicate that it should be listening for gRPC calls on 0.0.0.0:8080:
I can't see anything being logged in ufw that would indicate port 8080 is being blocked; I added a rule to allow 8080 on this machine.
Doesn't look like there are any port conflicts either; LND is the only service listening on 8080:
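Concretely, checks along these lines (a sketch, assuming ufw and ss on the host):

# Confirm the firewall rule for 8080 exists
sudo ufw status numbered | grep 8080
# Confirm lnd is the only listener on 8080
sudo ss -tlnp | grep ':8080'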
Here's my lnd.conf (a bastardized version of the vanilla Raspibolt config):
And my docker-compose.yml:
Any pointers would be very much appreciated!