bubuntux / nordlynx


DNS leak when nordlynx is disconnected #101

Closed · krbrs closed this issue 1 year ago

krbrs commented 1 year ago

Describe the bug

DNS is leaking when the container starts; I can't say if the same happens when the connection drops. Same behavior as mentioned in https://github.com/bubuntux/nordvpn/issues/361. The docker-compose was put together from different suggestions on the Wiki.

Steps to reproduce

To reproduce, bring the stack up with docker-compose.

docker-compose.yml:

version: "3"
services:
  nordvpn:
    container_name: nordvpn
    image: ghcr.io/bubuntux/nordlynx:latest
    pull_policy: always
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
    environment:
      - QUERY=$QUERY
      - NET_LOCAL=$NET_LOCAL
      - ALLOWED_IPS=$ALLOWED_IPS
      - TZ=$TZ
      - PRIVATE_KEY=$PK
#     - DNS=$DNS # commented out as default has changed back to Nord servers
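      # POST_UP adds a host route to the WireGuard endpoint via the original default
      # gateway, so the endpoint itself is not routed into the tunnel; PRE_DOWN
      # removes that /32 route again before the interface goes down.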
      - "POST_UP=ip -4 route add $$(wg | awk -F'[: ]' '/endpoint/ {print $$5}') via $$(ip route | awk '/default/ {print $$3}')"
      - "PRE_DOWN=ip -4 route del $$(route -n | awk '/255.255.255.255/ {print $$1}') via $$(ip route | awk '/default/ {print $$3}')"
    ports:
      # - <ports you need forwarded for the containers running behind the VPN>

I don't think the variables are important for troubleshooting; just set whichever values fit your setup.
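
For reference, a hypothetical .env next to the compose file could look like the following (every value is a placeholder, not a recommendation):

# .env - placeholder values only, adjust to your own setup
QUERY=<optional server selection filter>
NET_LOCAL=192.168.1.0/24
ALLOWED_IPS=0.0.0.0/0
TZ=Europe/Berlin
PK=<your NordLynx private key>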

Expected behavior

No DNS leak at all, even if the nordvpn connection is down.

krbrs commented 1 year ago

So this issue bugged me, as I don't want to leak my IP over DNS...

I updated the relevant part of my docker-compose.yml for the affected service (not included in the config above), and it seems to at least fix the IP leak on start; I can't say yet whether it also covers a leak on a connection drop:

  *service-name*:
    container_name: *container-name*
    image: *imagename*:latest
    pull_policy: always
    restart: on-failure
    network_mode: service:nordvpn
    volumes:
      - *volume*:*volume*
    environment:
      - PGID=$PGID
      - PUID=$PUID
      - TZ=$TZ
    depends_on:
      nordvpn:
        condition: service_healthy

The last part makes the other container(s) wait on start until the healthcheck of "nordvpn" reports healthy.

Still, I think this should also be addressed in the nordlynx container itself on start.
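
For reference, condition: service_healthy only works if the nordvpn service actually exposes a healthcheck. If the image did not already ship one, a compose-level check could be declared explicitly; a minimal sketch with an assumed probe command (curl against a public IP, bypassing DNS), not the image's own check:

  nordvpn:
    # ...rest of the service as in the first comment...
    healthcheck:
      test: ["CMD", "curl", "-fs", "https://1.1.1.1"]   # assumed probe command
      interval: 30s
      timeout: 10s
      retries: 3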

Smiggel commented 1 year ago

How did you get the service running?

When I add the code below to my containers, I get the error "no such service: nordvpn"

    depends_on:
      nordvpn:
        condition: service_healthy

Tried your stack example, but no luck.

krbrs commented 1 year ago

You just need to make sure the compose version is "3" and that your NordVPN service is actually named "nordvpn", as in the first comment on this issue; the VPN and the other containers are all in the same compose file here. Don't add that dependency to the NordVPN service itself, of course.
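
For illustration, a minimal skeleton of that layout (placeholder names, environment details omitted) would look roughly like this:

version: "3"
services:
  nordvpn:                      # name must match what depends_on/network_mode refer to
    image: ghcr.io/bubuntux/nordlynx:latest
    cap_add:
      - NET_ADMIN
    environment:
      - PRIVATE_KEY=$PK
  *service-name*:               # placeholder for your actual app container
    image: *imagename*:latest
    network_mode: service:nordvpn
    depends_on:
      nordvpn:
        condition: service_healthy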

Smiggel commented 1 year ago

> You just need to make sure the compose version is "3" and that your NordVPN service is actually named "nordvpn", as in the first comment on this issue; the VPN and the other containers are all in the same compose file here. Don't add that dependency to the NordVPN service itself, of course.

Thanks, going to look into it.

volschin commented 1 year ago

@Smiggel condition inside depends_on is not specified in compose v3, only in v2. https://docs.docker.com/compose/compose-file/compose-file-v2/#depends_on

krbrs commented 1 year ago

Sorry, for me it started this way and the containers do wait. Downgrading to v2.4 should do it, then. I am using Portainer, so I don't know if that makes a difference.
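
A minimal sketch of that change, keeping everything else as above:

version: "2.4"
services:
  *service-name*:
    # ...rest of the service as above...
    depends_on:
      nordvpn:
        condition: service_healthy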

Smiggel commented 1 year ago

> @Smiggel condition inside depends_on is not specified in compose v3, only in v2. https://docs.docker.com/compose/compose-file/compose-file-v2/#depends_on

Tried it again, but I still have no service called nordvpn. I guess it's not created during setup.

I also see errors in the log:

s6-rc: info: service 99-ci-service-check successfully started
[2022-10-24T18:50:46+02:00] Finding the best server...
curl: (6) Could not resolve host: api.nordvpn.com
[2022-10-24T18:51:14+02:00] Unable to select a server ¯\_(⊙︿⊙)_/¯
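
If it helps, one way to narrow that down (assuming docker-compose is run from the directory containing the compose file) is to check the service name and the resolver the container actually uses:

docker-compose ps                                  # is there a service literally named "nordvpn"?
docker-compose exec nordvpn cat /etc/resolv.conf   # which nameserver is the container using?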

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.