Closed DJFarr closed 1 day ago
Same issue here with Unraid when I updated to binhex-delugevpn 2.1.1-6-04. I even tried loading a different torrent program, qbittorrentvpn, and was getting the same error. I couldn't work out what was wrong.
I downgraded to binhex-delugevpn 2.1.1-6-03 by entering binhex/arch-delugevpn:2.1.1-6-03 in the Repository: field when editing the Docker container, and now it's working.
Same on an Arch with plain dockerd. This started at about 04:00 CET (02:00 UTC) on 2024-07-05. Thanks @joey4ers for the exact label to downgrade to.
also getting this
Please do a 'force update', then repost your supervisord.log with DEBUG set to 'true'.
modprobe: FATAL: Module ip6_tables not found in directory /lib/modules/5.15.0-113-generic
ip6tables v1.8.10 (legacy): can't initialize ip6tables table `filter': Will be implemented real soon. I promise ;)
Perhaps ip6tables or your kernel needs to be upgraded.
2024-07-05 07:53:27.513876 [warn] ip6tables default policies not available, skipping ip6tables drops
Error: error sending query: Error creating socket
2024-07-05 07:53:27.546997 [debug] Having issues resolving name 'xxx-xxxxx.privacy.network', sleeping before retry...
When I did the update to the latest version it deleted my ovpn file, so I added it again - please see further down the log.
@joey4ers I'm pretty confident you are not running the latest image, as I see none of the new debug information in your log file. Ensure you have the tag set to latest, then make sure 'advanced view' is toggled in the Unraid web UI and click 'force update'; this will force a pull and re-creation of the container. Once done, post your log file here.
The supervisord.log appears to show a name resolution failure for the VPN endpoint, or just a plain DNS error.
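If it helps narrow things down, one way to inspect this from the host is to check what resolver the container was handed and attempt a lookup from inside it (the container name here is an assumption - substitute your own):

```shell
# Hypothetical container name - substitute your own.
# Show which nameserver the container was handed:
docker exec binhex-delugevpn cat /etc/resolv.conf

# Try resolving the VPN endpoint from inside the container
# (nslookup is a stand-in; use whatever lookup tool the image ships):
docker exec binhex-delugevpn nslookup nl-amsterdam.privacy.network
```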
Latest:
2024-07-05 18:15:15.886555 [info] NAME_SERVERS defined as '84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1'
2024-07-05 18:15:15.916758 [debug] iptables default policies available, setting policy to drop...
2024-07-05 18:15:15.950892 [debug] ip6tables default policies available, setting policy to drop...
Error: error sending query: Error creating socket
2024-07-05 18:15:16.005710 [debug] Having issues resolving name 'nl-amsterdam.privacy.network', sleeping before retry...
Previous Version:
2024-07-05 18:18:13.740616 [info] NAME_SERVERS defined as '84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1'
2024-07-05 18:18:13.774684 [debug] iptables default policies available, setting policy to drop...
2024-07-05 18:18:13.808571 [debug] ip6tables default policies available, setting policy to drop...
2024-07-05 18:18:13.839745 [debug] Adding 84.200.69.80 to /etc/resolv.conf...
2024-07-05 18:18:13.872751 [debug] Adding 37.235.1.174 to /etc/resolv.conf...
2024-07-05 18:18:13.902384 [debug] Adding 1.1.1.1 to /etc/resolv.conf...
2024-07-05 18:18:13.929604 [debug] Adding 37.235.1.177 to /etc/resolv.conf...
2024-07-05 18:18:13.957269 [debug] Adding 84.200.70.40 to /etc/resolv.conf...
2024-07-05 18:18:13.985621 [debug] Adding 1.0.0.1 to /etc/resolv.conf...
2024-07-05 18:18:29.414877 [debug] DNS operational, we can resolve name 'nl-amsterdam.privacy.network' to address '212.102.35.38 195.78.54.74 143.244.41.229'
2024-07-05 18:18:29.478211 [debug] DNS operational, we can resolve name 'www.privateinternetaccess.com' to address '104.18.36.183 172.64.151.73'
2024-07-05 18:18:44.862185 [debug] DNS operational, we can resolve name 'serverlist.piaservers.net' to address '104.18.159.201 104.19.240.167'
Ahh, I know what's going on! I am using the default bridge; I'm assuming all you guys with issues are not. If I switch to a user-defined bridge then I hit issues with name resolution. The reason is that when using a user-defined bridge, the injected nameserver in /etc/resolv.conf is set to Docker's internal resolver for containers, not the name servers defined on the host.
So to illustrate: here is /etc/resolv.conf inside the container when running on the default bridge; this works and the container will start:
[root@009690be2587 /]# cat /etc/resolv.conf
nameserver 1.1.1.1
Here is /etc/resolv.conf inside the container when running on a user-defined bridge; this does NOT work - 127.0.0.11 cannot resolve external names:
[root@0685fa21f2e8 /]# cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
EDIT - I'm still digging into this, but the above is not quite true: 127.0.0.11 CAN resolve external names. The issue is that iptables is blocking 127.0.0.11, so it's unable to perform name resolution. I'm now working on a rule to permit this whilst keeping things locked down.
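For reference, a rule along these lines could open a hole for the embedded resolver while keeping the drop policies. This is only a sketch of the general approach, not the actual fix that ships in the image; it assumes Docker's embedded DNS listening on 127.0.0.11:53 via the loopback interface:

```shell
# Sketch only - the actual rules in the fixed image may differ.
# Permit DNS to Docker's embedded resolver on loopback while the
# default OUTPUT/INPUT policies remain DROP.
iptables -A OUTPUT -o lo -d 127.0.0.11 -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -o lo -d 127.0.0.11 -p tcp --dport 53 -j ACCEPT
iptables -A INPUT  -i lo -s 127.0.0.11 -p udp --sport 53 -j ACCEPT
iptables -A INPUT  -i lo -s 127.0.0.11 -p tcp --sport 53 -j ACCEPT
```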
Same issue on privoxy and sabnzbd
Fully aware - once it's fixed it will be pushed out to all images.
i am assuming all you guys with issues are not [using the default bridge]
This is indeed the case for me (I use a specific, manually created bridge)
Yes using a custom network.
I also edited my resolv.conf file in the console and it worked. Changing it to nameserver 1.1.1.1 fixed it.
I also edited my resolv.conf file in the console and it worked. Changing it to nameserver 1.1.1.1 fixed it.
This is a neat workaround, thanks: the change will be preserved in the container between restarts, but ultimately the whole image will be replaced (and therefore the container too). Would recommend 👍
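For anyone wanting to try it, the workaround boils down to replacing the injected embedded-DNS entry with a public resolver. A minimal sketch against a scratch copy of the file (inside the container you would edit /etc/resolv.conf itself, e.g. via the container console):

```shell
# Sketch of the workaround against a scratch copy of the file;
# inside the container the real path is /etc/resolv.conf.
demo=/tmp/resolv.conf.demo

# What a user-defined bridge injects (Docker's embedded resolver):
printf 'nameserver 127.0.0.11\noptions ndots:0\n' > "$demo"

# The workaround: point at a public resolver instead.
printf 'nameserver 1.1.1.1\n' > "$demo"

cat "$demo"
```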
I also edited my resolv.conf file in the console and it worked. Changing it to nameserver 1.1.1.1 fixed it.
This is a neat workaround, thanks: the change will be preserved in the container between restarts, but ultimately the whole image will be replaced (and therefore the container too). Would recommend 👍
No, on Unraid it's not preserved after restarting the container. You have to edit the file while the container is started, then it works automatically. That's what happens in my case.
No, on Unraid it's not preserved after restarting the container
Ah, I don't know Unraid. If there is a dockerd running behind it, the setting that controls this is --rm
→ remove the container when it stops (not set by default, i.e. containers stay). This is useful for one-shot containers, tests, etc.
In the grand scheme of things this doesn't make much difference, because a container can always be recreated from its image - except in our case, where it is useful for the container to stay.
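To illustrate the flag with the plain docker CLI (nothing Unraid-specific; the image and container names are arbitrary):

```shell
# Default behaviour: the container persists after it stops, so in-container
# edits (e.g. to /etc/resolv.conf) survive a stop/start cycle.
docker run -d --name demo alpine sleep 300
docker stop demo      # container remains; 'docker start demo' reuses it
docker rm demo        # only removed when you ask

# With --rm: the container is deleted as soon as it exits, so any edits
# made inside it are lost with it.
docker run --rm -d --name demo-rm alpine sleep 300
docker stop demo-rm   # container is removed automatically
```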
Ver 2.1.1-6-05 update has fixed this issue for me.
Thank you binhex.
Same here. Swapping from the temporary binhex/arch-delugevpn:2.1.1-6-03 back to binhex/arch-delugevpn and then doing a force update in Unraid has gotten it back to a working state (for my arch-sabnzbdvpn as well, obviously). Thanks binhex.
Hi,
I recently encountered an issue where my binhex-delugevpn (and also binhex-sabnzbdvpn) failed and started spamming
Error: error sending query: Error creating socket
in the supervisord.log. I am on Unraid, and I did a full reinstall of the delugevpn Docker image, including removing the old image, the template, and the old config folder too. This is happening with both the openvpn and wireguard configs, so I'm not sure if that's the cause or not.
Here's my command. I did have to alter a couple of ports, but I hope that's not the cause:
/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='binhex-delugevpn' --net='proxynet' --privileged=true -e TZ="XXX" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='XXX' -e 'VPN_PASS'='XXX' -e 'VPN_PROV'='pia' -e 'VPN_CLIENT'='wireguard' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='no' -e 'ENABLE_SOCKS'='no' -e 'SOCKS_USER'='admin' -e 'SOCKS_PASS'='XXX' -e 'LAN_NETWORK'='XXX/24' -e 'VPN_INPUT_PORTS'='' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'VPN_OPTIONS'='' -e 'ENABLE_STARTUP_SCRIPTS'='no' -e 'USERSPACE_WIREGUARD'='no' -e 'NAME_SERVERS'='84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'DELUGE_ENABLE_WEBUI_PASSWORD'='yes' -e 'PUID'='99' -e 'PGID'='100' -e 'UMASK'='000' -p '8112:8112/tcp' -p '8118:8118/tcp' -p '9119:9118/tcp' -p '58846:58846/tcp' -p '58947:58946/tcp' -p '58947:58946/udp' -v '/mnt/user/appdata/binhex-delugevpn':'/config':'rw' -v '/mnt/user/Downloads/':'/data':'rw' -v 'binhex-shared':'/shared':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'
This error loops infinitely and the service never fully spins up.