Open Sarevok933219 opened 3 months ago
Hi,
So you are using the host's WireGuard interface, which becomes functional once you restart the Docker container? Can you please enable trace logging and post the container startup logs?
You could also test whether the interface still works while the container is stopped and check whether the connections break due to the startup process.
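For example, something like the following run on the host could narrow it down. The interface name `wg0` and the peer address `10.13.13.2` are assumptions taken from the logs below, and the commands obviously need a live WireGuard host:

```shell
# Stop only the wg-portal container, leaving the wireguard container running.
docker compose stop wg-portal

# If the latest-handshake timestamps keep advancing, the tunnel itself is fine.
wg show wg0

# Try to reach a known peer address from the server side.
ping -c 3 10.13.13.2

# Bring the portal back up and watch whether connectivity breaks at startup.
docker compose start wg-portal
```

If the tunnel keeps working while wg-portal is stopped, the breakage is most likely something the portal does during its startup sequence.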
Good day. I'm starting all services from a single docker-compose.yaml file. For the WireGuard service I use the lscr.io/linuxserver/wireguard image. There is nothing interesting in the WireGuard logs:
```
.:53
CoreDNS-1.11.1
linux/amd64, go1.21.8,
**** Found WG conf /config/wg_confs/wg0.conf, adding to list ****
**** Activating tunnel /config/wg_confs/wg0.conf ****
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
Warning: AllowedIP has nonzero host part: 10.13.13.5/24
Warning: AllowedIP has nonzero host part: 10.13.13.2/24
Warning: AllowedIP has nonzero host part: 10.13.13.3/24
Warning: AllowedIP has nonzero host part: 10.13.13.4/24
[#] ip -4 address add 10.13.13.1/32 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] ip -4 route add 192.168.50.0/24 dev wg0
[#] ip -4 route add 172.21.0.0/24 dev wg0
[#] ip -4 route add 10.13.13.0/24 dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth+ -j MASQUERADE; iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
**** All tunnels are now active ****
[ls.io-init] done.
```
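As a side note, the `AllowedIP has nonzero host part` warnings in the log above suggest that the peer entries list client addresses with the subnet mask (`/24`) instead of a single-host mask. A sketch of the usual server-side fix, assuming a standard `wg0.conf` layout (the public key is a placeholder):

```ini
# Server-side wg0.conf peer entry (PublicKey is a placeholder).
# A single client address should normally be /32; with /24 every
# peer claims the whole 10.13.13.0/24 range, which is ambiguous.
[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.13.13.2/32
```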
Seems it's all OK. In the logs of the wg-portal service I also didn't find anything suspicious (no errors). In the UI I see status "connected". I will try to add more information. Could I send my docker-compose.yaml file?
Just post the contents of the docker-compose.yml file here and remove any sensitive information like passwords, hostnames or public IP addresses.
```yaml
version: "3.6"
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      # (list entries lost in the original paste)
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: always
  wg-portal:
    image: wgportal/wg-portal:v2.0.0-alpha.2
    container_name: wg-portal
    restart: always
    depends_on:
      # (list entries lost in the original paste)
```
Please, can you tell me what the code behind the APPLY PEER DEFAULTS button does? Pressing this button is the only thing that takes effect (restores normal operation).
![image](https://github.com/h44z/wg-portal/assets/56883129/1d240039-b753-4ffc-9253-4a0cf0e3c05a)
I think I solved it. The problem was that wg-portal brings the wg0 interface up, and the wireguard service then could not bring it up a second time (because it was already started). In config.yaml I changed the restore_state option from its default true to false, and created one more interface, wg1 (although I think everything would also work with the default one), which I configured and use as the main one.

For some reason this works only on Ubuntu 22.04 LTS; on CentOS 7 it has no effect.
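The change described above can be sketched as a config.yaml fragment. Note that the exact placement of `restore_state` in wg-portal's configuration tree is an assumption here, so check the documentation for your wg-portal version:

```yaml
# wg-portal config.yaml (fragment) -- key placement is an assumption
core:
  # Do not restore/bring up interfaces from the database on startup;
  # the linuxserver/wireguard container manages the interface itself.
  restore_state: false
```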
Hello. I'm continuing to test v2.0.0-alpha.2. Many bugs are fixed, thanks. Once WireGuard is up, it works stably. But I found one problem. Rebooting the host, or restarting the containers with

```
docker compose down && docker compose up -d
```

leaves WireGuard without network connectivity (although no changes have been made to the config files, and even the external IP does not change). There is no traffic through the clients or the server (even ICMP packets don't get through). Restarting the services has no effect. The only way to recover is to log in to the web UI, go to the server config (Peer Defaults section) and, without making any changes, press the APPLY PEER DEFAULTS button. After that, everything works normally (until the next host reboot or service restart). What causes such incorrect behavior of the services?