bitchecker opened this issue 4 years ago
This is not intended to be used with `--net host`. As you said, the internal dnsmasq binds port 53 on its own IP, so you need to expose port 53 using `--publish`.
But if you already have a DNS service running on your host, the `--publish` may fail because the port is already in use by another DNS service (like the dnsmasq started by libvirt or NetworkManager).

Is NetworkManager running?

```
sudo netstat -plunt | grep -w 53
```
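If something does show up on port 53, the owning process can be pulled out of that output; a minimal sketch (the sample lines below stand in for real `netstat -plunt` output on a host running libvirt's dnsmasq):

```shell
# Sketch: list processes bound to port 53 in `netstat -plunt`-style output.
# The sample text below stands in for real output; addresses and PIDs are made up.
netstat_output='tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 1090/dnsmasq
udp 0 0 192.168.122.1:53 0.0.0.0:* 1090/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 845/sshd'
# Match the local-address column ($4) ending in :53 and print the PID/program column.
printf '%s\n' "$netstat_output" | awk '$4 ~ /:53$/ { print $NF }' | sort -u
```

This prints `1090/dnsmasq` for the sample above, i.e. the process that would block the `--publish`.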
Hi,
libvirt, podman and NetworkManager are running, and NetworkManager is mounted as a ro volume on the container:

```
tcp  0  0 $_virbr0_address_:53          0.0.0.0:*  LISTEN  1090/dnsmasq
udp  0  0 $_dalidock_container_ip_:53   0.0.0.0:*          1199/podman
udp  0  0 $_virbr0_address_:53          0.0.0.0:*          1090/dnsmasq
```
I don't understand why podman is listening on `$_dalidock_container_ip_`; it should be the IP address of your network bridge. Maybe this is specific to podman.
It's the same configuration as with docker: on those container management systems, you also get an IP address from the docker subnet.
If I run:

```
podman run \
    --name dalidock \
    --net host \
    --cap-add NET_ADMIN \
    --publish $_LIBVIRT_HOST_:53:53/udp \
    --publish 80:80 \
    --env DNS_DOMAIN=my.local.env \
    --env LB_DOMAIN=my.local.env \
    --volume /run/NetworkManager:/run/NetworkManager:ro \
    --volume /var/run/libvirt:/var/run/libvirt:ro \
    lionelnicolas/dalidock
```
and I disable the internal dnsmasq used for the NAT subnet, I can see the virtual machines that were detected:

```
dalidock[14]: [INFO] wait for domain test QEMU guest agent to reply
dalidock[14]: [INFO] name=test hostname=test ip=None net=br0 domain=my.local.env use_wildcard=False
```
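As a side note, podman ignores `--publish` mappings when `--net host` is used (the container shares the host's network namespace), so a bridge-networked variant of the command above might look like this; a sketch keeping the same image, volumes and environment, with `$_LIBVIRT_HOST_` being the same placeholder as above:

```shell
# Sketch: same dalidock invocation, but on the default bridge network,
# so that the --publish mappings actually take effect.
podman run \
    --name dalidock \
    --cap-add NET_ADMIN \
    --publish "$_LIBVIRT_HOST_:53:53/udp" \
    --publish 80:80 \
    --env DNS_DOMAIN=my.local.env \
    --env LB_DOMAIN=my.local.env \
    --volume /run/NetworkManager:/run/NetworkManager:ro \
    --volume /var/run/libvirt:/var/run/libvirt:ro \
    lionelnicolas/dalidock
```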
At this point, will it already be reachable as `$_hostname_.$_domain_` (in this case `test.my.local.env`)? Are metadata always necessary, or only to set up custom DNS/LB entries?

PS: of course `ip=None` is just because the virtual machine is not completely started.
Yes, metadata and labels are only needed for custom DNS/LB.
So in that case, all these commands should return the correct IP:

```
# using `host` from the `bind-utils` package on RedHat-like or `bind9-host` on Debian-like
host test ${LIBVIRT_HOST}
host test.my.local.env ${LIBVIRT_HOST}
```

Or using `dig`:

```
dig @${LIBVIRT_HOST} test.my.local.env
```
I see that you have defined a qemu guest agent in your VM config (`wait for domain test QEMU guest agent to reply`). If the guest agent is not running inside the VM, dalidock will time out when trying to get the IP address. If it times out, no DNS record will be created, as there is no known IP to associate. You can customize the timeout by adding:

```
--env LIBVIRT_IP_TIMEOUT=120
```
If you want to make dalidock use libvirt DHCP leases instead of the guest agent, you'll need to remove the `org.qemu.guest_agent.0` channel from the VM config.
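For reference, the guest-agent channel is a `<channel>` device in the libvirt domain XML (removable with `virsh edit <domain>`); a minimal sketch of detecting it, where the sample XML stands in for real `virsh dumpxml <domain>` output:

```shell
# Sketch: detect the guest-agent channel in libvirt domain XML.
# The sample below stands in for real `virsh dumpxml <domain>` output.
domain_xml='<devices>
  <channel type="unix">
    <target type="virtio" name="org.qemu.guest_agent.0"/>
  </channel>
</devices>'
if printf '%s\n' "$domain_xml" | grep -q 'org.qemu.guest_agent.0'; then
    echo "guest agent channel present"
else
    echo "guest agent channel absent"
fi
```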
Yes, I always use the qemu guest agent, but this virtual machine was a simple test that was still booting, and I still have to configure it afterwards. I think it could fail while I'm configuring it, but after a reboot it will work properly; the timeout option can be very useful.

I will try to configure it and run some tests, also changing the domain and using an LB domain.
Ok!

I'm thinking about adding a fallback to DHCP lease discovery if the guest agent fails; that would help in your case.
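Such a fallback could, roughly, parse `virsh net-dhcp-leases` output to find a guest's IP by hostname; a hypothetical sketch, where the sample lines stand in for the real command's table rows and the column layout is an assumption:

```shell
# Hypothetical sketch: look up a guest IP by hostname in `virsh net-dhcp-leases`-style
# output. The sample rows below are made up; MACs and IPs are illustrative.
leases='2024-01-01 12:00:00  52:54:00:aa:bb:cc  ipv4  192.168.122.50/24  test   -
2024-01-01 12:05:00  52:54:00:dd:ee:ff  ipv4  192.168.122.51/24  other  -'
hostname=test
# Columns: expiry date, expiry time, MAC, protocol, IP/prefix, hostname, client-id.
printf '%s\n' "$leases" | awk -v h="$hostname" '$6 == h { sub(/\/.*/, "", $5); print $5 }'
```

For the sample rows this prints `192.168.122.50`, the lease held by the guest named `test`.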
Hi, I'm trying to run this service, but I see that the DNS server started inside the container is bound to its internal IP address, so it's not possible to have a DNS service exposed to the network. With this limitation, it's not possible to configure a container/virtualization host on a network with services published via domain names.

As you can see from `netstat`, the service is bound to the internal container IP address, which is not reachable from the network, even with the `publish` constraint. I've also tried not specifying a binding address, but I get an error because there are other services running that are related to libvirt.