xversial opened 1 year ago
Same issue running the current image in Docker containers on a Synology NAS, macOS Sonoma 14.1.1, and Windows 11. In each case the container is receiving the connections but refusing them.
Windows command line:
docker run --rm -v ${PWD}\config:/config -v ${PWD}\access:/access -p 69:69/udp -p 3000:3000 -p 8080:80 --name netboot netbootxyz/netbootxyz
Weird, I also have the same issue with Docker, but on my Synology NAS I can get it working by using the host network instead of bridge. One of the things I tried was setting up a test bridge with manually configured IP settings (no connection at all), but running with the host network seems to work.
The only issue is getting port 80 moved to 8080, since I use port 80 for a website. Is there a way to change this when using the host network, inside the Linux image or via some config file? Right now it keeps spawning a new nginx with a new PID every second.
Found a solution to the port 80 problem: there is a config file called "default", which can be found in the folder config --> nginx --> site.confs. Now I can use netboot.xyz via the host network as a workaround.
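For reference, the edit in that file is just the listen directive; a minimal sketch, assuming the stock config listens on port 80:

# config --> nginx --> site.confs --> default (path as described above)
server {
    # changed from "listen 80;" so the host's existing website keeps port 80
    listen 8080;

    # ...rest of the stock server block left unchanged...
}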
Same issue as OP, same Synology docker host as @kfsone and @GVKAWESOME. I can also get this working by using a docker macvlan network and assigning the container an IP from a subnet directly on my router; see the example below.
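Roughly like this; the parent interface, subnet, and addresses are examples from my setup, so adjust them to yours:

docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 netboot-macvlan

docker run -d --network netboot-macvlan --ip 192.168.1.240 \
  -v ${PWD}/config:/config -v ${PWD}/assets:/assets \
  --name netboot netbootxyz/netbootxyz

No -p mappings are needed here, since the container answers directly on its own IP.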
The docker bridged network issue is, I think, a Synology "feature". I've not yet gotten to the bottom of it, but I suspect the typical NAT helper and iptables rules you'd find in a more normal Linux environment aren't in place.
@KillerKelvUK Docker on Mac has to use a VM to host the Linux kernel for the containers, so I moved from the Mac to a Windows machine and had the same problem. I then tried it with Docker inside an Ubuntu 20.04 VM on my ESXi server, where it worked for some parts of my network and not others, so I've settled on just using the USB install for the moment (https://netboot.xyz/docs/booting/usb/).
One way to get it to work is to use -P or the host network, but the latter isn't going to help if you want to serve your entire network and not just the machine hosting the container.
I'm having this issue too. What do you mean by -P?
Bridged networking with ports specified via -p xxx:xxx doesn't work on Synology... for me at least. Host network, as in --network="host", works fine; just make sure no ports are already taken by Synology, as the container won't start if there is overlap.
Alright, thank you!
@drmutaba -P (capital) publishes all exposed ports to random host ports. If you're actively using the Docker features of the NAS, the DSM GUI is OK, but take a look at portainer-ce for a fully featured container manager.
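With -P you don't pick the host ports yourself, so something like this shows what you actually got (image name as used earlier in the thread):

docker run -d -P --name netboot netbootxyz/netbootxyz
docker port netboot    # lists the randomly assigned host ports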
The DSM GUI is a crock. Portainer is okay, but I prefer sticking to the terminal and doing everything in compose.
-P doesn't work for me on my Synology. I haven't used it previously, but it looks like it just publishes the entire exposed port range, which just won't work on DSM.
I wonder if the NET_ADMIN capability, or some other capability, could resolve this issue; has anyone tried them? Host mode is considered insecure since it gives the container full access to the D-Bus, though the upside is better performance.
https://docs.docker.com/engine/reference/run/#network-settings
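If anyone wants to test that, the flag would look like this (untested here, and whether it actually helps is the open question):

docker run --rm --cap-add NET_ADMIN \
  -p 69:69/udp -p 3000:3000 -p 8080:80 \
  --name netboot netbootxyz/netbootxyz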
@GVKAWESOME Tried it on Mac and Windows/WSL, but I still couldn't see the tftp responses from outside the container.
Hmm - I wonder if the problem isn't the packets coming out of the container on the randomly selected port, but rather the tftp client trying to respond back on those ports:
By default, all containers have networking enabled and they can make any outgoing connections. The operator can completely disable networking with docker run --network none which disables all incoming and outgoing networking. In cases like this, you would perform I/O through files or STDIN and STDOUT only.
A tcpdump of the docker0 interface, looking only at tftp traffic, shows that all packets have BAD UDP CKSUM. Researching the in.tftpd process and the "connection refused" error mostly points back to firewall rules.
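For context, this is the kind of capture I mean; port 69 only catches the initial requests, since the data connection moves to an ephemeral port:

# watch TFTP requests crossing the docker bridge
sudo tcpdump -nvvi docker0 udp port 69
# note: "bad udp cksum" on locally generated packets can also just be checksum offload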
I'm still leaning towards this being a NAT and iptables issue for my scenario on Synology, unless anyone can share an alternative. Reading this has shown that Synology aren't shipping the nf_nat_tftp kernel module in their latest DSM build for my NAS model, and thus the iptables TFTP helper cannot be added to the rules.
I could be barking up the wrong tree here though admittedly.
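For comparison, on a stock Linux Docker host the TFTP helper gets wired up roughly like this (a sketch of what appears to be missing on DSM; it won't work there if the module isn't shipped):

# load the TFTP connection-tracking and NAT helper modules
sudo modprobe nf_conntrack_tftp
sudo modprobe nf_nat_tftp
# explicitly attach the helper to inbound TFTP (needed where automatic helper assignment is disabled)
sudo iptables -t raw -A PREROUTING -p udp --dport 69 -j CT --helper tftp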
Hi, I did some captures with Wireshark by sending a fixed payload with netcat. It seems Docker is filtering the outgoing UDP packets.
netboot.xyz Docker container: 192.168.253.234, with port 69 exposed as - "192.168.253.234:69:69/udp"
client: 192.168.253.7
netcat (payload decoded below): echo AAFuZXRib290Lnh5ei5lZmkAb2N0ZXQAdHNpemUAMABibGtzaXplADE0NjgA | base64 -d | nc -u 192.168.253.234 69
wireshark 1: listening on the netboot.xyz server machine, any interface
wireshark 2: listening on the client machine, any interface
netcat: no response
wireshark 1 / wireshark 2: [capture screenshots]
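For what it's worth, that base64 payload decodes to a hand-built TFTP read request (opcode 0x0001, RRQ) for netboot.xyz.efi in octet mode with the tsize and blksize options, i.e. the same first packet a real PXE client would send. You can inspect it with:

echo AAFuZXRib290Lnh5ei5lZmkAb2N0ZXQAdHNpemUAMABibGtzaXplADE0NjgA | base64 -d | xxd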
I noted that the linuxserver.io variant of netboot includes an additional env var to inform the in.tftpd process of the port range to use for connections, coupled with an additional matching -p statement for that range. It still hasn't solved my issue, but the thread discussing the feature suggested it resolved a similar issue to this for them. Will see if I can find the link again.
Edit: https://github.com/linuxserver/docker-netbootxyz/issues/11
Edit2: To save others reading it, in addition to pinning the connections to specific ports there is also a suggestion that the Windows tftp client is a problem and isn't responding to the server. Haven't tested that myself yet though.
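Translated into a plain docker run, the idea looks roughly like this; the port range is arbitrary, and TFTPD_OPTS is the variable name used later in this thread for the ghcr.io image, so the LSIO variant may spell it differently:

docker run -d --name netboot \
  -e TFTPD_OPTS="--port-range 30000:30010" \
  -p 69:69/udp \
  -p 30000-30010:30000-30010/udp \
  -p 3000:3000 -p 8080:80 \
  ghcr.io/netbootxyz/netbootxyz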
OK, so this was my fix for this and it works OK at the moment. In the YAML config, make sure you add network_mode: host.
version: "2.1"
services:
  netbootxyz:
    image: lscr.io/linuxserver/netbootxyz:latest
    container_name: netbootxyz
    environment:
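A fuller sketch with that line in place; the environment values and volume paths below are placeholders, not from the original post:

version: "2.1"
services:
  netbootxyz:
    image: lscr.io/linuxserver/netbootxyz:latest
    container_name: netbootxyz
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - ./assets:/assets
    restart: unless-stopped

With network_mode: host any ports: mapping is ignored, so none is listed.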
On the Synology machine, go to File Services, open Advanced, enable the TFTP service, and point the root folder to your config folder for netbootxyz.
For anyone using podman quadlets, I got rootless working with
Network=slirp4netns:port_handler=slirp4netns
The default port handler won't work because the client's ip address is mapped rather than passed through.
Some other magic I discovered but don't need for my own configuration:
Environment=TFTPD_OPTS="--port-range 30000:30010"
PublishPort=30000-30010:30000-30010/udp
The TFTPD_OPTS is passed through supervisord in the ghcr.io/netbootxyz/netbootxyz image
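Putting those pieces together, a rootless quadlet .container file ends up looking roughly like this; the unit description, volume paths, and the optional port-range block are my own choices rather than a reference config:

# ~/.config/containers/systemd/netbootxyz.container
[Unit]
Description=netboot.xyz (rootless)

[Container]
Image=ghcr.io/netbootxyz/netbootxyz:latest
# slirp4netns port handler so the real client IP reaches in.tftpd
Network=slirp4netns:port_handler=slirp4netns
PublishPort=69:69/udp
PublishPort=3000:3000
PublishPort=8080:80
# optional: pin the TFTP data connections to a known range (see above)
Environment=TFTPD_OPTS="--port-range 30000:30010"
PublishPort=30000-30010:30000-30010/udp
Volume=%h/netbootxyz/config:/config
Volume=%h/netbootxyz/assets:/assets

[Install]
WantedBy=default.target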
There is definitely some weirdness between the netboot container and the LSIO container. The netboot container with -p 69 worked fine; the LSIO container, nada. Problem is, the netboot container didn't like custom PUID and PGID settings for downloading of assets (the web server would crash).
I'll see if I can dig out some logs, but I'm loath to use network_mode=host.
Describe the bug
No files are able to be transferred over TFTP. All of the "valid" files return an error saying "Connection refused".

To Reproduce
Steps to reproduce the behavior: just install the docker container on any machine, then on another device TFTP a file from it (for example, see the sketch after this report).
I've only tried this on Docker; I haven't run netbootxyz on bare metal or in a VM yet.
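A placeholder reproduction, assuming the Docker host is reachable at 192.168.1.50 and using curl's TFTP support (the IP and filename are made up for illustration):

# request one of the served boot files over TFTP from another machine
curl -O tftp://192.168.1.50/netboot.xyz.kpxe
# with the bridged-network problem described above, this fails with "Connection refused"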