ShiromMakkad / LedFxDocker

A Docker Container for LedFx.

Update for the Latest LedFX dev branch and auto discovery #1

Closed: spiro-c closed this 3 years ago

ShiromMakkad commented 3 years ago

Thanks for the PR!

I really like the auto discovery feature, but is there any way to do this without network_mode: host? I don't want to add more permissions than necessary, and I want to keep as much of the container's activity containerized as possible. Is there a specific port we can access in the container instead? Also, how can the user define the auto-discovery URL, and what apps are compatible with it? Finally, please add a comment above the Avahi stuff with maybe a link to how you came up with those commands or some explanation of what's happening.

Regarding moving to the LedFX dev branch, I actually pushed a build with this change, but I got multiple bugs while doing so. I'm moving to LedFx/LedFx's master branch, and I've been told that once the dev branch is stable, it will be moved to the master branch.

spiro-c commented 3 years ago

As far as I know, the only other way is to pass the host dbus socket into the container, which requires running the container with --privileged: https://github.com/mviereck/x11docker/issues/271. Regarding the LedFx/LedFx dev branch, I'm using it at the moment and it's running with no problems.
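
For reference, a minimal compose sketch of that dbus-passing approach could look like the following (untested here; the image name is only an example, and --privileged grants broad access to the host):

version: '3'

services:
  ledfx:
    image: spirocekano/ledfx:dev     # example image name, substitute your own build
    privileged: true                 # lets the container talk to the host system bus
    volumes:
      - /var/run/dbus:/var/run/dbus  # share the host D-Bus socket that avahi-daemon listens on
    ports:
      - 8888:8888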

spiro-c commented 3 years ago

And according to https://www.reddit.com/r/docker/comments/ebfic4/how_to_access_avahi_dns_resolver_zeroconf_from/ and the Docker documentation, the other solution is using macvlan or ipvlan: https://docs.docker.com/network/macvlan/. Either way, this can stay as an option: anyone who wants auto-discovery can enable host networking.
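
For anyone who wants to try the macvlan route, the network setup is roughly the following (a sketch; the subnet, gateway, and parent interface have to be replaced with values from your own LAN):

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  ledfx_macvlan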

ShiromMakkad commented 3 years ago

I like the documentation!

I'm going to look into getting Macvlan working without specifying a host network adapter or subnet so that the docker-compose file will work on any system. If that doesn't work, we can use network_mode: host.

How can I test the network discovery? Do you have a client in mind you want to use this feature with?

ShiromMakkad commented 3 years ago

I tried using macvlan, but it doesn't work with Wifi cards, it makes the docker compose file significantly more complex, and it will no longer work without configuration. The user will have to specify their subnet and network card (this is especially a nuisance on Windows), and I don't want to make the setup more complicated.

Unfortunately, network_mode: host doesn't work on Docker for Windows or Mac. Since they virtualize the Linux kernel, network_mode: host only exposes the avahi daemon on the virtualized host, not on the actual system itself. This means that LedFx is no longer accessible on these systems with network_mode: host. docker/for-mac#1031 has more details on this bug.

I don't want to merge a PR that drops support for Windows and Mac or drops support for Wifi cards. If you can find a workaround for the network_mode: host bug, I'd be happy to merge this PR, but I haven't found a fix.

spiro-c commented 3 years ago

> I like the documentation!
>
> I'm going to look into getting Macvlan working without specifying a host network adapter or subnet so that the docker-compose file will work on any system. If that doesn't work, we can use network_mode: host.
>
> How can I test the network discovery? Do you have a client in mind you want to use this feature with?

The latest dev branch has the ability to discover WLED devices on the network. If we don't pass the host network, it doesn't discover devices; if we do pass the host network, it discovers them but is unable to resolve the .local hostnames. Avahi helps resolve hosts like wled.local to their IP addresses. Either way, passing the host network can stay optional, with a note that it doesn't work on Windows because of the limitations of Docker for Windows, where all devices need to be added manually. On Unix systems it works and auto-discovers all the WLED devices.
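
As a quick way to check what mDNS discovery can see, the avahi command-line tools can be run on the host or in the container (a sketch; it assumes avahi-utils is installed and that WLED announces itself under the _wled._tcp service type):

avahi-browse -rt _wled._tcp   # list WLED devices announced via mDNS
avahi-resolve -n wled.local   # resolve a .local hostname to an IP address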

[screenshot: LedFX]

L3H0 commented 3 years ago

@spiro-c Hi, I have built an image from your updated Dockerfile (on a Synology NAS) and I can't successfully run the container. This is my log when I start the container:

W: [pulseaudio] main.c: Running in system mode, but --disallow-module-loading not set.
N: [pulseaudio] main.c: Running in system mode, forcibly disabling SHM mode.
I: [pulseaudio] main.c: Daemon startup successful.
mkdir: cannot create directory ‘/app/ledfx-config’: File exists
Traceback (most recent call last):
  File "/usr/local/bin/ledfx", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/site-packages/ledfx/main.py", line 259, in main
    setup_logging(args.loglevel)
  File "/usr/local/lib/python3.8/site-packages/ledfx/main.py", line 86, in setup_logging
    file_handler = RotatingFileHandler(
  File "/usr/local/lib/python3.8/logging/handlers.py", line 148, in __init__
    BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/local/lib/python3.8/logging/handlers.py", line 55, in __init__
    logging.FileHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/local/lib/python3.8/logging/__init__.py", line 1143, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/local/lib/python3.8/logging/__init__.py", line 1172, in _open
    return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: '/root/.ledfx/LedFx.log'
Sentry is attempting to send 0 pending error messages
Waiting up to 2 seconds
Press Ctrl-C to quit

How can I create this file, or what else can I do?

spiro-c commented 3 years ago

@L3H0 Can you try this image: https://hub.docker.com/r/spirocekano/ledfx? I tested it on Windows 10 (Docker Desktop with Ubuntu WSL), bare Ubuntu 20.04, and an Ubuntu 18.04 VM in Proxmox, and it works on all of them. I pass the audio over snapcast, so I don't know if everything works with the fifo pipe. I run it with network_mode: host, and it also works in bridge mode.

Here is an example docker-compose file I used:

version: '3'

services:
  ledfxdev:
    image: spirocekano/ledfx:dev
    container_name: ledfx-dev
    hostname: ledfx-dev
    network_mode: host
    environment:
      - HOST=ip of the snapcast server
#      - FORMAT=-r 44100 -f S16_LE -c 2
    ports:
      - 8888:8888
    volumes:
      - ./ledfx-config:/root/.ledfx
#      - ./audio:/app/audio

L3H0 commented 3 years ago

@spiro-c Yes, the image is OK. But I have a problem with the audio :/ I don't know how to "save" the mopidy stream to a "file" and then push it to snapcast :/ but this has nothing to do with this issue.

spiro-c commented 3 years ago

@L3H0 In your mopidy.conf, the audio output needs to be:

[audio]
#output = autoaudiosink
output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! filesink location=/tmp/snapfifo

and in the docker-compose.yml for ledfx you need to change:

version: '3'

services:
  ledfxdev:
    image: spirocekano/ledfx:dev
    container_name: ledfx-dev
    hostname: ledfx-dev
    network_mode: host
    environment:
#      - HOST=ip of the snapcast server
      - FORMAT=-r 44100 -f S16_LE -c 2
    ports:
      - 8888:8888
    volumes:
      - ./ledfx-config:/root/.ledfx
      - /tmp/snapfifo:/app/audio

L3H0 commented 3 years ago

@spiro-c Thx, about an hour ago I read a few sites and got the setup to work :) Now I need to set up a RPi with mopidy and pass the audio from it to the audio receiver.

ShiromMakkad commented 3 years ago

@L3H0 Just as a heads up, I tried running @spiro-c's container on a Raspberry Pi 4, but there were a lot of stuttering issues despite there being plenty of RAM and CPU power left. There's some issue with the dev branch and running on a Raspberry Pi, at least on Balena OS.

Let me know if you get it working; I'd love to see a Mopidy example too!

spiro-c commented 3 years ago

@ShiromMakkad Can you try running the new containers from the Virtual branch? https://github.com/spiro-c/LedFxDocker-Virtual I run it on a Raspberry Pi 3B+ with mopidy and snapcast on the same Pi, with 4 WLED devices, and I don't have any problems; there's even still some CPU headroom left on the Pi.

[screenshot: pi]

ShiromMakkad commented 3 years ago

@spiro-c Sorry for taking so long, I'm pretty busy right now, but I just tested your image and it does work on my Pi. The virtual branch must help things a lot. The network discovery feature didn't work, but it didn't cause any errors either.

We can add a separate tag for the virtual branch with these improvements, but when moving to the virtual branch, I noticed that my base image doesn't support Python 3.9. I see you made a workaround using the official python image, which I prefer using.

I was thinking we could make a Dockerfile.virtual with both the build Dockerfile and installation Dockerfile you have and put it on a different tag.

What are your thoughts on this?
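
A rough sketch of what such a combined Dockerfile.virtual could look like (untested; the branch name, build dependencies, and entrypoint are assumptions):

# build stage: compile the wheels once
FROM python:3.9-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends build-essential git
RUN pip wheel --wheel-dir /wheels "git+https://github.com/LedFx/LedFx@virtual"

# installation stage: install only from the prebuilt wheels
FROM python:3.9-slim
COPY --from=builder /wheels /wheels
RUN pip install --no-index --find-links=/wheels ledfx
EXPOSE 8888
CMD ["ledfx"]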

spiro-c commented 3 years ago

@ShiromMakkad I don't have any problem with that. Yes, we can have the Virtual tag and keep both of them for now. I'll keep my fork for testing anyway, but as far as I know the goal of LedFx is to eventually make Virtual the master.

As for the two Dockerfiles, I use them just to speed up the build process of the images and to be able to use GitHub Actions to build the image. I rebuild the venv image locally only when there's a change in requirements.txt, since building the wheels is the most time-consuming part, and push it to Docker Hub; after that, a simple push to GitHub rebuilds the main image.
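
For context, the GitHub Actions side of that split can be a very small workflow along these lines (a sketch; the secret names, Dockerfile, and tag are assumptions):

name: docker
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/setup-buildx-action@v1
      - uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: spirocekano/ledfx:dev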

spiro-c commented 3 years ago

One more side note: I did manage to get mopidy to output to the sound card and serve the fifo for LedFx at the same time, by using the tee element from GStreamer's gst-launch-1.0 syntax and just changing the output in mopidy.conf:

[audio]
# output to audio card and fifo
output = tee name=t  ! queue ! audioresample ! autoaudiosink t. ! queue ! audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! wavenc ! filesink location=/tmp/snapcast/snapfifo
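
One caveat with filesink: if the target path does not already exist as a named pipe (snapserver normally creates its own /tmp/snapfifo), GStreamer will just write a regular, ever-growing file there, so the fifo may need to be created first (a sketch, assuming the /tmp/snapcast path used above):

mkdir -p /tmp/snapcast
mkfifo /tmp/snapcast/snapfifo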

ShiromMakkad commented 3 years ago

If you want to merge the Avahi daemon changes into Dockerfile.dev, I'd be happy to merge it. Otherwise, I'll close this PR.

Regarding the Virtuals branch, the devs said they want to merge it into the dev branch, so when that happens, I'll update Dockerfile.dev to use Python 3.9.

spiro-c commented 3 years ago

There is no point to this PR since you have the dev branch; I will close it.