MaxWinterstein / homeassistant-addons

Cups Server not starting on HA OS 10.0+ #167

Open lead0r opened 1 year ago

lead0r commented 1 year ago

This has been discussed in https://github.com/MaxWinterstein/homeassistant-addons/issues/152

Unfortunately, adding "ulimit -n" to all run tasks did not solve the problem: https://github.com/MaxWinterstein/homeassistant-addons/pull/166
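
For reference, a sketch of what such a fix looks like: cap the file-descriptor limit in the service's run script before launching cupsd (the exact value and script layout in that PR may differ):

#!/bin/sh
# Hypothetical add-on run script: cap the open-file limit before starting
# cupsd, since an "infinity" nofile limit inherited from the container
# runtime is suspected to break CUPS.
ulimit -n 1048576
exec /usr/sbin/cupsd -f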

Current log when starting the add-on (@Changuitox has a very similar, but not identical, log in #166):

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/dbus-setup
cont-init: info: /etc/cont-init.d/dbus-setup exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun avahi (no readiness notification)
services-up: info: copying legacy longrun dbus (no readiness notification)
services-up: info: copying legacy longrun nginx (no readiness notification)
s6-rc: info: service legacy-services successfully started
[avahi] Found user 'avahi' (UID 104) and group 'avahi' (GID 109).
[avahi] Successfully dropped root privileges.
[avahi] avahi-daemon 0.8 starting up.
[avahi] Successfully called chroot().
[avahi] Successfully dropped remaining capabilities.
[avahi] No service file found in /etc/avahi/services.
[avahi] Joining mDNS multicast group on interface eth0.IPv4 with address 172.30.33.5.
[avahi] New relevant interface eth0.IPv4 for mDNS.
[avahi] Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
[avahi] New relevant interface lo.IPv4 for mDNS.
[avahi] Network interface enumeration completed.
[avahi] Server startup complete. Host name is homeassistant.local. Local service cookie is 1866876624.
[dbus] dbus-daemon[82]: [system] Activating service name='org.freedesktop.ColorManager' requested by ':1.1' (uid=0 pid=195 comm="/usr/sbin/cupsd -f ") (using servicehelper)
[dbus] dbus-daemon[82]: [system] Successfully activated service 'org.freedesktop.ColorManager'
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

arffsaad commented 1 year ago

+1, still getting the same issue. Here are my logs:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/dbus-setup
cont-init: info: /etc/cont-init.d/dbus-setup exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun avahi (no readiness notification)
services-up: info: copying legacy longrun dbus (no readiness notification)
services-up: info: copying legacy longrun nginx (no readiness notification)
s6-rc: info: service legacy-services successfully started
[avahi] Found user 'avahi' (UID 104) and group 'avahi' (GID 109).
[avahi] Successfully dropped root privileges.
[avahi] avahi-daemon 0.8 starting up.
[avahi] Successfully called chroot().
[avahi] Successfully dropped remaining capabilities.
[avahi] No service file found in /etc/avahi/services.
[avahi] Joining mDNS multicast group on interface eth0.IPv4 with address 172.30.33.5.
[avahi] New relevant interface eth0.IPv4 for mDNS.
[avahi] Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
[avahi] New relevant interface lo.IPv4 for mDNS.
[avahi] Network interface enumeration completed.
[avahi] Server startup complete. Host name is homeassistant.local. Local service cookie is 451707781.
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

mrcn81 commented 1 year ago

I have the same issue.

jflecool2 commented 1 year ago

HA OS 10.3 does not fix this:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/dbus-setup
cont-init: info: /etc/cont-init.d/dbus-setup exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun avahi (no readiness notification)
services-up: info: copying legacy longrun dbus (no readiness notification)
services-up: info: copying legacy longrun nginx (no readiness notification)
s6-rc: info: service legacy-services successfully started
[avahi] Found user 'avahi' (UID 104) and group 'avahi' (GID 109).
[avahi] Successfully dropped root privileges.
[avahi] avahi-daemon 0.8 starting up.
[avahi] Successfully called chroot().
[avahi] Successfully dropped remaining capabilities.
[avahi] No service file found in /etc/avahi/services.
[avahi] Joining mDNS multicast group on interface eth0.IPv4 with address 172.30.33.12.
[avahi] New relevant interface eth0.IPv4 for mDNS.
[avahi] Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
[avahi] New relevant interface lo.IPv4 for mDNS.
[avahi] Network interface enumeration completed.
[avahi] Server startup complete. Host name is homeassistant.local. Local service cookie is 3155238918.
[dbus] dbus-daemon[83]: [system] Activating service name='org.freedesktop.ColorManager' requested by ':1.1' (uid=0 pid=196 comm="/usr/sbin/cupsd -f ") (using servicehelper)
[dbus] dbus-daemon[83]: [system] Successfully activated service 'org.freedesktop.ColorManager'
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

akslow commented 1 year ago

Maybe the solution is here: https://github.com/moby/moby/issues/45204

"containerd recently changed the default LimitNOFILE from 1048576 to infinity. This breaks various applications in containers such as cups, which cannot start and/or print with the infinite ulimit."

lkadar2015 commented 1 year ago

It still does not start on HAOS 10.3 (Home Assistant 2023.7.1, Supervisor 2023.07.1).

Log:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/dbus-setup
cont-init: info: /etc/cont-init.d/dbus-setup exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun avahi (no readiness notification)
services-up: info: copying legacy longrun dbus (no readiness notification)
services-up: info: copying legacy longrun nginx (no readiness notification)
s6-rc: info: service legacy-services successfully started
[avahi] Found user 'avahi' (UID 104) and group 'avahi' (GID 109).
[avahi] Successfully dropped root privileges.
[avahi] avahi-daemon 0.8 starting up.
[avahi] Successfully called chroot().
[avahi] Successfully dropped remaining capabilities.
[avahi] No service file found in /etc/avahi/services.
[avahi] Joining mDNS multicast group on interface eth0.IPv4 with address 172.30.33.5.
[avahi] New relevant interface eth0.IPv4 for mDNS.
[avahi] Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
[avahi] New relevant interface lo.IPv4 for mDNS.
[avahi] Network interface enumeration completed.
[avahi] Server startup complete. Host name is homeassistant.local. Local service cookie is 4067477012.
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

MaxWinterstein commented 1 year ago

I was able to reproduce this by installing a VM on my Synology. Now I can try to figure out what happens.

Sadly I have no real knowledge about all that D-Bus stuff and CUPS.

@akslow sounds promising, I am just confused about the systemd relation. We start CUPS directly as the entrypoint, which might be the issue; not sure why it is not wrapped like the other s6 services.

yousaf465 commented 1 year ago

@bdraco might be helpful with this.

lkadar2015 commented 1 year ago

Is there any progress with this?

Alex-joomla commented 1 year ago

OK, I can confirm that it doesn't start with the latest OS.

MaxWinterstein commented 1 year ago

Yeah, still want to solve this issue. Just short on time :(

I can reproduce it, so the next step might be to disable everything step by step to see what really causes the crashes.

tunnell commented 1 year ago

Ah, so this issue is known? For what it's worth, it isn't as simple as fiddling with configuration options (host_dbus, full_access, usb, and so on), as far as I can tell. Additionally, no fork of the original repository has fixed this yet, if that saves anybody else some searching. Thanks for looking into it. If you can suggest things to try, I can try them and make a PR. I know Docker and Debian well, though some of my knowledge is likely dated, and I am brand new to Home Assistant.

GHGiampy commented 11 months ago

Trying to start on an RPI3 (model B) with HASSOS, same log as in this issue. Looking in the journal:

Sep 24 13:25:54 homeassistant systemd[1]: Started libcontainer container c08052ef629044f97419e28960f0a25da05c51cc71c85464a74670f7da0a3d06.
Sep 24 13:25:57 homeassistant kernel: eth0: renamed from vethe487603
Sep 24 13:25:57 homeassistant kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth81f23fa: link becomes ready
Sep 24 13:25:57 homeassistant kernel: hassio: port 10(veth81f23fa) entered blocking state
Sep 24 13:25:57 homeassistant kernel: hassio: port 10(veth81f23fa) entered forwarding state
Sep 24 13:25:57 homeassistant NetworkManager[418]: <info>  [1695612357.4933] device (veth81f23fa): carrier: link connected
Sep 24 13:26:12 homeassistant kernel: __vm_enough_memory: pid: 344510, comm: cupsd, no enough memory for the allocation
Sep 24 13:26:12 homeassistant kernel: __vm_enough_memory: pid: 344510, comm: cupsd, no enough memory for the allocation
Sep 24 13:26:12 homeassistant kernel: __vm_enough_memory: pid: 344510, comm: cupsd, no enough memory for the allocation
Sep 24 13:26:17 homeassistant systemd[1]: docker-c08052ef629044f97419e28960f0a25da05c51cc71c85464a74670f7da0a3d06.scope: Deactivated successfully.

__vm_enough_memory: pid: 344510, comm: cupsd, no enough memory for the allocation

having:

[core-ssh ~]$ free -h
              total        used        free      shared  buff/cache   available
Mem:         909.4M      547.5M       27.3M      356.0K      334.6M      297.8M
Swap:        300.1M      244.2M       55.9M

Is that much memory needed just to start CUPS?

zajac-grzegorz commented 11 months ago

Inspired by Max's work, I have created the add-on in a slightly different way: using s6-overlay v3, Avahi configured in reflector mode, and with access to the host network.

Tested with the latest Home Assistant 2023.9. AirPrint (e.g. from an iPhone) works as well.

https://github.com/zajac-grzegorz/homeassistant-addon-cups-airprint
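
For context, the add-on-level switches this relies on look roughly like the following in a Home Assistant add-on config.yaml (a sketch based on the description above, not the actual file from that repo):

# config.yaml (excerpt, assumed)
host_network: true  # announce AirPrint/mDNS directly on the LAN
host_dbus: false    # run a private D-Bus inside the container
usb: true           # pass USB printers through to CUPS
init: false         # required for images that ship s6-overlay v3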

MaxWinterstein commented 11 months ago

Looks pretty good to me! I just gave it a quick spin on my Debian-based Home Assistant installation, so I can't really speak for the users with issues.

Might need some tweaking here and there, like a more recent version of CUPS or prebuilt images.


jflecool2 commented 11 months ago

Inspired by Max's work, I have created the add-on in a slightly different way [...] https://github.com/zajac-grzegorz/homeassistant-addon-cups-airprint

Tried it on an RPI4, HAOS 10.5, HA 2023.9.2, and it works!! Thanks for making this.

GHGiampy commented 11 months ago

Inspired by Max's work, I have created the add-on in a slightly different way [...] https://github.com/zajac-grzegorz/homeassistant-addon-cups-airprint

Tried on an RPI3-B, HAOS 10.5, HA 2023.9.3, Supervisor 2023.9.2. Unable to install; high I/O for about 20 minutes:

23-09-27 00:25:34 INFO (MainThread) [supervisor.addons] Creating Home Assistant add-on data folder /data/addons/data/2c6aefcc_cupsik
23-09-27 00:25:34 INFO (MainThread) [supervisor.docker.addon] Starting build for 2c6aefcc/aarch64-addon-cupsik:1.0
23-09-27 00:31:36 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
23-09-27 00:40:23 WARNING (MainThread) [supervisor.misc.tasks] Watchdog/Application found a problem with observer plugin!
23-09-27 00:40:23 INFO (SyncWorker_6) [supervisor.docker.manager] Stopping hassio_observer application
23-09-27 00:41:06 INFO (SyncWorker_6) [supervisor.docker.manager] Cleaning hassio_observer application
23-09-27 00:41:06 WARNING (MainThread) [supervisor.plugins.base] Watchdog found observer plugin failed, restarting...
23-09-27 00:41:06 INFO (MainThread) [supervisor.plugins.observer] Starting observer plugin
23-09-27 00:41:06 ERROR (MainThread) [supervisor.plugins.observer] Can't start observer plugin
23-09-27 00:41:06 ERROR (MainThread) [supervisor.plugins.base] Watchdog restart of observer plugin failed!
23-09-27 00:41:06 INFO (MainThread) [supervisor.plugins.observer] Starting observer plugin
23-09-27 00:41:13 INFO (MainThread) [supervisor.docker.observer] Starting Observer ghcr.io/home-assistant/aarch64-hassio-observer with version 2023.06.0 - 172.30.32.6

jflecool2 commented 11 months ago

Tried on an RPI3-B, HAOS 10.5, HA 2023.9.3, Supervisor 2023.9.2. Unable to install; high I/O for about 20 minutes. [...]

Considering your RAM issue, consider an upgrade (then try again).

MaxWinterstein commented 11 months ago

@zajac-grzegorz should we collaborate on this somehow?

yousaf465 commented 11 months ago

Not installing on RPi4 2GB.

oraculix commented 11 months ago

Not installing on RPi4 2GB.

I got it running on an RPi4 2GB.

The image is built from scratch, which used much of the CPU and up to 500 MB of RAM for about 5 minutes. Refresh the add-on page after a while (e.g., monitor CPU usage and refresh when it's back to normal), because my page stayed in the "installing..." state forever.

Also, the RAM usage during normal operation is a bit odd: it uses about 50 MB of RAM, which is 4x the consumption of Max's version. Edit: after setting up one printer, I'm at 131 MB of RAM now. A bit much for "just" printing, but I'll head over to @zajac-grzegorz's repo for that.

zajac-grzegorz commented 11 months ago

Here is the memory consumption on my setup with one printer added (CUPS 19.3 MB):

Screenshot from 2023-10-05 20-12-08

MaxWinterstein commented 11 months ago

@zajac-grzegorz are you okay with me wrapping this into my repo and providing pre-built images? Of course with credits.

zajac-grzegorz commented 11 months ago

@zajac-grzegorz are you okay with me wrapping this into my repo and providing pre-built images? Of course with credits.

Sure! Go ahead

MaxWinterstein commented 11 months ago

I quickly gave it a try; after a store refresh there should be a CUPS (DEV - unstable!) add-on - as the name states, totally unstable. Not sure where this journey gets us.

Feedback, especially from those who had build issues, is highly appreciated.

GHGiampy commented 11 months ago

I got it running on an RPi4 2GB. [...]

I was able to complete the install process on an RPI3-B in the same way, waiting patiently and refreshing the add-on page. I discovered that the Docker image build took forever on the RPI because it's a huge image (~650 MB), but at least it works. Another downside is that it blew up the size of my backups from 20 to 200+ MB.

CUPS

MaxWinterstein commented 11 months ago

@GHGiampy locally built images are always included in the add-on's backup.

lkadar2015 commented 11 months ago

I quickly gave it a try; after a store refresh there should be a CUPS (DEV - unstable!) add-on [...]

Even though it is almost the same as the one from zajac-grzegorz, which won't install for me, this one installs and starts, but I see the following in the log:

*** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
*** WARNING: Detected another IPv6 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***

And it really is unreliable. I tried it with my iPhone: it sees the installed USB printer but is not able to print, and after the first printing attempt the server needs to be restarted before I can try again.

Here is the complete log:

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[11:04:05] INFO: Configuring timezone (Europe/Budapest)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service initialization: starting
s6-rc: info: service initialization successfully started
s6-rc: info: service dbus-daemon: starting
s6-rc: info: service dbus-daemon successfully started
s6-rc: info: service avahi-daemon: starting
s6-rc: info: service avahi-daemon successfully started
s6-rc: info: service legacy-services: starting
[11:04:06] INFO: Starting DBUS daemon from S6
[11:04:06] INFO: Starting Avahi daemon from S6
Found user 'avahi' (UID 101) and group 'avahi' (GID 107).
Successfully dropped root privileges.
avahi-daemon 0.8 starting up.
Successfully called chroot().
Successfully dropped remaining capabilities.
No service file found in /etc/avahi/services.
WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended.
WARNING: Detected another IPv6 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended.
Joining mDNS multicast group on interface vethe37ec37.IPv6 with address fe80::ac22:caff:feeb:cbc1.
New relevant interface vethe37ec37.IPv6 for mDNS.
Joining mDNS multicast group on interface vethb74ed6f.IPv6 with address fe80::48a3:a1ff:fe72:4b59.
New relevant interface vethb74ed6f.IPv6 for mDNS.
Joining mDNS multicast group on interface veth852635d.IPv6 with address fe80::4f1:2ff:fea2:d9cf.
New relevant interface veth852635d.IPv6 for mDNS.
Joining mDNS multicast group on interface veth88d6c00.IPv6 with address fe80::a86c:a2ff:fef6:ce48.
New relevant interface veth88d6c00.IPv6 for mDNS.
Joining mDNS multicast group on interface vethb34f7cf.IPv6 with address fe80::f888:81ff:fe13:ca46.
New relevant interface vethb34f7cf.IPv6 for mDNS.
Joining mDNS multicast group on interface veth5b7dd2d.IPv6 with address fe80::bc1b:20ff:fe65:1c36.
New relevant interface veth5b7dd2d.IPv6 for mDNS.
Joining mDNS multicast group on interface veth8c0d654.IPv6 with address fe80::6011:49ff:fe54:5248.
New relevant interface veth8c0d654.IPv6 for mDNS.
Joining mDNS multicast group on interface veth64ef9fd.IPv6 with address fe80::1098:9aff:fe9e:deb5.
New relevant interface veth64ef9fd.IPv6 for mDNS.
Joining mDNS multicast group on interface veth0f1c474.IPv6 with address fe80::6456:8ff:fef0:e40.
New relevant interface veth0f1c474.IPv6 for mDNS.
Joining mDNS multicast group on interface vethe0175fa.IPv6 with address fe80::a860:aeff:fea9:d905.
New relevant interface vethe0175fa.IPv6 for mDNS.
Joining mDNS multicast group on interface veth8ea71fe.IPv6 with address fe80::189e:e5ff:fe78:e9fa.
New relevant interface veth8ea71fe.IPv6 for mDNS.
Joining mDNS multicast group on interface hassio.IPv6 with address fe80::42:1eff:fecd:4c14.
New relevant interface hassio.IPv6 for mDNS.
Joining mDNS multicast group on interface hassio.IPv4 with address 172.30.32.1.
New relevant interface hassio.IPv4 for mDNS.
Joining mDNS multicast group on interface docker0.IPv6 with address fe80::42:b7ff:fe64:7820.
New relevant interface docker0.IPv6 for mDNS.
Joining mDNS multicast group on interface docker0.IPv4 with address 172.30.232.1.
New relevant interface docker0.IPv4 for mDNS.
Joining mDNS multicast group on interface wlp1s0.IPv6 with address fe80::c91c:c9cc:90c6:dc7d.
New relevant interface wlp1s0.IPv6 for mDNS.
Joining mDNS multicast group on interface wlp1s0.IPv4 with address 192.168.1.71.
New relevant interface wlp1s0.IPv4 for mDNS.
Joining mDNS multicast group on interface enp2s0.IPv6 with address fe80::d97e:61a6:df0c:e237.
New relevant interface enp2s0.IPv6 for mDNS.
Joining mDNS multicast group on interface enp2s0.IPv4 with address 192.168.2.71.
New relevant interface enp2s0.IPv4 for mDNS.
Joining mDNS multicast group on interface lo.IPv6 with address ::1.
New relevant interface lo.IPv6 for mDNS.
Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
New relevant interface lo.IPv4 for mDNS.
Network interface enumeration completed.
Registering new address record for fe80::ac22:caff:feeb:cbc1 on vethe37ec37.*.
Registering new address record for fe80::48a3:a1ff:fe72:4b59 on vethb74ed6f.*.
Registering new address record for fe80::4f1:2ff:fea2:d9cf on veth852635d.*.
Registering new address record for fe80::a86c:a2ff:fef6:ce48 on veth88d6c00.*.
Registering new address record for fe80::f888:81ff:fe13:ca46 on vethb34f7cf.*.
Registering new address record for fe80::bc1b:20ff:fe65:1c36 on veth5b7dd2d.*.
Registering new address record for fe80::6011:49ff:fe54:5248 on veth8c0d654.*.
Registering new address record for fe80::1098:9aff:fe9e:deb5 on veth64ef9fd.*.
Registering new address record for fe80::6456:8ff:fef0:e40 on veth0f1c474.*.
Registering new address record for fe80::a860:aeff:fea9:d905 on vethe0175fa.*.
Registering new address record for fe80::189e:e5ff:fe78:e9fa on veth8ea71fe.*.
Registering new address record for fe80::42:1eff:fecd:4c14 on hassio.*.
Registering new address record for 172.30.32.1 on hassio.IPv4.
Registering new address record for fe80::42:b7ff:fe64:7820 on docker0.*.
Registering new address record for 172.30.232.1 on docker0.IPv4.
Registering new address record for fe80::c91c:c9cc:90c6:dc7d on wlp1s0.*.
Registering new address record for 192.168.1.71 on wlp1s0.IPv4.
Registering new address record for fe80::d97e:61a6:df0c:e237 on enp2s0.*.
Registering new address record for 192.168.2.71 on enp2s0.IPv4.
Registering new address record for ::1 on lo.*.
Registering new address record for 127.0.0.1 on lo.IPv4.
s6-rc: info: service legacy-services successfully started
[11:04:06] INFO: Starting CUPS server as CMD from S6
Server startup complete. Host name is 1e14b3fb-cups-dev.local. Local service cookie is 3636973608.
Joining mDNS multicast group on interface vethca65f85.IPv6 with address fe80::10de:d4ff:feba:e134.
New relevant interface vethca65f85.IPv6 for mDNS.
Registering new address record for fe80::10de:d4ff:feba:e134 on vethca65f85.*.

zajac-grzegorz commented 11 months ago

WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended.

That is desired behaviour, because we run a second Avahi instance (in the CUPS add-on, in reflector mode) on the same host network. The first Avahi instance runs within Hassio.

If you can see the printer in iOS, it should be possible to send data to it... Have you checked whether the print job is created in the CUPS web interface? If the job is there, most probably the wrong driver was selected for your printer. BTW, what printer do you use?
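
For reference, reflector mode is a stock avahi-daemon.conf switch; a minimal sketch of the relevant section (the add-on's actual config may set more options):

# /etc/avahi/avahi-daemon.conf (excerpt)
[reflector]
# Re-broadcast mDNS packets between all local interfaces, so services
# announced on one network segment become visible on the others.
enable-reflector=yes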

lkadar2015 commented 11 months ago

The job is not in the list of print jobs, unfortunately. The printer is an Epson AL-M2300 with a USB connection.

zajac-grzegorz commented 11 months ago

The job is not in the list of print jobs, unfortunately. The printer is an Epson AL-M2300 with a USB connection.

1. Are you able to print a test page from within the CUPS web interface?

2. Do you have this option selected on the Administration tab? image

lkadar2015 commented 11 months ago

Sure, it is checked, and so is “Allow printing from the Internet”.

zajac-grzegorz commented 11 months ago

what about printing the test page directly from the CUPS UI?

lkadar2015 commented 11 months ago

I’ll try it

walberjunior commented 11 months ago

With the latest version I was able to install the add-on and also the printer (HP LaserJet P1005). I managed to set it up on Windows 10, but I can't even print the test page, either from Windows or from CUPS. I have no experience with CUPS, but I once managed to set it up on my router running OpenWrt, and even then I could only print the test page; when I sent PDF (text) files it did not print. For it to work on OpenWrt I had to add this script, because whenever the router was turned off and on the printer stopped working.

# Put your custom commands here that should be executed once
# the system init finished. By default this file does nothing.
# The HP LaserJet P1005 needs its firmware re-sent after every power
# cycle; write the firmware blob to the raw USB printer device at boot.
cat /usr/lib/sihpP1005.dl > /dev/usb/lp0
exit 0

But I don't know how to do this through the add-on, to see if I can get the printer working in CUPS.

image

Any ideas what I can do?
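
For what it's worth, the same firmware trick can in principle be replayed from the HA OS host console, assuming the add-on container has USB access and the firmware file has been copied somewhere reachable (the slug addon_1e14b3fb_cups appears elsewhere in this thread; the paths are illustrative):

# Send the firmware blob to the raw USB printer device inside the add-on:
docker exec addon_1e14b3fb_cups sh -c 'cat /share/sihpP1005.dl > /dev/usb/lp0'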

lkadar2015 commented 11 months ago

what about printing the test page directly from the CUPS UI?

It could print the CUPS test page, and it seems today is my lucky day, as it started working with my iPhone, too.

Great job guys! Thank you, everyone, especially Max and Grzegorz!

rcrail commented 11 months ago

Still not able to run CUPS Print Server:

Failed to to call /addons/1e14b3fb_cups/stats - Container addon_1e14b3fb_cups is not running
23-10-13 14:31:52 ERROR (SyncWorker_8) [supervisor.docker.manager] Container addon_1e14b3fb_cups is not running

OS = 11.0 Core = 10.3

Has there been an update or work-around? Thank you!

MaxWinterstein commented 11 months ago

@rcrail looking at the slug 'cups', it seems like you are running neither my test version nor the one from @zajac-grzegorz.

davidjirovec commented 10 months ago

How do I open the CUPS admin page?

image

Which means "Bad request".

Tried https as well, same result. My Home Assistant actually runs plain HTTP; is this the issue?

I remember that with the original add-on I had to disable SSL in the add-on configuration to reach the CUPS admin page. In the new add-on there is no such option; is this the problem?

zajac-grzegorz commented 10 months ago

How do I open the CUPS admin page? [...]

Have you also tried homeassistant.local:631?

Or try the IP address, like 192.168.x.x:631.

Also check whether the add-on is running...

davidjirovec commented 10 months ago

Thanks, both .local and the IP work. The add-on works great. The only issue I have noticed while using it is that when I restart the add-on, the printer I previously configured and used is missing in CUPS. Is anybody else able to reproduce this?

zajac-grzegorz commented 10 months ago

When I restart the add-on, the printer I previously configured and used is missing in CUPS. [...]

Check my repo. I have recently added persistence for printer config data.
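
For reference, a common pattern for this kind of persistence in Home Assistant add-ons is to keep /etc/cups on the add-on's /data volume, which survives restarts; a minimal sketch, not necessarily how the repo implements it:

#!/bin/sh
# Hypothetical init step: seed /data with the default CUPS config on the
# first start, then replace /etc/cups with a symlink into /data so that
# printer definitions survive add-on restarts.
if [ ! -d /data/cups ]; then
    cp -a /etc/cups /data/cups
fi
rm -rf /etc/cups
ln -s /data/cups /etc/cups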

davidjirovec commented 10 months ago

Thanks, persistence works.

MaxWinterstein commented 10 months ago

Hey everyone,

I updated the main CUPS add-on with Grzegorz's code and implemented his fixes as well.

Please use the CUPS add-on without "dev" in the name. It should work and provide persistence.

Known issue: the CUPS admin has no password for the moment; the mkpasswd package seems to be missing from the base image I use. Will fix this later.
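
A possible stopgap without mkpasswd, sketched under the assumption of a BusyBox/Alpine base image (user name and password are illustrative): CUPS grants admin rights to members of its SystemGroup, commonly lpadmin on Linux, so a system user plus chpasswd is enough.

# Create an admin user for the CUPS web interface without mkpasswd:
addgroup -S lpadmin 2>/dev/null || true   # ensure the group exists
adduser -D -G lpadmin print               # BusyBox adduser, no password yet
echo 'print:CHANGE_ME' | chpasswd         # set the password non-interactively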

oraculix commented 7 months ago

I suppose this thread can be closed now? At least I can confirm a running container on

Thanks, Uwe