unifi-utilities / unifios-utilities

A collection of enhancements for UnifiOS based devices
GNU General Public License v3.0

onboot service not persistent across newer firmware upgrades #71

Closed andrewmiskell closed 3 years ago

andrewmiskell commented 4 years ago

Describe the bug After upgrade from 1.8.0 to 1.8.2-10 (and also from 1.8.2-10 to 1.8.3-2) the onboot service was wiped and not reloaded

To Reproduce Steps to reproduce the behavior:

  1. Have onboot scripts installed
  2. Upgrade firmware from 1.8.0 to newer revision
  3. Check onboot scripts after firmware upgrade

Expected behavior Onboot scripts are reinstalled from dpkg cache

UDM Information

Additional context There may well be nothing that can be done; it might just be a difference in how firmware upgrades are installed/handled now by the UDM. I just wanted to log it so that, if nothing can be done, the documentation can be updated to reflect that it's no longer persistent.

dsully commented 3 years ago

I'm seeing this as well.

MarkBurchard commented 3 years ago

Also seeing this on my UDMB, after upgrade from 1.8.2-8 > 1.8.3-2 and again on upgrade from 1.8.3-2 > 1.8.3-3.

MarkBurchard commented 3 years ago

Oddly enough, I just installed the 6.0.3.7 Controller release and after a reboot, multicast-relay started automatically. Every reboot since the 1.8.3-2 upgrade, I had to start that manually. No idea why on-boot unbroke, but can't complain.

Djelibeybi commented 3 years ago

I just saw this upgrading to 1.8.3-4.

pokemon81 commented 3 years ago

Upgrading to 1.8.3-4, same problem; I had to start it manually.

apoc4lyps commented 3 years ago

I noticed that onboot.d failed during the first boot after an update; subsequent reboots trigger the onboot.d scripts.

Murgeye commented 3 years ago

I noticed that onboot.d failed during the first boot after an update. subsequent reboots will trigger the onboot.d scripts

Same here. It always seems to fail on the first boot after an upgrade, but works after that. Other user added systemd units seem to run fine.

mlankamp commented 3 years ago

I noticed that onboot.d failed during the first boot after an update. subsequent reboots will trigger the onboot.d scripts

Same here. Always have to do a manual reboot after the upgrade, then the podman containers can start correctly

Murgeye commented 3 years ago

@mlankamp you can also always restart the service using systemctl start udm-boot in the unifi-os shell as a workaround.

Djelibeybi commented 3 years ago

Confirmed that another reboot works here too.

ravens commented 3 years ago

I observed that behavior only with the 1.0.2 udm-boot package. The 1.0.1 works fine - I got two routers and I was surprised to have to reboot the second one to fix the upgrade. I suspect the postinst changes are the culprit (original commit between 1.0.1 and 1.0.2 was https://github.com/boostchicken/udm-utilities/commit/282b9bd3ce00cb543e175ca46c2debe0143a5638#diff-60f00db60b712271f12f9fbaadee09d956229745493e2f812a701c28386cf54e, if I am not mistaken).

SamErde commented 3 years ago

I just upgraded to 1.8.3 official with 6.0.41 controller, and can confirm the same behavior. Doesn't start automatically on the first boot after an upgrade, but does on following restarts.

mojo333 commented 3 years ago


I observed that behavior only with the 1.0.2 udm-boot package. The 1.0.1 works fine

Just to confirm this, I downgraded to the 1.0.1 version of the package. Worked perfectly on the latest UDM update - no need to manually reboot which is great news - thanks @ravens for the tip.

leachbj commented 3 years ago

Failed to restart for me too; I found this in journalctl after the upgrade.

Dec 19 13:47:34 ubnt ubnt-dpkg-restore[26]: /usr/sbin/policy-rc.d returned 101, not running 'enable udm-boot'
Dec 19 13:47:34 ubnt ubnt-dpkg-restore[26]: /usr/sbin/policy-rc.d returned 101, not running 'start udm-boot'

/usr/sbin/policy-rc.d doesn't exist after the install completed, so it seems like part of the update process is temporarily disabling service configuration changes.
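For background on those log lines: Debian's policy-rc.d convention lets an image ship an executable /usr/sbin/policy-rc.d that helpers such as invoke-rc.d and deb-systemd-invoke consult before touching a service; an exit status of 101 means "action forbidden". A minimal sketch of that gate, using a temporary directory instead of the real path:

```shell
# Sketch of the policy-rc.d convention (temporary path, not /usr/sbin).
# Exit status 101 means "policy forbids this action", so Debian's
# service helpers skip the requested start/enable.
tmpdir=$(mktemp -d)
cat > "$tmpdir/policy-rc.d" <<'EOF'
#!/bin/sh
# A firmware restore that ships this file blocks all service actions.
exit 101
EOF
chmod +x "$tmpdir/policy-rc.d"

"$tmpdir/policy-rc.d" udm-boot start   # ask the policy layer for permission
code=$?
echo "policy-rc.d returned $code"
rm -rf "$tmpdir"
```

With such a file present during the firmware restore, deb-systemd-invoke refuses to enable or start udm-boot, which matches the journalctl output above.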

boostchicken commented 3 years ago

Can someone sum up the current state here for me?

From what I see

  • 1.0.1 works
  • 1.0.2 does not work at all
  • The first reboot after an upgrade it does not execute for some reason?

leachbj commented 3 years ago

I think the change @ravens identified above looks relevant; the policy-rc.d message I saw is probably related to deb-systemd-invoke enable udm-boot vs the direct systemd call you had.

andrewmiskell commented 3 years ago

Can someone sum up the current state here for me?

From what I see

  • 1.0.1 works
  • 1.0.2 does not work at all
  • The first reboot after an upgrade it does not execute for some reason?

1.0.1 works without any issues, even across firmware upgrades. 1.0.2 requires an extra reboot after an upgrade to start working again: it doesn't start directly after a firmware upgrade, but after a second reboot everything works fine until the next firmware upgrade.

boostchicken commented 3 years ago

@spali can you investigate why your 1.0.2 behaves so differently? It's much more complex than mine but hits all the same bases.

If we can't then we have to rollback to 1.0.1

boostchicken commented 3 years ago

I think the change @ravens identified above looks relevant; the policy-rc.d message I saw is probably related to deb-systemd-invoke enable udm-boot vs the direct systemd call you had.

Man, that has to be it, right? There is so little that really changed, just the more "Debian" way of doing it. Maybe my simpleton way cut through some garbage; I'll see what @spali says and we go from there.

boostchicken commented 3 years ago

Just went to 1.8.4 final with 1.0.1 with no issues

leachbj commented 3 years ago

@boostchicken btw this was the link I found when I looked at the problem; https://jpetazzo.github.io/2013/10/06/policy-rc-d-do-not-start-services-automatically/

spali commented 3 years ago

@spali can you investigate why your 1.0.2 behaves so differently. Its much more complex than mine but hits all the same bases

If we can't then we have to rollback to 1.0.1

To be honest, I have no idea why it behaves so differently, but I'm fine with rolling back to direct systemd calls. Maybe this is the same problem I observe with #46, which basically uses the same calls.

A bit of theory: I'm not a Debian packaging specialist, but from my research on these calls I understand that the deb-systemd helpers are meant to postpone the service actions until after all packages are installed, to give more control when installing more complex stuff. So as long as we don't interfere much with other services, I think it would be safe to revert these calls, especially since you all observe this only since 1.0.2.
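The suspected difference can be sketched with stand-in functions (a simulation only, not the real systemctl or deb-systemd-invoke): a direct systemctl call never consults policy-rc.d, while deb-systemd-invoke does, and backs off when it returns 101.

```shell
# Simulation with stand-in functions; the real tools behave analogously.
POLICY_EXIT=101   # what /usr/sbin/policy-rc.d returns during the firmware restore

direct_systemctl() {
    # The 1.0.1 postinst approach: plain systemctl, no policy check.
    echo "started $1"
}

deb_systemd_invoke() {
    # The 1.0.2 postinst approach: honor policy-rc.d before acting.
    if [ "$POLICY_EXIT" -eq 101 ]; then
        echo "policy-rc.d returned 101, not running 'start $1'"
        return 1
    fi
    echo "started $1"
}

direct_systemctl udm-boot            # succeeds even mid-restore
deb_systemd_invoke udm-boot || true  # skipped, as seen in the journal
```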

aessing commented 3 years ago

Hi,

yesterday I updated from 1.8.3 to 1.8.4 firmware on the UDM Pro.

I had onBoot 1.0.2 installed and everything worked like a charm, even after the upgrade; onBoot works without my doing anything. I didn't have to do additional reboots.

So, no issues with 1.0.2 and 1.8.4.

Cheers Andre

darizotas commented 3 years ago

Upgrading from 1.8.0 to 1.8.4 udm-boot 1.0.2

I can confirm that I had to reboot a second time to get the container and service running again.

Cheers, Darío.

vvuk commented 3 years ago

Is there a new udm-boot package with the potential fix, or do I need to build my own? Happy to install and see how it goes -- I've had no on-boot scripts run the past two automatic upgrades. (upgraded to 1.8.4 automatically last night, no on-boot)

spali commented 3 years ago

Not yet; I assume John will merge the fix and build it soon. If you want, you can certainly build it yourself based on the PR.

JKomoroski commented 3 years ago

I suggest we provide explicit instructions for how to update the deb when a release is cut.

spali commented 3 years ago

@JKomoroski this is already documented in https://github.com/boostchicken/udm-utilities/tree/master/on-boot-script#steps

smegoff commented 3 years ago

FYI - I see this behaviour too. I only recently installed pihole on a UDM Pro, and after the last couple of firmware updates I've been unable to access the pihole web interface. I checked using "podman ps" over SSH and the container isn't running. The quickest way to get the container running again is to reboot the device.

spali commented 3 years ago

@smegoff

I checked using "podman ps" in SSH and the container isnt running. The quickest way to get the container to run is reboot the device.

In my case, and I think in any case of this issue, usually unifi-os restart is enough to fix it.

spali commented 3 years ago

BTW: just updated to 1.8.5 and at least with my custom build of #50 it worked as expected and started automatically everything again after the update. So I assume #86 would work too (basically the same fix)

rdoram commented 3 years ago

@spali, I used your v1.1.0 package (udm-boot) and installed two piHole containers and a ntopng container on UDM-base v1.8.5. I'll report back when I update to the next beta/alpha firmware on the UDM to confirm it survived the upgrade. As of now, it's surviving UDM reboots, so I'm optimistic it'll work. Here are the instructions for anyone who wants to run the same test with NTOPNG-UDM; it avoids some of the more complicated networking setup involved with the piHole. Feel free to correct/comment on any of this. I'm a GitHub and Linux noob, so I'm sure there's room for improvement :).

Step 1: Install the udm-utilities package...SSH into the UDM:

unifi-os shell

In the UniFi-OS shell, download the package, install it, and go back to the UDM
curl -L https://github.com/spali/udm-utilities/releases/download/v1.1.0-poc-4/udm-boot_1.1.0_all.deb -o /tmp/udm-boot_1.1.0_all.deb
dpkg -i /tmp/udm-boot_1.1.0_all.deb
exit

Step 2: Install ntopng-udm container

Create persistent folders and files for ntopng-udm
mkdir -p /mnt/data/udm-boot/data/ntopng/redis
mkdir -p /mnt/data/udm-boot/data/ntopng/lib
touch /mnt/data/udm-boot/data/ntopng/GeoIP.conf
curl -Lo /mnt/data/udm-boot/data/ntopng/ntopng.conf https://github.com/tusc/ntopng-udm/blob/master/ntopng/ntopng.conf?raw=true
curl -Lo /mnt/data/udm-boot/data/ntopng/redis.conf https://github.com/tusc/ntopng-udm/blob/master/ntopng/redis.conf?raw=true
Edit the file at /mnt/data/udm-boot/data/ntopng/ntopng.conf to determine which interfaces NTOP will analyze.
Enter udm-boot bash shell:

podman exec -it udm-boot /bin/bash

Use podman to create container
podman create \
    --restart always \
    --net=host \
    --name ntopng \
    -e TZ="America/Chicago" \
    -e VIRTUAL_HOST="ntopng" \
    -e PROXY_LOCATION="ntopng" \
    -e IPv6="False" \
    -v /mnt/data/udm-boot/data/ntopng/GeoIP.conf:/etc/GeoIP.conf \
    -v /mnt/data/udm-boot/data/ntopng/ntopng.conf:/etc/ntopng/ntopng.conf \
    -v /mnt/data/udm-boot/data/ntopng/redis.conf:/etc/redis/redis.conf \
    -v /mnt/data/udm-boot/data/ntopng/lib:/var/lib/ntopng \
    --hostname ntopng \
    docker.io/tusc/ntopng-udm:latest
Configure systemd to load containers at startup
podman generate systemd ntopng >/etc/systemd/system/ntopng.service
systemctl daemon-reload
systemctl enable ntopng.service
systemctl start ntopng.service
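For reference, the unit file that podman generate systemd ntopng writes looks roughly like this (illustrative only; the exact paths, options, and generated comments vary by podman version):

```ini
[Unit]
Description=Podman container ntopng.service

[Service]
Restart=always
ExecStart=/usr/bin/podman start ntopng
ExecStop=/usr/bin/podman stop -t 10 ntopng
Type=forking

[Install]
WantedBy=multi-user.target
```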

leachbj commented 3 years ago

I did a manual build for the Debian package, installed that and then did the 1.8.3 to 1.8.5 upgrade and everything came up correctly after the upgrade, no additional restart required.

boostchicken commented 3 years ago

Closing as this is now fixed with a new release!