pedropombeiro opened this issue 1 year ago
use udm-boot-2x, it does not use ssh-proxy
Did you complain to Ubiquiti about apt not pointing at bullseye?
Yeah just dropped that info on the UI discussion
I have requested the kernel sources for 3.0.19, let's see how long it takes.
Hello! For the podman problem, there is this message: https://github.com/unifi-utilities/unifios-utilities/issues/510#issuecomment-1478787703 It seems we can replace it with systemd-nspawn.
@mabunixda: after your message on the UI forum, rekoil says that his sources are pointing at bullseye. Maybe an EA problem.
I'd prefer a common solution that is backwards compatible about podman/docker 🤔
@marco3181 Yeah, don't know... could be EA stuff, yeah.
My /etc/apt/sources.list is properly pointing to bullseye (and have been since the start). I updated only a few hours after 3.x landed for UDM Pro in EA.
Mine too (updated within minutes after the EA announcement):
root@udmp:~# cat /etc/apt/sources.list
deb http://deb.debian.org/debian/ bullseye main contrib non-free
deb http://deb.debian.org/debian/ bullseye-updates main contrib non-free
deb http://deb.debian.org/debian/ bullseye-backports main
deb http://security.debian.org/debian-security bullseye-security main contrib non-free
Fixed it on my installation - might also be an old EA update that did not change these definitions...
See my update here
I have not upgraded yet, but overlayfs is working on 2.5.x, and I expect it will work on 3.x as well. So the disk space issue is now fixed @peacey
I am pretty sure I can get podman running on the UDM SE. BPF is used for security lockdown on syscalls, but we are always root anyway so it doesn't matter. I am going to drop seccomp and AppArmor from the UDMP/UDM SE build and edit the configs accordingly, and I think that will get you in business.
Also, the new podman builds come with crun, if you want to use it. It's much better on resources: it uses less memory and executes faster.
Also, netavark is built and in the latest zips as well. This is a replacement for CNI, which is now deprecated. It has cool things like working macvlan DHCP, but it's not zero effort to migrate your networks; the syntax of the config files is quite different.
I would move to netavark ASAP. It makes containers much, much faster and much less latent on the network. It is written in Rust instead of Go (much like crun). I also included the dhcp-client-proxy if anyone wants macvlan DHCP working.
I am not sure when I can move to 3.x. If someone would volunteer to test my new 3.x build with the mods above, that would be awesome; I'll crank it out ASAP.
https://github.com/unifi-utilities/unifios-utilities/issues/510#issuecomment-1483820487
systemd-nspawn
Yeah, that works for the short term; I think I can get podman working, see above. It's really just a matter of pulling anything BPF-related. That's all for rootless containers, and we don't have to worry about it. Also make sure you do the fix for the disk space when using VFS.
You know what, it just dawned on me that the current builds should work fine on the latest 3.x. You just need to edit /usr/share/containers/seccomp.json and disallow any bpf syscalls, BOOM.
Beyond that, you can just pass --security-opt=seccomp=unconfined in your podman command and it won't call bpf at all; no security stuff happening.
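If you'd rather script the seccomp.json edit than do it by hand, here is a minimal Python sketch. The path is the default podman profile location mentioned above; the function names are mine, and the script keeps a .bak copy before rewriting:

```python
import json

def strip_bpf(profile: dict) -> dict:
    """Remove bpf-family syscalls from every allow rule in a seccomp profile."""
    for rule in profile.get("syscalls", []):
        if rule.get("action") == "SCMP_ACT_ALLOW":
            rule["names"] = [n for n in rule.get("names", []) if not n.startswith("bpf")]
    return profile

def rewrite_profile(path: str = "/usr/share/containers/seccomp.json") -> None:
    """Rewrite the profile in place, keeping a .bak copy of the original."""
    with open(path) as f:
        profile = json.load(f)
    with open(path + ".bak", "w") as f:
        json.dump(profile, f, indent=2)
    with open(path, "w") as f:
        json.dump(strip_bpf(profile), f, indent=2)
```

Run `rewrite_profile()` once on the device and restart your containers; if anything breaks, restore the .bak copy.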
New builds coming out now with a fixed seccomp.json that removes bpf from the allows; also fixes some registry issues.
Grab the two latest builds here if IPFS is acting up:
https://github.com/unifi-utilities/unifios-utilities/actions
No bueno @boostchicken, it still doesn't work with these modifications and your new build. Still the same BPF error, unfortunately.
Yes, same issue, and I was wondering if I had done something wrong! I also started getting a warning for namespaces:
WARN[0000] Failed to read current user namespace mappings
UDM Boot remote script still works fine. Firmware 3.X comes with a DNSCrypt-Proxy service already pre-installed natively, which is a great replacement for simple local DNS servers and doesn't require Podman!
That's quite interesting! Wondering if I can get that to work with my blocklist (https://oisd.nl). I only use podman for Adguard, so this looks like a nice workaround.
Absolutely can be. https://github.com/DNSCrypt/dnscrypt-proxy/wiki/Public-blocklist & https://github.com/DNSCrypt/dnscrypt-proxy/wiki/Combining-Blocklists
You'll need to trigger generate-domains-blocklist.py with your configs via cron, and possibly reload dnscrypt-proxy afterwards.
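For reference, a sketch of what that cron entry could look like. The paths, the redirect to stdout, and the service name are all assumptions; check generate-domains-blocklist.py --help and your actual unit name before using this:

```shell
# /etc/crontab fragment (sketch): regenerate the blocklist nightly at 03:30,
# then restart dnscrypt-proxy so it picks up the new file. Paths are assumed.
30 3 * * * root cd /data/dnscrypt && python3 generate-domains-blocklist.py > blocklist.txt && systemctl restart dnscrypt-proxy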
From the UDM, I can use it via dig google.com @127.0.2.1
and that works. However, how do I configure this DNS to listen on the UDM's internal IP instead of dnsmasq? Currently it's only listening on 127.0.2.1, and that's not routable from my subnets.
I also need specific instructions on how to make DNSCrypt-Proxy the main DNS resolver on the UDM, for both WAN and LAN.
I think DNSCrypt-Proxy is normally configured via a TOML file, but UDM firmware 3.X uses systemd, which is listed in the DNSCrypt-Proxy manual as a non-standard way of running it. So... still need specific instructions.
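For what it's worth, the TOML side of it is small: the key that controls bind addresses in dnscrypt-proxy.toml is listen_addresses. A fragment, with an example LAN IP (the address and port below are assumptions):

```toml
# dnscrypt-proxy.toml fragment (sketch): keep the loopback listener and add
# the UDM's LAN IP so clients on your subnets can reach it. 192.168.1.1 is an
# example; port 53 will conflict with dnsmasq if it is already bound there.
listen_addresses = ['127.0.2.1:53', '192.168.1.1:5353']
```

Where the firmware actually keeps that file on the UDM is exactly the open question here, so treat this as the shape of the change, not a recipe.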
I've followed @peacey's https://github.com/peacey/unifios-utilities/tree/nspawn/nspawn-container instructions to make the nspawn container replacement work for me. I created an Alpine nspawn container instead of Debian (5 MB instead of 300+, and it takes seconds to bootstrap) using https://gist.github.com/sfan5/52aa53f5dca06ac3af30455b203d3404#file-alpine-container-sh, replacing x86 in the text with aarch64. I didn't do any setup with passwd and so on; the container just works.
After installing the multicast package from the community repo within Alpine, I've got multicast working for my Sonos in a separate VLAN with the following nspawn config for the container (called /etc/systemd/nspawn/alpine-multicast.nspawn in my case):
[Exec]
Boot=on
Capability=all
Parameters=multicast-relay.py --interfaces br0 br4 --foreground
[Network]
Private=off
VirtualEthernet=off
ResolvConf=off
@paskal - An on-boot script that just calls the Python script directly also works.
@gatesry do you have an example?
@Stealthii For sure!
#!/bin/sh
/usr/bin/python3 /data/custom/multicast-relay/multicast-relay.py --interfaces br0 br20 br30
Thanks @gatesry! I had no idea the script was that simple, or that all the dependencies were available in the UDM environment. No real need for a container.
Is it possible to make the on-boot script load sooner? I think it is set to After/Wants=network-online.target. It loads my scripts, like custom iptables rules, after the UDM makes network connections; I need my custom rules to load before such connections happen. With the old UniFi OS 1.X firmware, the boot sequence was stricter, I think.
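systemd's documented hook for firewall-type units is network-pre.target: anything ordered Before=network-pre.target runs before interfaces come up. A drop-in sketch (the drop-in filename is arbitrary, and I haven't verified how the stock udm-boot.service is ordered on every firmware):

```ini
# /etc/systemd/system/udm-boot.service.d/earlier.conf (sketch)
[Unit]
# Clear the packaged ordering, then run before network setup:
After=
Wants=network-pre.target
Before=network-pre.target
```

Run systemctl daemon-reload afterwards. Note that any on-boot script which itself needs the network up will then fail, so this is only appropriate for things like iptables rules.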
FYI, there is no more need for SSH on-boot scripts. Once the authorized_keys file is created, it persists across reboots on its own.
@gatesry thanks for the idea, actually I am not using on-boot now already, and with your suggestion, I can use systemd and remove nspawn usage.
@GY8VSdYYzvL8-K6T
Do you have an example of a systemd unit? I have created one, but the script is not starting.
This is my systemd service file:
[Unit]
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/data/scripts/multicast-relay/multicast-relay.sh

[Install]
WantedBy=multi-user.target
For WantedBy I already tried network-online.target, but that is also not working. The path to the script is correct, and when I run multicast-relay.sh manually it works.
This is the output of systemctl status multicast.service:
multicast.service
     Loaded: loaded (/etc/systemd/system/multicast.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Mon 2023-04-03 11:53:39 CEST; 1min 45s ago
   Main PID: 2471 (code=exited, status=0/SUCCESS)
        CPU: 169ms

Apr 03 11:53:39 UDMPro systemd[1]: Started multicast.service.
Apr 03 11:53:39 UDMPro systemd[1]: multicast.service: Succeeded.
Unifi just released 3.0.20, which I believe is the first 3.x Release Candidate.
I keep monitoring this GitHub thread, hoping for some progress on getting a working version for 3.x.
'Hoping' is unfortunately pretty much all I can do with my extremely limited understanding of what you experts here are talking about. That said, I am very grateful for your work. Without your efforts I would not be able to run PiHole on my UDM-P (first 1.x, currently 2.x and hopefully soon 3.x). So a BIG THANK YOU!
@waffles0042 the future of podman on 3.x seems doubtful, but this seems like a nice alternative. I've yet to try it out but I soon will: https://github.com/unifi-utilities/unifios-utilities/tree/main/nspawn-container
It seems likely this process can be scripted a little more to make it easier in the future.
Thanks for the link @fbernier
Same here, I would have to wait for the well-scripted dummy version of it before I'd even attempt it 😁
@waffles0042 - There is a scripted version in that link. https://github.com/unifi-utilities/unifios-utilities/blob/main/nspawn-container/interactive_setup.sh
Thank you, @gatesry, I'll check it out.
Appreciate this is all new; I managed to get a PiHole instance up and running, but I'm unsure how I'm going to get the unifi-proxy-cam Docker Hub image to work. I guess this will be the same with any old images!
I tried to run Docker within these images, but after setting it to use VFS I still had errors from cgroups and don't know enough to kick that along. I'm also struggling to get a second container running with IP address(es): they run, but can't ping anything else on the network or the internet (1.1.1.1) when on a MACVLAN.
vim /etc/systemd/nspawn/dockers.nspawn
[Exec]
Boot=on
[Network]
MACVLAN=br201
ResolvConf=off
cd /data/custom/machines/dockers/etc/systemd/network
vim mv-br201.network
[Match]
Name=mv-br201
[Network]
IPForward=yes
Address=10.203.0.8/28
Gateway=10.203.0.1
root@dockers:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
6: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
7: mv-br201@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
link/ether 2e:09:9b:21:8a:76 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.201.0.8/28 brd 10.201.0.15 scope global mv-br201
valid_lft forever preferred_lft forever
inet6 fe80::2c09:9bff:fe21:8a76/64 scope link
valid_lft forever preferred_lft forever
root@dockers:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1077ms
root@dockers:~#
I have the same issue as well.
If you guys want to use nspawn id look up skopeo https://github.com/containers/skopeo https://github.com/opencontainers/image-tools You can use that to make OCI bundles which nspawn launches no problem
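A sketch of that flow, assuming skopeo and umoci are installed (umoci isn't mentioned above; it's just one tool that can unpack an OCI layout into a rootfs, and the paths are examples):

```shell
# Pull an image into an OCI layout, unpack it into a bundle, and boot the
# resulting rootfs with systemd-nspawn. Requires root, network, skopeo, umoci.
skopeo copy docker://docker.io/library/alpine:latest oci:/data/custom/oci/alpine:latest
umoci unpack --image /data/custom/oci/alpine:latest /data/custom/machines/alpine
systemd-nspawn -D /data/custom/machines/alpine/rootfs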
I was doing a lot of trial-and-error installs with LXC and managed to get it somewhat working; however, I ended up breaking it. I will probably keep looking into it or try nspawn.
@gatesry this would be perfect for me since I only use Unifi-Utils for the Multicast Relay. But will the log file fill up my disk without the container-common script?
@m-a-r-c-u-s I believe the logging issue only affects things like PiHole which are by design very verbose. I’ve been running this fine now for months.
@gatesry It hopefully won't. Reading up on the Python script, it seems logs are only written if you specify a log file when starting the script:
" [--logfile FILE] saves log data to the specified file "
Following the discussion: after upgrading to 3.x, udm-boot on my UDMP cannot be installed from the pre-built packages. Namely, doing the following,
root@UDM-Pro:~# curl -L https://unifi.boostchicken.io/udm-boot-2x_1.0.1_all.deb -o udm-boot-2x_1.0.1_all.deb
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 168 0 168 0 0 10 0 --:--:-- 0:00:16 --:--:-- 44
root@UDM-Pro:~# dpkg -i ./udm-boot-2x_1.0.1_all.deb
dpkg-deb: error: './udm-boot-2x_1.0.1_all.deb' is not a Debian format archive
dpkg: error processing archive ./udm-boot-2x_1.0.1_all.deb (--install):
dpkg-deb --control subprocess returned error exit status 2
Errors were encountered while processing:
./udm-boot-2x_1.0.1_all.deb
From what I have read, it should be compatible but seems it's not... is that the case? Should I build from source...?
@andylamp - Did you try following the instructions in the on-boot readme? https://github.com/unifi-utilities/unifios-utilities/blob/main/on-boot-script/README.md#install
yes, but for the manual installation. I do not want to use the remote one as it installs additional components which are unwanted in my case.
@gabbeltje
Are you sure your script is not working? systemd says it ran successfully. It also reports the process as inactive because you omitted Type, which makes it default to simple. If you want it to list the process as active, you need to set Type=oneshot and also RemainAfterExit=yes.
I recently updated the UDM-Pro at the office to 3.0 and had to mess with the multicast-relay yesterday to get Sonos working across VLANs again. In the end the cleanest solution I came up with was to embed everything directly in the service, including always downloading the latest version of the relay code when the service starts up:
[Unit]
Description=SonosNet Relay
After=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/usr/bin/curl -o /tmp/sonosnet.py \
    https://raw.githubusercontent.com/alsmith/multicast-relay/master/multicast-relay.py
ExecStart=/usr/bin/python3 /tmp/sonosnet.py --noMDNS --interfaces br16 br32
ExecStart=/usr/bin/python3 /tmp/sonosnet.py --noMDNS --interfaces br16 br64
# systemd does not do command substitution in plain Exec lines, so wrap in a shell:
ExecStop=/bin/sh -c '/bin/kill $(/usr/bin/pgrep -d " " -f " /tmp/sonosnet\.py ")'
[Install]
WantedBy=multi-user.target
Heads up: I've started a discussion at https://github.com/orgs/unifi-utilities/discussions/564 to get some feedback on how the migration to 3.x went for the more adventurous users among us. This might help users who are still stuck on 2.x make up their minds.
I am attempting to install the on-boot package per the steps described at https://github.com/unifi-utilities/unifios-utilities/blob/main/on-boot-script/README.md#manually-install-steps.
My UDM is running UniFi OS 3.1.16.
When the first command is run this is the result:
root@homegw:~# unifi-os shell
-bash: unifi-os: command not found
root@homegw:~#
Ignoring that error and moving to the next step has this result:
root@homegw:~# wget https://unifi.boostchicken.io/udm-boot_1.0.7_all.deb
--2023-08-30 11:30:35-- https://unifi.boostchicken.io/udm-boot_1.0.7_all.deb
Resolving unifi.boostchicken.io (unifi.boostchicken.io)... 2606:4700:3108::ac42:290b, 2606:4700:3108::ac42:2af5, 172.66.41.11, ...
Connecting to unifi.boostchicken.io (unifi.boostchicken.io)|2606:4700:3108::ac42:290b|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2023-08-30 11:31:01 ERROR 500: Internal Server Error.
That seems related to https://github.com/unifi-utilities/unifios-utilities/issues/565
The install steps in the first linked page above also have a link to https://github.com/unifi-utilities/unifios-utilities/blob/main/on-boot-script/packages/udm-boot_1.0.7_all.deb. If I download that and attempt to install it the result is:
root@homegw:~# dpkg -i udm-boot_1.0.7_all.deb
dpkg-deb: error: 'udm-boot_1.0.7_all.deb' is not a Debian format archive
dpkg: error processing archive udm-boot_1.0.7_all.deb (--install):
dpkg-deb --control subprocess returned error exit status 2
Errors were encountered while processing:
udm-boot_1.0.7_all.deb
So, yeah, something's busted.
I realised I hadn't tried grabbing the package from https://udm-boot.boostchicken.dev/.
root@homegw:~# wget https://udm-boot.boostchicken.dev/ -O udm-boot_1.0.7_all.deb
--2023-08-30 12:52:25-- https://udm-boot.boostchicken.dev/
Resolving udm-boot.boostchicken.dev (udm-boot.boostchicken.dev)... 2600:9000:24b9:3800:1c:e8ba:f980:93a1, 2600:9000:24b9:8000:1c:e8ba:f980:93a1, 2600:9000:24b9:ba00:1c:e8ba:f980:93a1, ...
Connecting to udm-boot.boostchicken.dev (udm-boot.boostchicken.dev)|2600:9000:24b9:3800:1c:e8ba:f980:93a1|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/master/on-boot-script/packages/udm-boot_1.0.7_all.deb [following]
--2023-08-30 12:52:25-- https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/master/on-boot-script/packages/udm-boot_1.0.7_all.deb
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8000::154, 2606:50c0:8003::154, 2606:50c0:8002::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8000::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3502 (3.4K) [application/octet-stream]
Saving to: ‘udm-boot_1.0.7_all.deb’
udm-boot_1.0.7_all.deb 100%[====================================================>] 3.42K --.-KB/s in 0.007s
2023-08-30 12:52:26 (473 KB/s) - ‘udm-boot_1.0.7_all.deb’ saved [3502/3502]
Installing it, though, threw an error:
root@homegw:~# dpkg -i udm-boot_1.0.7_all.deb
Selecting previously unselected package udm-boot.
(Reading database ... 46283 files and directories currently installed.)
Preparing to unpack udm-boot_1.0.7_all.deb ...
Unpacking udm-boot (1.0.7) ...
Setting up udm-boot (1.0.7) ...
Created symlink /etc/systemd/system/multi-user.target.wants/udm-boot.service → /lib/systemd/system/udm-boot.service.
Failed to start udm-boot.service: Unit udm-boot.service has a bad unit file setting.
See system logs and 'systemctl status udm-boot.service' for details.
Output from systemctl status udm-boot.service
● udm-boot.service - Run On Startup UDM
Loaded: bad-setting (Reason: Unit udm-boot.service has a bad unit file setting.)
Active: inactive (dead)
Aug 30 12:49:17 homegw systemd[1]: /lib/systemd/system/udm-boot.service:11: Executable name contains special characters: mkdir -p /mnt/data/on_boot.d && find -L /mnt/data/on_boot.d -mindepth 1 -maxdepth 1 -type f -print0 | sort -z | xargs -0 -r -n 1 -- sh -c 'if test -x "$0"; then echo "udm-boot.service: running $0"; "$0"; else case "$0" in *.sh) echo "udm-boot.service: sourcing $0"; . "$0";; *) echo "udm-boot.service: ignoring $0";; esac; fi'
Aug 30 12:49:17 homegw systemd[1]: udm-boot.service: Unit configuration has fatal error, unit will not be started.
@ausfestivus - Try the install script. https://github.com/unifi-utilities/unifios-utilities/tree/main/on-boot-script#install
Ubiquiti recently made 3.0.19 available in EA at https://community.ui.com/releases/UniFi-OS-Dream-Machines-3-0-19/aae685bb-4b96-4016-9125-29e57d7f2844
Known aspects of 3.x:

- `/data` is preserved in the upgrade, but `udm-boot_1.0.7_all.deb` needs to be installed again;
- `ssh-proxy` is not present in 3.x, but manual install runs correctly;
- the kernel is built without `CONFIG_BPF_SYSCALL` (tracked in https://github.com/unifi-utilities/unifios-utilities/issues/510), meaning podman cannot run on it without a custom kernel;
- ~~`/etc/apt/sources.list` is still pointing to `stretch`.~~ It now points to `bullseye`.

UPDATE: I've started a discussion at https://github.com/orgs/unifi-utilities/discussions/564 to get some feedback on how the migration to 3.x went for the more adventurous users among us.