Closed: Anexgohan closed this issue 6 months ago
pbs need kernel 4k page size
Thanks, I got PBS working in a VM on Proxmox on the RPi 5.
Testing it now, although I can't get PBS to "wipe the disk", and "Initialize Disk" gives errors every time: "failed to execute sgdisk". Same error when adding a directory.
Any ideas what causes this sgdisk issue?
Screenshots (captions only): ZFS online · passed-through 1 TB Crucial CT1000P3 SSD · backups all good and verified · proxmox-backup-server as storage on the host · Proxmox Backup Server VM · Pi-hole LXC · host on Raspberry Pi 5.
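One thing worth ruling out for the "failed to execute sgdisk" error above: on Debian the sgdisk binary ships in the gdisk package, and a minimal cloud image may simply not have it installed. A quick check (this is a guess at the cause, not a confirmed fix):

```shell
# Check whether sgdisk exists at all; PBS appears to shell out to it for
# "Wipe Disk" / "Initialize Disk". On Debian it comes from the gdisk package.
if command -v sgdisk >/dev/null 2>&1; then
    echo "sgdisk found at: $(command -v sgdisk)"
else
    echo "sgdisk missing; try: apt install gdisk"
fi
```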
Successfully set up a working instance of proxmox-backup-server in an LXC container running on the Raspberry Pi 5.
So far, getting USB disk passthrough to an LXC container has been a spectacular failure. I see the disks in the GUI, but nothing under /dev/disk/by-id/; the entry doesn't even exist. I've tried the following so far with no success:
lxc.cgroup2.devices.allow: c 10:* rwm
lxc.mount.entry: /dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0 dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0 none bind,optional,create=file
lxc.mount.entry: /dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0-part1 dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0-part1 none bind,optional,create=file
This lists the disks under /dev/disk/by-id/, but mounting inside the LXC gives permission errors, even after setting chmod 777 and chown 0:0. (If the container is unprivileged, mounting block devices is blocked outright, regardless of file permissions.)
mount /dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0-part1 /mnt/lxc/ssd/ssd-ct1000p3
Posting here in hopes that someone has insight on how to properly pass a USB-connected disk/SSD through to an LXC container. The aim is to pass a disk through and use the excellent ZFS support in PBS, with all the other benefits that come along with it, like compression and deduplication.
Update: I'm unable to pass disks through to an LXC, but bind mounts of ZFS datasets from the host into the LXC work fine. However, ZFS ends up using the entire 8 GB of RAM on the RPi 5, so this is a non-solution unless I can manually limit ZFS's RAM consumption.
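For anyone hitting the same RAM issue: the memory ZFS grabs is its ARC cache, and it can be capped with the zfs_arc_max module parameter (in bytes). A sketch assuming a 1 GiB cap; the snippet writes to /tmp purely for illustration, the real location on the host is /etc/modprobe.d/zfs.conf, followed by update-initramfs -u and a reboot:

```shell
# Cap the ZFS ARC at 1 GiB via the zfs_arc_max module option (value in bytes).
arc_bytes=$((1 * 1024 * 1024 * 1024))   # 1073741824
echo "options zfs zfs_arc_max=${arc_bytes}" > /tmp/zfs.conf   # real path: /etc/modprobe.d/zfs.conf
cat /tmp/zfs.conf
# On a running system the cap can also be applied immediately, no reboot:
#   echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max
```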
PBS on the VM has been working successfully for the past few days, with the disk passed through via simple USB port passthrough in the GUI, no configuration needed. Restores and backups work fine. Note: you cannot back up the VM itself (kind of obvious, but putting it here in case I ever get the same stupid idea again :-)
will post the VM setup after a bit more testing.
Instructions to get ZFS and proxmox-backup-server working in a VM. You need a VM running in Proxmox VE; tested with Debian and Ubuntu VMs.
Get a cloud image from here:
Debian: you need the genericcloud-arm64 image
Ubuntu: you need the QCow2 UEFI/GPT bootable disk image
In the container summary, memory usage and swap usage always show 0?
Recently RPi 5 updates moved this file to:
nano /boot/firmware/cmdline.txt
old file location (use whichever works for you):
nano /boot/cmdline.txt
put the following parameters at the end of the line:
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
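One pitfall with these parameters: cmdline.txt is a single line, and the flags must be appended to that line rather than added below it. A sketch of the edit on a sample file (the sample kernel arguments are made up):

```shell
# Sample cmdline.txt content (made up); the real file is one single line.
printf 'console=serial0,115200 root=/dev/mmcblk0p2 rootwait\n' > /tmp/cmdline.txt
# Append the cgroup flags to the end of that one line:
sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /tmp/cmdline.txt
cat /tmp/cmdline.txt
```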
Everything after "curl" is optional. "ifupdown2" is needed for Debian if you have network issues: "apt install ifupdown2". Only install "ifupdown2" after trying basic troubleshooting; it will either completely fix or completely bork your VM.
apt update && \
apt install nano wget curl openssh-server rsync bash-completion parted usbutils pciutils qemu-guest-agent -y
echo 'deb [arch=arm64] https://mirrors.apqa.cn/proxmox/debian/pve bookworm port'>/etc/apt/sources.list.d/pveport.list && \
echo 'deb https://mirrors.apqa.cn/proxmox/debian/pbs bookworm port'>/etc/apt/sources.list.d/pbs-port.list && \
curl https://mirrors.apqa.cn/proxmox/debian/pveport.gpg -o /etc/apt/trusted.gpg.d/pveport.gpg && \
apt update && apt full-upgrade -y
sudo sed -i 's/^/# /' /etc/apt/sources.list && \
echo "deb http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list && \
echo "deb http://deb.debian.org/debian/ bookworm-updates main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list && \
echo "deb http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list && \
echo "# see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports" | tee -a /etc/apt/sources.list && \
apt update
apt install proxmox-backup-server
reboot
If you encounter any error, some package doesn't install, or you get an "ifupdown2" error, reboot and run "apt install proxmox-backup-server" again. If the update fails because the repos are unreachable, or with a "Failed to fetch" error, disable the enterprise repo as shown below.
sed -i 's/^/#/' /etc/apt/sources.list.d/pbs-enterprise.list
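For clarity, that sed just prefixes every line in the file with "#", which comments the repository out. Demonstrated on a sample file (the repo line shown is illustrative, not copied from a real install):

```shell
# Sample enterprise repo entry (illustrative):
printf 'deb https://enterprise.proxmox.com/debian/pbs bookworm pbs-enterprise\n' > /tmp/pbs-enterprise.list
sed -i 's/^/#/' /tmp/pbs-enterprise.list   # prefix every line with '#'
cat /tmp/pbs-enterprise.list
```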
echo "deb https://mirrors.apqa.cn/proxmox/debian/kernel sid port" | tee -a /etc/apt/sources.list.d/kernel-port-sid.list && \
apt update
This should give you a long list of available "pve-kernel-*" Proxmox PVE kernel images:
apt update && apt search pve-kernel-6.1*
This will take a long while and will feel like it's stuck or crashed, but it's not; it's working in the background doing its thing, just be patient. On my RPi 5, with a 2-core, 2 GB RAM VM, it takes approximately 5-10 minutes.
apt install pve-kernel-6.5.11-generic
You can also install newer versions; 6.5.11 was the latest at the time of writing. Use this to list them: apt update && apt search pve-kernel-6.1 | grep -Pi '^(?=.*generic)(?=.*arm64)'
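What that grep does: the two PCRE lookaheads keep only lines containing both "generic" and "arm64", in any order. Demonstrated on fabricated apt-search-style lines (the package names are examples, not real output):

```shell
# Only the line containing both "generic" and "arm64" survives the filter:
printf '%s\n' \
  'pve-kernel-6.5.11-generic-arm64/sid 6.5.11-1 arm64' \
  'pve-kernel-6.1.10-generic-amd64/sid 6.1.10-1 amd64' \
  'pve-kernel-6.5.11-rpi-arm64/sid 6.5.11-1 arm64' \
  | grep -Pi '^(?=.*generic)(?=.*arm64)'
```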
You want to disable this repo if you do not wish to receive kernel updates. Also, suite 'sid' should not be used in production. Kernel updates can introduce instability if done improperly, so be careful.
sed -i 's/^/#/' /etc/apt/sources.list.d/kernel-port-sid.list
This will take a long while and will feel like it's stuck or crashed, but it's not; it's working in the background doing its thing, just be patient. On my RPi 5, with a 2-core, 2 GB RAM VM, it takes approximately 10 minutes.
apt update && apt install dpkg-dev zfs-dkms zfsutils-linux -y
Reboot (honestly, a simple 30-60 second reboot will save you a lot of headache and hours of troubleshooting):
reboot
If this does not throw an error, your ZFS is good to go:
modprobe zfs
apt update && apt full-upgrade -y
apt autoremove && \
apt autopurge && \
apt autoclean
get the VM's host ip address:
ip -c a
or with filter:
ip -c a | grep -E 'eth|vmbr|ens' -A 1 | grep 'inet' | awk '{print $2}'
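The same pipeline shown on a captured sample of ip -c a output (interface name and address are made up); piping through cut additionally strips the /24 prefix length if you want the bare address:

```shell
# Fabricated sample of one interface's `ip a` output:
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.50/24 brd 192.168.1.255 scope global eth0'
printf '%s\n' "$sample" | grep 'inet' | awk '{print $2}' | cut -d/ -f1
# prints: 192.168.1.50
```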
log in to PBS : https://ip-address-of-VM:8007/
Pass any disk by passing the entire USB port through to the VM from the Proxmox VE GUI. Now you can use ZFS inside PBS and get the awesomeness that is ZFS and deduplication.
Recently RPi 5 updates moved this file to:
nano /boot/firmware/cmdline.txt
old file location (use whichever works for you):
nano /boot/cmdline.txt
put the following parameters at the end of the line:
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
This is the fix for me, thanks!
glad it helped you.
Hello, first off, amazing work. I managed to set up Proxmox on my new RPi 5 within an hour and have an LXC up and running; it was so easy and error-free. Thanks for this port.
Secondly, my main query: I'm unable to set up Proxmox Backup Server in a running LXC. Is there a similar, easy-to-follow guide with instructions for setting it up?