jiangcuo / Proxmox-Port

Proxmox VE arm64 riscv64 loongarch64
GNU Affero General Public License v3.0
790 stars 44 forks source link

Instructions for setting up PBS on LXC or VM #16

Closed Anexgohan closed 6 months ago

Anexgohan commented 10 months ago

Hello, first off, amazing work. I managed to set up Proxmox on my new RPi5 within an hour and have an LXC up and running; it was easy and error-free. Thanks for this port.

Secondly, and my main query: I'm unable to set up Proxmox Backup Server on a running LXC. Is there a similar easy-to-follow guide or set of instructions for setting it up?

jiangcuo commented 10 months ago

PBS needs a kernel with 4K page size.

Anexgohan commented 10 months ago

PBS needs a kernel with 4K page size.

Thanks, I got PBS working in a VM on Proxmox on the RPi5.

Testing it now, although I can't get PBS to "wipe the disk", and "initialize the disk" gives the error "failed to execute sgdisk" every time. Same error when adding a directory.

The only thing that works for now is "create:ZFS" from the PBS GUI, with that backups and restore work.

Any ideas why this sgdisk issue occurs? Screenshots: ZFS online; the passed-through 1 TB Crucial CT1000P3 SSD; the backups, all good and verified; proxmox-backup-server as storage on the host.
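For what it's worth, "failed to execute sgdisk" can simply mean the binary is missing: sgdisk ships in the gdisk package, which minimal cloud images often don't include. A quick check (this cause is an assumption, not confirmed for this setup):

```shell
# sgdisk is provided by the "gdisk" package; if it is absent,
# "apt install gdisk" is a plausible fix for the PBS error above.
if command -v sgdisk >/dev/null 2>&1; then
  echo "sgdisk present"
else
  echo "sgdisk missing - try: apt install gdisk"
fi
```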

Also, the dashboard keeps loading forever; this started after adding ZFS support on the Proxmox host.


I'll post a "how to" later after testing. I'm also curious to get PBS working on LXC, since it works in a VM now.

Screenshots: Proxmox Backup Server VM; Pihole LXC; host on Raspberry Pi 5.

Anexgohan commented 10 months ago

Success:

Successfully set up a working instance of proxmox-backup-server in an LXC container running on the Raspberry Pi 5.

Failure / in progress:

Getting USB disk passthrough to an LXC container has been a spectacular failure up to this point. I see the disks in the GUI, but nothing appears under "/dev/disk/by-id/"; the entry doesn't even exist. I have tried the following so far, with no success:

lxc.cgroup2.devices.allow: c 10:* rwm
lxc.mount.entry: /dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0 dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0 none bind,optional,create=file
lxc.mount.entry: /dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0-part1 dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0-part1 none bind,optional,create=file

This lists the disks at "/dev/disk/by-id/", but mounting gives permission errors inside the LXC, even after setting chmod 777 and chown 0:0:

mount /dev/disk/by-id/usb-CT1000P3_SSD8_012345678935-0:0-part1 /mnt/lxc/ssd/ssd-ct1000p3

Posting here in hopes of someone having insights on how to properly pass a USB-connected disk/SSD through to an LXC container. The aim is to pass a disk through and use the excellent ZFS feature in PBS, with all the other benefits that come along with it, like compression and deduplication.
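One workaround, sketched below with a hypothetical container ID 101 and hypothetical paths: mount the disk on the Proxmox host, then bind-mount that path into the container via an mpN entry (equivalently, `pct set 101 -mp0 /mnt/ssd,mp=/mnt/ssd`):

```
# /etc/pve/lxc/101.conf -- bind-mount a host path into the container
mp0: /mnt/ssd,mp=/mnt/ssd
```

This sidesteps device-node permissions entirely, since the host owns the filesystem and the container only sees a directory.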

Anexgohan commented 9 months ago

Update: unable to pass disks through to LXC, but bind mounts of ZFS datasets from the host to the LXC work fine. However, ZFS ends up using the entire 8 GB of RAM on the RPi5, so this is a non-solution unless I can manually limit ZFS's RAM consumption.
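For the record, the RAM in question is ZFS's ARC cache, and it can be capped with the standard OpenZFS module parameter zfs_arc_max (value in bytes). A sketch capping it at 1 GiB:

```shell
# Cap the ZFS ARC at 1 GiB; zfs_arc_max is specified in bytes.
ARC_MAX=$((1 * 1024 * 1024 * 1024))   # 1 GiB
CONF="options zfs zfs_arc_max=${ARC_MAX}"
echo "$CONF"
# persist it on the host so it applies at every module load:
#   echo "$CONF" > /etc/modprobe.d/zfs.conf
# apply immediately without a reboot (module must be loaded):
#   echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
```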

PBS in the VM has been working successfully for the past few days, with the disk passed through via simple USB port passthrough in the GUI, no configuration needed. Restore and backups work fine. Note: it cannot back up the VM it runs in (kinda obvious, but putting it here in case I ever get the same stupid idea again :-)

Will post the VM setup after a bit more testing.

Anexgohan commented 8 months ago

Instructions to get ZFS and proxmox-backup-server working in a VM. You need a VM running in Proxmox VE; tested with Debian and Ubuntu VMs.

Get a cloud image from here: Debian:

you need the genericcloud-arm64 image

Ubuntu:

you need the QCow2 UEFI/GPT bootable disk image

A) This is done on the Raspberry Pi 5 host.

The Raspberry Pi 5 should use a kernel with 4K page size.

If the container summary always shows 0 for memory usage and swap usage, modify cmdline.txt as follows.

Recently RPi 5 updates moved this file to:

nano /boot/firmware/cmdline.txt

Old file location (use whichever works for you):

nano /boot/cmdline.txt

Append the following parameters to the end of the line:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
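The main gotcha is that cmdline.txt must remain a single line, so the parameters are appended to the existing line rather than added below it. A sketch, demonstrated on a scratch copy (on the Pi, point CMDLINE at the real file):

```shell
# Demonstrated on a temp file; on the Pi use
# CMDLINE=/boot/firmware/cmdline.txt (or /boot/cmdline.txt on older images).
CMDLINE=$(mktemp)
echo 'console=serial0,115200 root=PARTUUID=example rootwait' > "$CMDLINE"
# cmdline.txt must stay a single line, so append to the line itself:
sed -i 's/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' "$CMDLINE"
cat "$CMDLINE"
```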

Install some packages and utilities (the utilities are optional):

Everything after "curl" is optional. "ifupdown2" is also needed on Debian if you have network issues ("apt install ifupdown2"). Only install "ifupdown2" after trying basic troubleshooting; it will either completely fix or completely bork your VM.

apt update && \
apt install nano wget curl openssh-server rsync bash-completion parted usbutils pciutils qemu-guest-agent -y

Set up the repos:

echo 'deb [arch=arm64] https://mirrors.apqa.cn/proxmox/debian/pve bookworm port'>/etc/apt/sources.list.d/pveport.list && \
echo 'deb https://mirrors.apqa.cn/proxmox/debian/pbs bookworm port'>/etc/apt/sources.list.d/pbs-port.list && \
curl https://mirrors.apqa.cn/proxmox/debian/pveport.gpg -o /etc/apt/trusted.gpg.d/pveport.gpg && \
apt update && apt full-upgrade -y

Add non-free sources:

sudo sed -i 's/^/# /' /etc/apt/sources.list && \
echo "deb http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list && \
echo "deb http://deb.debian.org/debian/ bookworm-updates main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list && \
echo "deb http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list && \
echo "# see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports" | tee -a /etc/apt/sources.list && \
apt update

Install proxmox-backup-server:

apt install proxmox-backup-server
reboot

If you encounter an error, some package does not install, or you get an "ifupdown2" error, reboot and run "apt install proxmox-backup-server" again. If the update fails because the repos are unreachable ("Failed to fetch"), disable the enterprise repo as shown below.

Disable the enterprise repo, since it will halt apt updates if you don't have a subscription:

sed -i 's/^/#/' /etc/apt/sources.list.d/pbs-enterprise.list

Add the kernel repo:

echo "deb https://mirrors.apqa.cn/proxmox/debian/kernel sid port" | tee -a /etc/apt/sources.list.d/kernel-port-sid.list && \
apt update

Search for a kernel image for the Raspberry Pi 5:

This should give you a long list of available "pve-kernel-*" Proxmox PVE kernel images.

apt update && apt search 'pve-kernel-6.1*'

Install the kernel image for the Raspberry Pi 5:

This will take a long while and will feel like it's stuck or crashed, but it's not; it's working in the background doing its thing, so just be patient. On my RPi 5, in a 2-core, 2 GB RAM VM, it takes approximately 5-10 minutes.

apt install pve-kernel-6.5.11-generic

You can also install newer versions; 6.5.11 was the latest at the time of writing. Use this to list them: apt update && apt search 'pve-kernel-6.1*' | grep -Pi '^(?=.*generic)(?=.*arm64)'
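The lookahead filter can be sanity-checked against a mocked apt search listing (the package names below are illustrative, not real output):

```shell
# Mock listing; the two PCRE lookaheads keep only lines that mention
# both "generic" and "arm64", regardless of the order they appear in.
printf '%s\n' \
  'pve-kernel-6.5.11-generic/sid 6.5.11-7 arm64' \
  'pve-kernel-6.5.11-rpi/sid 6.5.11-7 arm64' \
  'pve-kernel-6.1.10-generic/sid 6.1.10-1 amd64' \
  | grep -Pi '^(?=.*generic)(?=.*arm64)'
```

Only the first line survives: the second lacks "generic" and the third lacks "arm64".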

Disable the kernel repo that was added above:

You want to disable this if you don't wish to receive kernel updates. Also, the 'sid' suite should not be used in production. Kernel updates can cause instability if done improperly, so be careful.

sed -i 's/^/#/' /etc/apt/sources.list.d/kernel-port-sid.list

Set up ZFS:

This will take a long while and will feel like it's stuck or crashed, but it's not; it's working in the background doing its thing, so just be patient. On my RPi 5, in a 2-core, 2 GB RAM VM, it takes approximately 10 minutes.

apt update && apt install dpkg-dev zfs-dkms zfsutils-linux -y

Reboot (honestly, a simple 30-60 second reboot will save you a lot of headache and hours of troubleshooting):

reboot

If this does not throw an error, your ZFS is good to go:

modprobe zfs

Do an update and upgrade:

apt update && apt full-upgrade -y

Remove broken and unnecessary leftovers:

apt autoremove && \
apt autopurge && \
apt autoclean 

Done

Get the VM's IP address:

ip -c a

or with a filter (the interface-name patterns cover typical eth*/vmbr*/ens* names; 'inet ' excludes IPv6 lines):

ip -c a | grep -E 'eth|vmbr|ens' -A 1 | grep 'inet ' | awk '{print $2}'

Log in to PBS: https://ip-address-of-VM:8007/

Pass any disk through by passing the entire USB port to the VM from the Proxmox VE GUI. Now you can use ZFS inside PBS and get the awesomeness that is ZFS and deduplication.
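For reference, that GUI step stores a usbN entry in the VM config; a sketch with a placeholder vmid and bus-port (check yours with `lsusb -t`):

```
# /etc/pve/qemu-server/<vmid>.conf -- pass a whole USB port through
usb0: host=1-2,usb3=1
```

The equivalent CLI is `qm set <vmid> --usb0 host=1-2,usb3=1`; passing the port (rather than a vendor:device ID) means whatever disk is plugged into that physical port shows up in the VM.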

gmenezes-sistemas11 commented 7 months ago

Recently RPi 5 updates moved this file to:

nano /boot/firmware/cmdline.txt

old file location: (use which-ever works for you)

nano /boot/cmdline.txt

put the following parameters to the end of the line:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

This is the fix for me, thanks!

Anexgohan commented 7 months ago

Recently RPi 5 updates moved this file to:
nano /boot/firmware/cmdline.txt
old file location: (use which-ever works for you)
nano /boot/cmdline.txt
put the following parameters to the end of the line:
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

This is the fix for me, thanks!

Glad it helped you.