rogerxu opened 2 years ago
xiangfeidexiaohuo/ProxmoxVE-7.0-DIY: Tutorials for Proxmox VE 7.x mirror switching, disabling the subscription notice, passthrough, and more. (github.com)
Proxmox VE 6/7 repository configuration and disabling the subscription reminder - inSilen Studio
ivanhao/pvetools: proxmox ve tools script (github.com)
/etc/apt/sources.list
deb https://mirrors.ustc.edu.cn/debian bullseye main contrib
deb https://mirrors.ustc.edu.cn/debian bullseye-updates main contrib
deb https://mirrors.ustc.edu.cn/debian-security bullseye-security main contrib
deb https://mirrors.ustc.edu.cn/proxmox/debian/pve bullseye pve-no-subscription
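If the node still has the enterprise repository enabled, apt will keep erroring on it even with the mirrors above in place. A sketch of disabling it (the file path is the PVE 7 default; run as root):

```shell
# Comment out the enterprise repo (it requires a subscription), then refresh
sed -i 's|^deb |# deb |' /etc/apt/sources.list.d/pve-enterprise.list
apt update
```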
/usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
checked_command: function(orig_cmd) {
}
$ systemctl restart pveproxy
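The `checked_command` edit is often applied with a widely shared sed one-liner instead of hand-editing. The pattern below is a sketch against the PVE 7 widget toolkit and may need adjusting on other versions; the `printf` line only demonstrates the rewrite on a sample snippet:

```shell
# Rewrite the 'No valid subscription' dialog call into a no-op: void({ ... })
PATCH="s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g"

# Demonstrate the substitution on a sample snippet
# (the rewritten text starts with: void({ //Ext.Msg.show({ ...)
printf "Ext.Msg.show({\n    title: gettext('No valid subscription')" | sed -Ez "$PATCH"

# On a real node (uncomment):
# sed -Ezi.bak "$PATCH" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
# systemctl restart pveproxy
```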
Proxmox VE 虚拟机磁盘的选择 (buduanwang.vip)
CIFS Backend - Proxmox VE Storage
Storage pool type: cifs
What is CIFS (Common Internet File System)? (techtarget.com)
Add CIFS storage in the GUI: fields server and share (example storage id: temp, share name: share)
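A CIFS storage can also be added from the CLI; server address, share name, and credentials below are placeholders:

```shell
# Register a CIFS storage named 'temp' (placeholder server/share/credentials)
pvesm add cifs temp --server 192.168.31.2 --share share \
    --username guest --password secret
pvesm status
```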
cp /usr/share/perl5/PVE/APLInfo.pm /usr/share/perl5/PVE/APLInfo.pm_back
sed -i 's|http://download.proxmox.com|https://mirrors.ustc.edu.cn/proxmox|g' /usr/share/perl5/PVE/APLInfo.pm
$ systemctl restart pvedaemon
$ pveam update
$ pveam available
LXC - Index of /images (canonical.com)
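After `pveam update`, a template from the `available` list can be pulled into the local storage; the template name below is a placeholder taken from a typical PVE 7 list:

```shell
# List system templates, then download one into the 'local' storage
pveam available --section system
pveam download local debian-11-standard_11.3-1_amd64.tar.zst
```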
/etc/systemd/network/eth0.network
DHCP IPv4
[Match]
Name = eth0
[Network]
Description = Interface eth0 autoconfigured by PVE
DHCP = v4
IPv6AcceptRA = true
Static IPv4
[Match]
Name = eth0
[Network]
Description = Interface eth0 autoconfigured by PVE
Address = 192.168.31.101/24
Gateway = 192.168.31.1
DHCP = no
IPv6AcceptRA = true
mDNS
$ cp /etc/systemd/network/eth0.network /etc/systemd/network/10-eth0-mdns.network
$ echo 'MulticastDNS = true' >> /etc/systemd/network/10-eth0-mdns.network
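systemd-networkd only picks up the new file after a restart. Note that `MulticastDNS = true` must end up in the `[Network]` section; the append above achieves this because the file ends with that section. A sketch:

```shell
# Reload the network configuration, then inspect the interface
systemctl restart systemd-networkd
networkctl status eth0
```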
Local devices or local directories can be mounted directly using bind mounts. This gives access to local resources inside a container with practically zero overhead. Bind mounts can be used as an easy way to share data between containers.
Bind mounts allow you to access arbitrary directories from your Proxmox VE host inside a container. Some potential use cases are:
Bind mounts are considered to not be managed by the storage subsystem, so you cannot make snapshots or deal with quotas from inside the container.
Unprivileged LXC containers - Proxmox VE
With unprivileged containers you might run into permission problems caused by the user mapping and cannot use ACLs.
$ pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
However, you will soon realise that every file and directory is mapped to "nobody" (uid 65534).
All UIDs (user IDs) and GIDs (group IDs) are mapped to a different number range than on the host machine; usually root (uid 0) becomes uid 100000, uid 1 becomes 100001, and so on.
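To keep a bind-mounted directory writable, part of the uid/gid range can be mapped straight through. A sketch for `/etc/pve/lxc/100.conf` mapping host uid/gid 1000 to the same ids in the container (the passed-through ids must also be allowed for root in `/etc/subuid` and `/etc/subgid`):

```
# map container uids/gids 0-999 to 100000-100999
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
# map container uid/gid 1000 directly to host uid/gid 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
# map the remainder of the default 65536-id range
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```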
$ pct fstrim 100
Running docker inside an unprivileged LXC container on Proxmox - du.nkel.dev
$ sudo apt install qemu-guest-agent
$ sudo systemctl start qemu-guest-agent
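On the PVE side, the agent option also has to be enabled for the VM (vmid 100 assumed; the VM must be powered off and on again for it to take effect):

```shell
# Enable the QEMU guest agent option for VM 100
qm set 100 --agent enabled=1
```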
Qemu/KVM Virtual Machines (proxmox.com)
How to configure PCI(e) passthrough on Proxmox VE | Matthew DePorter
ProxmoxVE enabling hardware passthrough - ZIMRI`Blog (zimrilink.com)
/etc/default/grub
# Intel CPUs need intel_iommu=on; on AMD the IOMMU is enabled by default (iommu=pt is optional)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
$ update-grub
You have to make sure the following modules are loaded.
/etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
$ lspci -nn
05:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Raven Ridge [Radeon Vega Series / Radeon Vega Mobile Series] [1002:15dd] (rev cb)
05:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Raven/Raven2/Fenghuang HDMI/DP Audio Controller [1002:15de]
05:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor [1022:15df]
05:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Raven2 USB 3.1 [1022:15e5]
05:00.6 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) HD Audio Controller [1022:15e3]
/etc/modprobe.d/vfio.conf
options vfio-pci ids=1002:15dd,1002:15de
/etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
Update the initramfs
$ update-initramfs -u -k all
Reboot
$ dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
AMD-Vi: Interrupt remapping enabled
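After the reboot it is also worth confirming that vfio-pci has claimed the devices to be passed through (device addresses taken from the lspci output above):

```shell
# 'Kernel driver in use: vfio-pci' should appear for each passed-through function
lspci -nnk -s 05:00.0
lspci -nnk -s 05:00.1
```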
PVE Node > System > Network
/etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
address 192.168.31.53/24
gateway 192.168.31.1
bridge-ports enp1s0
bridge-stp off
bridge-fd 0
PVE Node > System > DNS
/etc/resolv.conf
search local
nameserver 192.168.31.101
nameserver 192.168.31.2
nameserver 192.168.31.1
Configuring a public IPv6 address on a Proxmox bridge via SLAAC - haiyun.me
Connecting to Proxmox VE over IPv6 – Ferrets' WordPress
Dual-stacking Proxmox Web UI (pveproxy) - Simon Mott
/etc/sysctl.conf
# SLAAC IPv6
net.ipv6.conf.all.accept_ra=2
net.ipv6.conf.default.accept_ra=2
net.ipv6.conf.vmbr0.accept_ra=2
net.ipv6.conf.all.autoconf=1
net.ipv6.conf.default.autoconf=1
net.ipv6.conf.vmbr0.autoconf=1
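The new sysctl keys can be loaded without a reboot; the reads below should then return the values shown:

```shell
# Reload /etc/sysctl.conf (requires root)
sysctl -p /etc/sysctl.conf
```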
$ cat /proc/sys/net/ipv6/conf/vmbr0/accept_ra
2
$ cat /proc/sys/net/ipv6/conf/vmbr0/autoconf
1
$ cat /proc/sys/net/ipv6/conf/vmbr0/forwarding
1
$ proxmox-boot-tool status
Check EFI boot status
$ efibootmgr -v
/etc/default/grub
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
/etc/grub.d/
$ ls -1 /etc/grub.d/
000_proxmox_boot_header
00_header
05_debian_theme
10_linux
20_linux_xen
20_memtest86+
30_os-prober
30_uefi-firmware
40_custom
41_custom
/etc/grub.d/40_custom
#!/bin/sh
exec tail -n +3 $0
menuentry 'Microsoft Windows' --class windows --class os --id 'win' {
insmod part_gpt
insmod fat
insmod chain
# search --label --set root --no-floppy EFI
# search --fs-uuid --set root --no-floppy DEC3-2445
search --file --set root --no-floppy /EFI/Microsoft/Boot/bootmgfw.efi
# set root=(hd0,1)
echo 'Start Windows...'
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
Generate grub config
$ update-grub
/usr/sbin/update-grub
#!/bin/sh
set -e
exec grub-mkconfig -o /boot/grub/grub.cfg "$@"
/boot/grub/grub.cfg
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Proxmox VE GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os --id 'gnulinux-simple-xxx' {
}
submenu 'Advanced options for Proxmox VE GNU/Linux' --id 'gnulinux-simple-xxx' {
}
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry 'Memory test (memtest86+)' {
}
### END /etc/grub.d/20_memtest86+ ###
### BEGIN /etc/grub.d/30_uefi-firmware ###
menuentry 'System setup' $menuentry_id_option 'uefi-firmware' {
fwsetup
}
### END /etc/grub.d/30_uefi-firmware ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
$ apt install lm-sensors
$ sensors-detect
$ sensors
/usr/share/perl5/PVE/API2/Nodes.pm
$res->{pveversion} = PVE::pvecfg::package() . "/" .
PVE::pvecfg::version_text();
$res->{thermalstate} = `sensors -j`;
$res->{thermal_hdd} = `hddtemp /dev/sd?`;
my $dinfo = df('/', 1); # output is bytes
/usr/share/pve-manager/js/pvemanagerlib.js
Ext.define('PVE.node.StatusView', {
extend: 'PVE.panel.StatusView',
alias: 'widget.pveNodeStatus',
height: 400,
bodyPadding: '20 15 20 15',
}
{
title: gettext('PVE Manager Version'),
textField: 'pveversion',
value: '',
},
// AMD (k10temp) variant
{
itemId: 'thermal',
colspan: 2,
printBar: false,
title: gettext('Thermal'),
textField: 'thermalstate',
renderer: function(value) {
const obj = JSON.parse(value);
const cpu = obj['k10temp-pci-00c3']['Tctl']['temp1_input'];
const ssd1 = obj['nvme-pci-0100']['Sensor 2']['temp3_input'];
const ssd2 = obj['nvme-pci-0200']['Composite']['temp1_input'];
return `CPU: ${cpu} ℃ || SSD1: ${ssd1} ℃ | SSD2: ${ssd2} ℃`;
},
},
// Intel (coretemp) variant
{
itemId: 'thermal',
colspan: 2,
printBar: false,
title: gettext('Thermal'),
textField: 'thermalstate',
renderer: function(value) {
const obj = JSON.parse(value);
const pkg = obj['coretemp-isa-0000']['Package id 0']['temp1_input']; // 'package' is a reserved word in strict mode
const core0 = obj['coretemp-isa-0000']['Core 0']['temp2_input'];
const core1 = obj['coretemp-isa-0000']['Core 1']['temp3_input'];
return `CPU Package: ${pkg} ℃ || Core 0: ${core0} ℃ | Core 1: ${core1} ℃`;
},
},
$ systemctl restart pveproxy
Chapter 7: RAID and LVM disk array technology | Linux Probe (linuxprobe.com)
Complete Beginner's Guide to LVM in Linux [With Hands-on] (linuxhandbook.com)
PVE's local and local-lvm storage, and how to remove them (buduanwang.vip)
All-in-one mini-server reinstall notes: Proxmox VE 6.3 installation and configuration | D2O (d2okkk.net)
In LVM, a thin pool named data has been created:
$ lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 59.66g 27.26 2.40
root pve -wi-ao---- 27.75g
snap_vm-100-disk-0_docker pve Vri---tz-k 32.00g data vm-100-disk-0
swap pve -wi-ao---- 8.00g
vm-100-disk-0 pve Vwi-aotz-- 32.00g data 40.88
vm-100-state-docker pve Vwi-a-tz-- <4.49g data 27.20
vm-101-disk-0 pve Vwi-aotz-- 8.00g data 17.46
Storage configuration /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content iso,vztmpl,backup
lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images
List LVM vg (volume group)
$ pvesm scan lvm
pve
List LVM thin pool for a vg (volume group)
$ pvesm scan lvmthin pve
data
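A scanned VG/thin pool can then be registered as a storage (VG and pool names from the scan output above; the storage id is a placeholder):

```shell
# Register the 'data' thin pool in VG 'pve' as a storage called 'lvm-data'
pvesm add lvmthin lvm-data --vgname pve --thinpool data --content rootdir,images
```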
$ gdisk /dev/sda
$ pvcreate /dev/sda2
$ pvs
$ vgs
$ vgcreate vg /dev/sda2
$ vgextend vg /dev/sda2
$ lvs
$ lvdisplay
--- Logical volume ---
LV Path /dev/pve/vm-113-disk-0
LV Name vm-113-disk-0
VG Name pve
LV UUID lOyUiE-MPK0-lRxz-62me-mBBK-frmY-KstAdx
LV Write Access read/write
LV Creation host, time pve, 2022-05-22 18:42:16 +0800
LV Pool name data
LV Thin origin name base-112-disk-0
LV Status available
# open 1
LV Size 4.00 GiB
Mapped size 48.76%
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:29
$ lvcreate -n pool --type thin-pool -l 100%FREE vg
/dev/vg/pool
$ lvcreate -n lvol1 -V 10G --thin-pool pool vg
$ lvcreate -n lvol1 -V 10G vg/pool
/dev/vg/lvol1
Extend LV size
root@host$ lvextend -r -v -L +1G pve/vm-100-disk-1
root@host$ qm rescan
Restore LXC with the specified rootfs disk size
root@host$ pct restore 100 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs local-lvm:10
Extend LV size of the VM disk in the host.
root@host$ lvextend -v -L +1G pve/vm-104-disk-1
root@host$ qm rescan
rescan volumes...
VM 104 (scsi0): size of disk 'local-lvm:vm-104-disk-1' updated from 15G to 16G
Extend the partition table in the guest VM
user@guest$ sudo parted /dev/sda
(parted) print free
(parted) resizepart 2 100%
Check and fix file system error
user@guest$ sudo e2fsck -f /dev/sda2
Resize the file system
user@guest$ sudo resize2fs -p /dev/sda2
SysOps | How to reduce/shrink an LVM logical volume in Linux
Unmount LV
$ umount /dev/pve/vm-100-disk-1
Check file system error
$ e2fsck -f /dev/pve/vm-100-disk-1
Resize the file system to shrink an unmounted file system on a device.
$ resize2fs -p /dev/pve/vm-100-disk-1 2G
Reduce LV size
root@host$ lvreduce -r -v -L -1G pve/vm-100-disk-1
root@host$ qm rescan
Restore LXC with the specified rootfs disk size
root@host$ pct restore 100 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs local-lvm:10
Trim
user@guest$ sudo fstrim -av
Unmount partition in the VM
user@guest$ sudo umount /dev/sdb1
Check and fix file system error
user@guest$ sudo e2fsck -f /dev/sdb1
Resize the file system to shrink an unmounted file system on a device.
user@guest$ sudo resize2fs -p /dev/sdb1 16G
Reduce the partition size in the guest VM with parted
user@guest$ sudo parted /dev/sdb
(parted) resizepart
Partition number: 1
End? [18.0GB]? 16G
(parted) print free
The unused (free) space shown should be larger than the amount you plan to reduce the disk by.
Check and fix file system error
user@guest$ sudo e2fsck -f /dev/sdb1
Resize the file system
user@guest$ sudo resize2fs -p /dev/sdb1
Reduce VM disk size in the host
root@host$ qemu-img info /dev/pve/vm-104-disk-1
root@host$ qemu-img resize --shrink /dev/pve/vm-104-disk-1 -1G
Shrink LV size of the VM disk in the host.
root@host$ lvreduce -v -L -1G pve/vm-104-disk-1
root@host$ qm rescan
rescan volumes...
VM 104 (scsi0): size of disk 'local-lvm:vm-104-disk-1' updated from 16G to 15G
Fix the partition table in the guest VM
user@guest$ sudo gdisk /dev/sdb
Command (? for help): v    (verify the disk)
Command (? for help): x    (enter the expert menu)
Command (? for help): e    (relocate backup data structures to the end of the disk)
Command (? for help): w    (write the table to disk; gdisk exits after writing)
Command (? for help): q    (quit; only needed if you did not write with w)
user@guest$ sudo parted
(parted) print free
$ wget https://downloads.openwrt.org/releases/21.02.3/targets/x86/64/openwrt-21.02.3-x86-64-rootfs.tar.gz
$ wget https://fw.koolcenter.com/LEDE_X64_fw867/LXC%20CT%E6%A8%A1%E6%9D%BF/openwrt-koolshare-router-v3.2-r19470-2f7d60f0e5-x86-64-generic-rootfs.tar.gz
OpenWrt firmware download and online custom builds (supes.top)
$ wget https://op.supes.top/releases/targets/x86/64/openwrt-08.02.2022-x86-64-generic-rootfs.tar.gz
OpenWrt Wiki - OpenWrt in LXC containers
Dual-NIC PVE 8.0.3 LXC running OpenWrt as the main router: a reliable, simple tutorial (leiyanhui.com)
Building an OpenWrt soft router under ProxmoxVE 7.0 LXC - kangzeru's blog - CSDN
Building an OpenWrt soft router under ProxmoxVE 7.0 LXC - 4XU
The simplest workable OpenWrt soft-router setup under PVE LXC, fully usable as the main router - right.com.cn wireless forum
pct create 101 local:vztmpl/openwrt-02.01.2024-x86-64-generic-rootfs.tar.gz \
--rootfs local-lvm:1 \
--ostype unmanaged \
--hostname openwrt \
--arch amd64 \
--cores 2 \
--memory 512 \
--swap 0 \
--net0 bridge=vmbr0,name=eth0
/etc/pve/lxc/101.conf
# openwrt.common.conf is an example OpenWrt config shipped with PVE, containing some basic settings
lxc.include: /usr/share/lxc/config/openwrt.common.conf
# Assign the host NIC enp3s0 to the container; change to match your hardware
lxc.net.1.type: phys
lxc.net.1.link: enp3s0
lxc.net.1.flags: up
lxc.net.1.name: eth1
# Bind-mount /dev/ppp into the LXC (needed for PPPoE)
lxc.cgroup2.devices.allow: c 108:0 rwm
lxc.mount.entry: /dev/ppp dev/ppp none bind,create=file
# Bind-mount /dev/net/tun into the LXC
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
# Clear the capability drops from openwrt.common.conf, otherwise OpenClash cannot run
lxc.cap.drop:
$ pct start 101
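Once started, the container's state can be checked and its console attached (detach from the console with Ctrl+a then q):

```shell
pct status 101
pct console 101
```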
/etc/config/network
config interface 'loopback'
option device 'lo'
option proto 'static'
option ipaddr '127.0.0.1'
option netmask '255.0.0.0'
config globals 'globals'
option ula_prefix 'fd1e:6bd7:3a36::/48'
config device
option name 'br-lan'
option type 'bridge'
list ports 'eth0'
config interface 'lan'
option device 'br-lan'
option proto 'static'
option ipaddr '192.168.1.101'
option netmask '255.255.255.0'
option gateway '192.168.1.1'
option ip6assign '60'
list dns '192.168.1.1'
Mirror
/etc/opkg/distfeeds.conf
$ sed -i 's|downloads.openwrt.org|mirrors.ustc.edu.cn/openwrt|g' /etc/opkg/distfeeds.conf
$ opkg update
$ opkg list-upgradable
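Upgrading everything that `list-upgradable` reports is a common one-liner. Hedged: on OpenWrt, opkg upgrades are written to overlay storage and can fill it up, so blanket upgrades are often discouraged:

```shell
# Upgrade all packages reported as upgradable
opkg list-upgradable | cut -f 1 -d ' ' | xargs -r opkg upgrade
```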
Install