TheGrandWazoo / freenas-proxmox

ZFS over iSCSI to FreeNAS APIs from Proxmox VE
MIT License

unable to open file '/var/lib/dpkg/tmp.ci//stable': Is a directory when installing freenas-proxmox_2.0.3-1-beta1_all.deb #98

Closed AndreasAZiegler closed 3 years ago

AndreasAZiegler commented 3 years ago

I tried to update, and later remove and reinstall, freenas-proxmox with apt install freenas-proxmox, and I get the following error:

apt install freenas-proxmox
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  freenas-proxmox
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/3,418 B of archives.
After this operation, 0 B of additional disk space will be used.
dpkg: error processing archive /var/cache/apt/archives/freenas-proxmox_2.0.3-1-beta1_all.deb (--unpack):
 unable to open file '/var/lib/dpkg/tmp.ci//stable': Is a directory
Errors were encountered while processing:
 /var/cache/apt/archives/freenas-proxmox_2.0.3-1-beta1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

lintian tells me

lintian -c /var/cache/apt/archives/freenas-proxmox_2.0.3-1-beta1_all.deb 
warning: the authors of lintian do not recommend running it with root privileges!
Skipping /var/cache/apt/archives/freenas-proxmox_2.0.3-1-beta1_all.deb: syntax error at line 18: Duplicate field source.

Any idea how I can resolve this issue?

AndreasAZiegler commented 3 years ago

I followed this tutorial to solve the problem with the Debian package.
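A workaround in that spirit (a sketch, not the tutorial itself): unpack the .deb, drop the duplicate Source: field that lintian flagged, and repack. The path comes from the apt output above; the awk one-liner is my assumption about the simplest fix.

```shell
# Sketch: repack the broken package with the duplicate "Source:" field removed.
# Guarded so it is a no-op if the cached .deb is not present.
DEB=/var/cache/apt/archives/freenas-proxmox_2.0.3-1-beta1_all.deb
if [ -f "$DEB" ] && command -v dpkg-deb >/dev/null 2>&1; then
    WORK=$(mktemp -d)
    dpkg-deb -R "$DEB" "$WORK/pkg"            # extract payload and control files
    # keep only the first "Source:" line in DEBIAN/control
    awk '!(/^Source:/ && seen++)' "$WORK/pkg/DEBIAN/control" > "$WORK/control.fixed"
    mv "$WORK/control.fixed" "$WORK/pkg/DEBIAN/control"
    dpkg-deb -b "$WORK/pkg" "$WORK/deb.deb"   # rebuild; then: dpkg -i "$WORK/deb.deb"
fi
```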

Now, I'm facing the next problem

dpkg -i deb.deb 
Selecting previously unselected package freenas-proxmox.
(Reading database ... 69349 files and directories currently installed.)
Preparing to unpack deb.deb ...
Unpacking freenas-proxmox (2.0.3-1-beta1) ...
Setting up freenas-proxmox (2.0.3-1-beta1) ...
Proxmox Version 6.3-1
Proxmox Major Version 6
Cloning proxmox-freenas github repo
Cloning into 'freenas-proxmox'...
remote: Enumerating objects: 203, done.
remote: Counting objects: 100% (203/203), done.
remote: Compressing objects: 100% (151/151), done.
remote: Total 612 (delta 75), reused 98 (delta 10), pack-reused 409
Receiving objects: 100% (612/612), 128.35 KiB | 4.01 MiB/s, done.
Resolving deltas: 100% (203/203), done.
Initiating 'configure' with arg ''
Configuring freenas-proxmox 
Patching /usr/share/perl5/PVE/Storage/ZFSPlugin.pm
Patching /usr/share/pve-manager/js/pvemanagerlib.js
Patching /usr/share/pve-docs/api-viewer/apidoc.js
Installing /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm
Restarting pvedaemon...
Attempt to reload PVE/Storage.pm aborted.
Compilation failed in require at /usr/share/perl5/PVE/AbstractConfig.pm line 9, <DATA> line 755.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/AbstractConfig.pm line 9, <DATA> line 755.
Compilation failed in require at /usr/share/perl5/PVE/QemuConfig.pm line 6, <DATA> line 755.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/QemuConfig.pm line 6, <DATA> line 755.
Compilation failed in require at /usr/share/perl5/PVE/HA/Resources/PVEVM.pm line 10, <DATA> line 755.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/HA/Resources/PVEVM.pm line 19, <DATA> line 755.
Compilation failed in require at /usr/share/perl5/PVE/HA/Env/PVE2.pm line 21, <DATA> line 755.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/HA/Env/PVE2.pm line 21, <DATA> line 755.
Compilation failed in require at /usr/share/perl5/PVE/API2/Cluster.pm line 14, <DATA> line 755.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/API2/Cluster.pm line 14, <DATA> line 755.
Compilation failed in require at /usr/share/perl5/PVE/API2.pm line 14, <DATA> line 755.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/API2.pm line 14, <DATA> line 755.
Compilation failed in require at /usr/share/perl5/PVE/Service/pvedaemon.pm line 8, <DATA> line 755.
BEGIN failed--compilation aborted at /usr/share/perl5/PVE/Service/pvedaemon.pm line 8, <DATA> line 755.
Compilation failed in require at /usr/bin/pvedaemon line 11, <DATA> line 755.
BEGIN failed--compilation aborted at /usr/bin/pvedaemon line 11, <DATA> line 755.
dpkg: error processing package freenas-proxmox (--install):
 installed freenas-proxmox package post-installation script subprocess returned error exit status 255
Errors were encountered while processing:
 freenas-proxmox
TheGrandWazoo commented 3 years ago

@AndreasAZiegler - Please refresh and rerun the upgrade. My apologies that I did not get to test this before the CI/CD scripts ran.

AndreasAZiegler commented 3 years ago

@TheGrandWazoo Thanks for the update. The installation works now.

Because I ran apt purge freenas-proxmox (I assume) while troubleshooting, all my ZFS drives are gone, along with the virtual machines that used them. I reconfigured the FreeNAS/Proxmox connection by following these steps, re-added the ZFS disks, then re-created the virtual machines and assigned the corresponding ZFS drives. Unfortunately, this doesn't seem to work: in the terminal I can see a message that it can't boot from the drive. Since I made several ZFS snapshots, I'm also wondering whether I could get access to those again.

As I encountered my initial problem after/while upgrading Proxmox itself, my current issues could also come from that upgrade. But I would still appreciate any help.

AndreasAZiegler commented 3 years ago

@TheGrandWazoo I also described my problem in the proxmox forum and hope that I get some help from there as well.

TheGrandWazoo commented 3 years ago

That should not have deleted your VM configurations nor removed any FreeNAS volumes; it should just remove FreeNAS.pm and reinstall the original Proxmox files that get patched.
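One way to sanity-check that on the node (a sketch; read-only dpkg queries, and `libpve-storage-perl` as the package owning the patched storage files is my assumption):

```shell
# Guarded so this is a no-op on systems without dpkg.
if command -v dpkg >/dev/null 2>&1; then
    # FreeNAS.pm should be gone after a purge
    ls -l /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm 2>/dev/null \
        || echo "FreeNAS.pm absent (expected after purge)"
    # any still-patched files owned by the storage package would show up here
    dpkg -V libpve-storage-perl 2>/dev/null || true
fi
```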

TheGrandWazoo commented 3 years ago

Does your /etc/pve/storage.cfg have "zfs:" entries with "iscsiprovider freenas" under them?

zfs: iscsi-pve01-storage
        blocksize 4k
        iscsiprovider freenas
        pool StorageTank/Proxmox01/vDisks
        portal 172.31.69.90
        target iqn.2018-06.com.ksatechnologies.freenas-lab.ctl:proxmox
        content images
        freenas_apiv4_host 172.31.69.90
        freenas_password A_PASSWORD
        freenas_use_ssl 1
        freenas_user SOME_ID
        nowritecache 0
        sparse 0

It is also possible to edit the VM's .conf file under /etc/pve/qemu-server/ and add or edit the line so that it looks something like:

scsi0: iscsi-pve01-storage:vm-100-disk-0,size=32G

Where vm-100-disk-0 is the name of the disk on the TrueNAS server.


And make sure your boot device reflects the disk type (e.g. scsi0). That can be whatever way you set up your VM to present your disk (SCSI, IDE, VirtIO).
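The same checks can be scripted on the PVE node; `pvesm list`, `qm config`, and `qm set` are standard Proxmox CLI, but the storage and volume names here are just the examples from this thread:

```shell
# Guarded so this is a no-op on machines without the Proxmox CLI.
if command -v pvesm >/dev/null 2>&1 && command -v qm >/dev/null 2>&1; then
    pvesm list iscsi-pve01-storage       # volumes the plugin exposes (vm-100-disk-0, ...)
    qm config 100                        # current scsi0/boot settings
    qm set 100 --scsi0 iscsi-pve01-storage:vm-100-disk-0   # attach the existing zvol
    qm set 100 --boot order=scsi0        # boot from it
fi
```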

AndreasAZiegler commented 3 years ago

The zfs entries in my /etc/pve/storage.cfg are the ones I re-added:

dir: local
        path /var/lib/vz
        content rootdir,iso,vztmpl,images,snippets
        maxfiles 0

zfs: zwiki01-system
        blocksize 4k
        iscsiprovider freenas
        pool storage
        portal 172.16.0.1
        target iqn.2005-10.org.freenas.ctl:zwiki01-system
        content images
        freenas_apiv4_host 172.16.0.1
        freenas_password A_PASSWORDS
        freenas_use_ssl 1
        freenas_user root
        nowritecache 0
        sparse 0

zfs: zldap01-system
        blocksize 4k
        iscsiprovider freenas
        pool storage
        portal 172.16.0.1
        target iqn.2005-10.org.freenas.ctl:zldap01-system
        content images
        freenas_apiv4_host 172.16.0.1
        freenas_password A_PASSWORDS
        freenas_use_ssl 1
        freenas_user root
        nowritecache 0
        sparse 0

zfs: zfs02-system
        blocksize 4k
        iscsiprovider freenas
        pool storage
        portal 172.16.0.1
        target iqn.2005-10.org.freenas.ctl:zfs02-system
        content images
        freenas_apiv4_host 172.16.0.1
        freenas_password A_PASSWORDS
        freenas_use_ssl 1
        freenas_user root
        nowritecache 0
        sparse 0

The one VM I re-added so far has the following config:

 cat /etc/pve/qemu-server/100.conf 
boot: c
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 2048
name: zldap01
net0: virtio=46:42:39:E2:A3:EF,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: zldap01-system:vm-100-disk-1,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=d11bb515-1c37-4c28-9f70-523b2680832a
sockets: 1
vmgenid: c2bd401a-b279-499d-bf30-409fa2ad9862
vmstatestorage: zldap01-system

with these zfs zvols present in FreeNAS


TheGrandWazoo commented 3 years ago

If you select your storage 'zldap01-system', do you see your disks?

TheGrandWazoo commented 3 years ago

Looking at my 100.conf, my 'boot' is a bit different.

boot: order=ide2;scsi0;net0

And I don't have a 'bootdisk' parameter at all. I'm not sure whether the 6.3 setup does things differently, but my VM has gone through many updates, so I could have old params. Or it could be that I am using OVMF.

AndreasAZiegler commented 3 years ago

I can see the same disks as in FreeNAS.

When I changed scsi0 to zldap01-system:vm-100-disk-0,size=15.25GB (from disk-1 to disk-0), I was able to boot. But now I'm confused about the versions: I assumed that for the same disk, e.g. vm-100, the most current disk/snapshot is the bottom one. Do you know how I could re-create the snapshot history?
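On the FreeNAS/TrueNAS side, the zvol creation times are a more reliable ordering than the list position; a sketch (pool name `storage` is taken from the storage.cfg above):

```shell
# Guarded so this is a no-op on machines without ZFS tools.
if command -v zfs >/dev/null 2>&1; then
    # creation time shows which vm-100-disk-* was made last
    zfs list -o name,creation -r storage
    # snapshots that survived, per zvol
    zfs list -t snapshot -r storage
fi
```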

TheGrandWazoo commented 3 years ago

Let me see how it is done and I'll post the config. It is stored in the VM's config file.

TheGrandWazoo commented 3 years ago

My VM 100 config with two snapshots...first with RAM and second without RAM.

#NET0 %3A LAN
#NET1%3A WAN
bios: ovmf
boot: order=ide2;scsi0;net0
cores: 2
cpu: host
efidisk0: local-vmimages:vm-100-disk-0,size=1M
ide2: none,media=cdrom
machine: q35
memory: 8192
name: OPNSense01-hq
net0: virtio=6E:A0:75:DE:4C:78,bridge=vmbr0,queues=4
net1: virtio=86:1C:5F:79:31:74,bridge=vmbr1,queues=4
net2: virtio=86:29:40:EE:20:36,bridge=vmbr9999,queues=4
net3: virtio=D2:57:9C:3F:2E:1B,bridge=vmbr0,queues=4,tag=200
net4: virtio=0E:DC:2F:D3:21:B9,bridge=vmbr0,queues=4,tag=400
net5: virtio=16:BD:C7:C1:F9:8F,bridge=vmbr0,queues=4,tag=401
net6: virtio=4E:EC:91:3F:84:36,bridge=vmbr0,queues=4,tag=402
net7: virtio=B2:3E:55:5B:59:10,bridge=vmbr0,queues=4,tag=410
net8: virtio=AA:D3:98:CB:9C:46,bridge=vmbr0,queues=4,tag=420
numa: 0
onboot: 1
ostype: other
parent: TestnoRam
protection: 1
scsi0: iscsi-pve01-storage:vm-100-disk-0,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=5000dbe0-7473-48e5-b18b-2b7eeab0f482
sockets: 2
tablet: 0
vga: virtio

[Test]
#A test snapshot with RAM
bios: ovmf
boot: order=ide2;scsi0;net0
cores: 2
cpu: host
efidisk0: local-vmimages:vm-100-disk-0,size=1M
ide2: none,media=cdrom
machine: q35
memory: 8192
name: OPNSense01-hq
net0: virtio=6E:A0:75:DE:4C:78,bridge=vmbr0,queues=4
net1: virtio=86:1C:5F:79:31:74,bridge=vmbr1,queues=4
net2: virtio=86:29:40:EE:20:36,bridge=vmbr9999,queues=4
net3: virtio=D2:57:9C:3F:2E:1B,bridge=vmbr0,queues=4,tag=200
net4: virtio=0E:DC:2F:D3:21:B9,bridge=vmbr0,queues=4,tag=400
net5: virtio=16:BD:C7:C1:F9:8F,bridge=vmbr0,queues=4,tag=401
net6: virtio=4E:EC:91:3F:84:36,bridge=vmbr0,queues=4,tag=402
net7: virtio=B2:3E:55:5B:59:10,bridge=vmbr0,queues=4,tag=410
net8: virtio=AA:D3:98:CB:9C:46,bridge=vmbr0,queues=4,tag=420
numa: 0
onboot: 1
ostype: other
protection: 1
runningcpu: host,+kvm_pv_eoi,+kvm_pv_unhalt
runningmachine: pc-q35-5.2+pve0
scsi0: iscsi-pve01-storage:vm-100-disk-0,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=5000dbe0-7473-48e5-b18b-2b7eeab0f482
snaptime: 1618674590
sockets: 2
tablet: 0
vga: virtio
vmstate: iscsi-pve01-storage:vm-100-state-Test

[TestnoRam]
#A test snapshot with NO RAM
bios: ovmf
boot: order=ide2;scsi0;net0
cores: 2
cpu: host
efidisk0: local-vmimages:vm-100-disk-0,size=1M
ide2: none,media=cdrom
machine: q35
memory: 8192
name: OPNSense01-hq
net0: virtio=6E:A0:75:DE:4C:78,bridge=vmbr0,queues=4
net1: virtio=86:1C:5F:79:31:74,bridge=vmbr1,queues=4
net2: virtio=86:29:40:EE:20:36,bridge=vmbr9999,queues=4
net3: virtio=D2:57:9C:3F:2E:1B,bridge=vmbr0,queues=4,tag=200
net4: virtio=0E:DC:2F:D3:21:B9,bridge=vmbr0,queues=4,tag=400
net5: virtio=16:BD:C7:C1:F9:8F,bridge=vmbr0,queues=4,tag=401
net6: virtio=4E:EC:91:3F:84:36,bridge=vmbr0,queues=4,tag=402
net7: virtio=B2:3E:55:5B:59:10,bridge=vmbr0,queues=4,tag=410
net8: virtio=AA:D3:98:CB:9C:46,bridge=vmbr0,queues=4,tag=420
numa: 0
onboot: 1
ostype: other
parent: Test
protection: 1
scsi0: iscsi-pve01-storage:vm-100-disk-0,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=5000dbe0-7473-48e5-b18b-2b7eeab0f482
snaptime: 1618674843
sockets: 2
tablet: 0
vga: virtio


Looks like the no-RAM snapshot shows up just under 'Snapshots' in FreeNAS.

I hope this helps!!!

AndreasAZiegler commented 3 years ago

@TheGrandWazoo Thanks a lot. I only had snapshots with RAM. Thanks to your help, I managed to re-create my configuration and should now be back where I was.