irishgordo opened 2 months ago
This just seems to be isolated to Tumbleweed. OS rel:
test@localhost:~> cat /etc/os-release
NAME="openSUSE Tumbleweed"
# VERSION="20240829"
ID="opensuse-tumbleweed"
ID_LIKE="opensuse suse"
VERSION_ID="20240829"
PRETTY_NAME="openSUSE Tumbleweed"
ANSI_COLOR="0;32"
# CPE 2.3 format, boo#1217921
CPE_NAME="cpe:2.3:o:opensuse:tumbleweed:20240829:*:*:*:*:*:*:*"
#CPE 2.2 format
#CPE_NAME="cpe:/o:opensuse:tumbleweed:20240829"
BUG_REPORT_URL="https://bugzilla.opensuse.org"
SUPPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org"
DOCUMENTATION_URL="https://en.opensuse.org/Portal:Tumbleweed"
LOGO="distributor-logo-Tumbleweed"
Attempted again with a newer version. Additionally, I gave the guest VM in vSphere considerably more resources than in the previous trial, so it meets the "system requirements":
Other distributions currently seem to be doing well for vSphere v7.0.3 -> Harvester v1.3.2-rc2 imports, however.
@irishgordo do you observe the same behavior with a similar setup on other distros, like openSUSE Leap 15.6?
@ibrokethecloud -> I don't believe Leap encountered this; I will double-check on 9/3 -> and update this :+1: :smile:
@ibrokethecloud the same thing is happening with OpenSUSE Leap 15.6:
cc: @bk201 @khushboo-rancher
Describe the bug

Note: this is not seen when using VM imports w/ multi-disks from OpenStack (Devstack 2023.1/stable); it seems to be present in vSphere imports. The test guest OS was Tumbleweed, installed server minimal. Pretty much: importing a multi-disk VM (two disks in this case) from vSphere v7.0.3 was successful, but it utterly broke /etc/fstab, as the UUID / "blkid" value that was leveraged when building /etc/fstab to tell the drive to mount at a certain point is now broken.

To Reproduce

Steps to reproduce the behavior (a rough command sketch follows the list):
1. On the vSphere guest, `fdisk /dev/` the second disk
2. `mkfs.ext4 /dev/` the new partition
3. `mkdir -p /mnt/tinylittledisk`
4. `blkid` to get the UUID of the new filesystem
5. Edit `/etc/fstab`, w/ the UUID -> `/mnt/tinylittledisk ext4 defaults 0 0` (or whatever)
6. Mount `/mnt/tinylittledisk`, which ties in to the Volume via `fdisk -l`
7. Import the multi-disk VM from vSphere into Harvester
8. `/etc/fstab` when the VM boots up (into Harvester) is toast
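For concreteness, a minimal sketch of that in-guest setup is below. The device names (`/dev/sdb`, `/dev/sdb1`) are assumptions for illustration; the issue doesn't name the actual second disk.

```bash
# Sketch of the in-guest setup, assuming the second disk shows up as /dev/sdb
# (hypothetical device name; adjust to whatever fdisk -l reports).
fdisk /dev/sdb                      # interactively create one partition (n, p, 1, defaults, w)
mkfs.ext4 /dev/sdb1                 # put an ext4 filesystem on the new partition
mkdir -p /mnt/tinylittledisk        # create the mount point

# Grab the filesystem UUID and add a UUID-based entry to /etc/fstab.
UUID=$(blkid -s UUID -o value /dev/sdb1)
echo "UUID=${UUID} /mnt/tinylittledisk ext4 defaults 0 0" >> /etc/fstab

# Mount it and drop some data on it so integrity can be checked after the import.
mount /mnt/tinylittledisk
dd if=/dev/urandom of=/mnt/tinylittledisk/testfile bs=1M count=100
md5sum /mnt/tinylittledisk/testfile
```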
Expected behavior

Imported vSphere multi-disk VMs that have UUIDs for disks utilized in /etc/fstab should not break.

Possible Workaround
The gist would be to mount a live iso and go through mounting all the points like:
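Roughly, that recovery looks something like the following from the live environment. This is a sketch only, assuming the imported root filesystem is `/dev/sda2` (per the note below) and that a standard bind-mount + chroot is enough:

```bash
# From a live ISO booted inside the Harvester VM: mount the imported root
# filesystem and chroot into it so /etc/fstab can be edited.
# /dev/sda2 as the root partition is an assumption taken from the comment below.
mount /dev/sda2 /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/bash
```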
But probably, after I got that all up and converted /etc/fstab to just /dev/sda2 instead of the UUID -> I probably didn't call sync on the disk. I just exited the chroot, which probably was the issue, because back on the screen now, through WebVNC, the VM in Harvester is still posting the warning that /dev/disk/by-uuid/UUID does not exist... So I imagine, maybe if that was re-done while ensuring to call "sync", maybe that would have helped? :shrug: idk. The data integrity however looks good, fwiw; md5sums check out on the files.
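If it were re-done, the sequence inside the chroot would presumably look something like this. Again just a sketch: the device path and the exact fstab line are assumptions.

```bash
# Inside the chroot: swap the stale UUID-based fstab entry for a plain device
# path (/dev/sdb1 here is a hypothetical name for the second disk), then flush
# writes before leaving the chroot so the edit actually lands on the virtual disk.
sed -i 's|^UUID=[^ ]* /mnt/tinylittledisk|/dev/sdb1 /mnt/tinylittledisk|' /etc/fstab
sync
exit            # leave the chroot

# Back in the live environment: tear down the bind mounts and reboot.
umount /mnt/dev /mnt/proc /mnt/sys
umount /mnt
reboot
```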
Environment

Additional context