Closed pcl-group-one closed 2 months ago
Thank you for the report. We are investigating. Can you let us know what your mount points look like on this server? Our pre-run checks should ensure that before starting leapp, you have the following disk space:
Mount points?
I have
/dev/sda5 1.8T 108G 1.7T 7% /
/dev/sda3 12G 33M 12G 1% /tmp
/dev/sda1 506M 146M 360M 29% /boot
/dev/sdb1 447G 125M 447G 1% /home2
But it's the images that leapp creates, mounted on loop devices, that need to be resized. Leapp itself does not appear to use that space, but dnf/yum does use it to calculate whether the install fits.
There is enough space 😊
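As an illustration only (not leapp's or dnf's actual code), the kind of fits-or-not check dnf performs against the scratch mount can be sketched in shell; the mount point and required size below are example values:

```shell
#!/bin/sh
# Sketch of a "does the install fit?" check against a mount point.
# MOUNT and REQUIRED_KIB are example values, not what leapp actually uses.
MOUNT=${1:-/}
REQUIRED_KIB=${2:-1048576}   # example requirement: 1 GiB expressed in KiB

# df -Pk prints POSIX-format output in 1 KiB blocks; column 4 is "Available"
avail_kib=$(df -Pk "$MOUNT" | awk 'NR==2 {print $4}')

if [ "$avail_kib" -ge "$REQUIRED_KIB" ]; then
    echo "fits: ${avail_kib} KiB available, ${REQUIRED_KIB} KiB required"
else
    echo "does not fit: ${avail_kib} KiB available, ${REQUIRED_KIB} KiB required"
fi
```

Run as `sh check.sh /var/lib/leapp/scratch/mounts/root_ 3000000` to test a specific mount against a specific requirement.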
From: Todd Rinaldo @.> Date: Tuesday, 21 May 2024 at 17.27 To: cpanel/elevate @.> Cc: Peter Larsen @.>, Author @.> Subject: Re: [cpanel/elevate] [BUG] missing space for stage 3 /var/lib/leapp/scratch/mounts/root_ (Issue #442)
Hi, so we've spoken to some of the people who develop leapp, and this is an issue in that code, which we only wrap around. I don't see that there's anything we can do here to fix the problem.
They are aware of the problem and are working to mitigate it in the future.
We actually had a similar problem reported through our channels. Leapp 0.16's disk space checks are neither very good nor precise. It turns out that more recent upstream versions have better analysis methods, so we backported those until we can fully switch to 0.19. We'll be releasing a version with a fix this week.
Describe the bug Missing space for stage 3 /var/lib/leapp/scratch/mounts/root_, making it impossible to complete stage 3 of a CloudLinux 7 to CloudLinux 8 elevation.
To Reproduce 1) yum reports space needed:
2) available space is: /dev/loop3 2.9G 388M 2.3G 15% /var/lib/leapp/scratch/mounts/root_
3) yum fails with
4) elevate stops
Expected behavior Enough disk space on the loop device for the install, preferably with a configuration or drop-in file option to control the image size.
Additional context workaround:
Execute this at exactly the right moment while elevate is running, so the disk image will be big enough:
sh executeloopresize.sh
[content of file executeloopresize.sh]
# Append 2000 MiB of zeros to the leapp root disk image without truncating it
dd if=/dev/zero bs=1MiB of=/var/lib/leapp/scratch/diskimages/root_ conv=notrunc oflag=append count=2000
# Tell the kernel that the backing file of /dev/loop3 has grown
losetup -c /dev/loop3
# Grow the ext filesystem to fill the enlarged device
resize2fs /dev/loop3
[/ content ]
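Timing the workaround by hand is fragile; it could in principle be automated. The following is an untested sketch that assumes leapp attaches the scratch image to /dev/loop3 (this may differ on your system) and that the script runs as root. It defines the steps as functions and only acts when invoked with `run`:

```shell
#!/bin/sh
# Hypothetical automation of the manual workaround above.
# Assumptions: /dev/loop3 is the device leapp uses, script runs as root.

IMG=/var/lib/leapp/scratch/diskimages/root_
LOOPDEV=/dev/loop3
EXTRA_MIB=2000

wait_for_loop() {
    # Poll until leapp has attached the scratch image to the loop device;
    # plain "losetup DEV" prints status and fails if DEV is not configured
    while ! losetup "$LOOPDEV" >/dev/null 2>&1; do
        sleep 5
    done
}

grow_image() {
    # Append zeros to the backing file without truncating the existing image
    dd if=/dev/zero bs=1MiB of="$IMG" conv=notrunc oflag=append count="$EXTRA_MIB"
    # Tell the kernel the backing file has grown
    losetup -c "$LOOPDEV"
    # Grow the ext filesystem online to fill the enlarged device
    resize2fs "$LOOPDEV"
}

# Only act when invoked as "sh executeloopresize.sh run",
# so sourcing or inspecting the file has no side effects
if [ "$1" = "run" ]; then
    wait_for_loop
    grow_image
fi
```

Start it in the background before launching elevate (`sh executeloopresize.sh run &`) so the resize happens as soon as the loop device appears.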