senhalil opened this issue 2 years ago
Thanks for your feedback. Yes, we do need to spend some time to support LUKS as the source device when saving an image. For the moment, you can only use partclone to save it manually after you have unlocked it.
Steven
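For anyone who needs that manual route in the meantime, a minimal sketch might look like the following. The partition /dev/nvme0n1p3, the mapping name nvme0n1p3_crypt, the volume group vgubuntu, and the image path under /home/partimag are illustrative assumptions, not something Clonezilla sets up for you:
# Unlock the LUKS container (prompts for the passphrase).
sudo cryptsetup open /dev/nvme0n1p3 nvme0n1p3_crypt
# If the container holds LVM, activate the volume group so the LVs show up.
sudo vgchange -ay vgubuntu
# Save the unlocked filesystem with partclone (ext4 in this example).
sudo partclone.ext4 -c -s /dev/vgubuntu/root -o /home/partimag/vgubuntu-root.img
# Deactivate and close everything again when done.
sudo vgchange -an vgubuntu
sudo cryptsetup close nvme0n1p3_crypt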
Thanks for your reply @stevenshiau ! I have zero experience with the Clonezilla source code, but with some guidance I think I might be able to help (I am basing this on the fact that it is possible to accomplish via the CLI, so I expect it would be possible to do it automatically via the TUI). Let me know if you want me to help; in that case I would need some pointers (which files, and approximately where to add the logic). If not, you can close the ticket, unless you want to keep it open as a reminder.
Finally we have implemented a better mechanism to save and restore LUKS devices. Please give Clonezilla live >= 3.0.0-2 or 20220204-* a try: https://clonezilla.org/downloads.php //NOTE// This is the first LUKS support in Clonezilla, so bugs are expected. Please back up important data before you try it, and let us know the results. Thanks.
Steven
@stevenshiau , not sure if I should hijack this topic. I tried the LUKS support but ran into issues: https://github.com/stevenshiau/clonezilla/issues/78
I was able to clone the disk, but had to resize the LVM manually
sudo pvresize /dev/mapper/nvme0n1p3_crypt
sudo lvextend -l +100%FREE /dev/mapper/vgubuntu-root
sudo resize2fs /dev/mapper/vgubuntu-root
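As a quick sanity check after running those commands (a sketch, assuming the same vgubuntu and nvme0n1p3_crypt names as above), the new sizes can be verified with:
# The PV should now span the whole unlocked LUKS partition.
sudo pvs /dev/mapper/nvme0n1p3_crypt
# The VG should show (almost) no free extents, and the root LV the enlarged size.
sudo vgs vgubuntu
sudo lvs vgubuntu
# The mounted filesystem size as the kernel sees it.
df -h /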
@stevenshiau , not sure if I should hijack this topic. I tried the LUKS support but ran into issues: #78
OK, have you tried a newer Clonezilla live? Does this issue still remain?
Steven
I was able to clone the disk, but had to resize the LVM manually
sudo pvresize /dev/mapper/nvme0n1p3_crypt
sudo lvextend -l +100%FREE /dev/mapper/vgubuntu-root
sudo resize2fs /dev/mapper/vgubuntu-root
Which version of Clonezilla live did you use? Thanks.
Steven
Latest on the website: stable - 3.1.0-22
Adding to the above,
I just used the "beginner" mode and defined the src/dst drives
I'm using Ubuntu 22 with LUKS (the installer sets up the LVM)
Maybe you can try Clonezilla live >= 3.1.1-27, and choose the option "-k1". It should do the LV resizing for you: https://clonezilla.org//clonezilla-live/doc/02_Restore_disk_image/images/ocs-10-2-1-fdisk-opt.png Ref: https://clonezilla.org//fine-print-live-doc.php?path=clonezilla-live/doc/02_Restore_disk_image
Steven
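For reference, the restore command reported later in this thread uses exactly that flag; a sketch of such an invocation, reusing the image name and target disk from that report, with flag meanings as described on Clonezilla's option screens:
# -k1 is the option Steven refers to above (create the partition table on the
# target disk proportionally); -r asks ocs-sr to resize the filesystem to fit
# the target partition after restoring.
sudo /usr/sbin/ocs-sr -g auto -e1 auto -e2 -r -j2 -k1 -p choose restoredisk 2024-02-16-08-img nvme0n1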
I experienced the same results as @Lusitaniae
Using clonezilla-live-3.1.2-9-amd64, I tried part mode but didn't succeed, so I used disk mode: first I saved the disk to a USB key, then I restored from the USB key to the new disk.
The disk structure is as follows, 3 partitions:
nvme0n1p1 vfat 512M
nvme0n1p2 ext4 732M
nvme0n1p3 crypto_LUKS 231.7G
The LUKS partition contains LVM. The LVM contains:
/dev/vgubuntu/root ext4 230.7G
/dev/vgubuntu/swap_1 swap 976M
The result was the same as above: the LVs were not resized automatically, so I had to run:
pvresize /dev/mapper/nvme0n1p3_crypt
lvextend -l +100%FREE /dev/mapper/vgubuntu-root
resize2fs /dev/mapper/vgubuntu-root
Here are some relevant log files :
more Info-saved-by-cmd.txt blkdev.list dev-fs.list lvm*
::::::::::::::
Info-saved-by-cmd.txt
::::::::::::::
/usr/sbin/ocs-sr -luks yes -q2 -c -j2 -z9p -i 4096 -sfsck -scs -senc -p choose savedisk 2024-02-16-08-img nvme0n1
::::::::::::::
blkdev.list
::::::::::::::
KNAME NAME SIZE TYPE FSTYPE MOUNTPOINT MODEL
loop0 loop0 351.3M loop squashfs /usr/lib/live/mount/rootfs/filesystem.squashfs
sda sda 2.7T disk ST3000DM001-1CH166
sda1 `-sda1 2.7T part ext4
sdb sdb 114.6G disk SanDisk 3.2Gen1
sdb1 |-sdb1 114.6G part exfat
sdb2 `-sdb2 32M part
sdc sdc 116.6G disk SanDisk 3.2 Gen1
sdc1 `-sdc1 116.6G part exfat /home/partimag
sdd sdd 0B disk USB SD Reader
sde sde 0B disk USB CF Reader
sdf sdf 0B disk USB SM Reader
sdg sdg 0B disk USB MS Reader
nvme0n1 nvme0n1 232.9G disk Samsung SSD 970 EVO Plus 250GB
nvme0n1p1 |-nvme0n1p1 512M part vfat
nvme0n1p2 |-nvme0n1p2 732M part ext4
nvme0n1p3 `-nvme0n1p3 231.7G part crypto_LUKS
::::::::::::::
dev-fs.list
::::::::::::::
# <Device name> <File system> <Size>
# File system is got from ocs-get-dev-info. It might be different from that of blkid or parted.
/dev/nvme0n1p1 vfat 512M
/dev/nvme0n1p2 ext4 732M
/dev/vgubuntu/root ext4 230.7G
/dev/vgubuntu/swap_1 swap 976M
::::::::::::::
lvm_logv.list
::::::::::::::
/dev/vgubuntu/root Linux rev 1.0 ext4 filesystem data, UUID=2cd216b8-e8a8-4f11-98ea-8b0263b67c60 (extents) (64bit) (large files) (huge files)
/dev/vgubuntu/swap_1 Linux swap file, 4k page size, little endian, version 1, size 249855 pages, 0 bad pages, no label, UUID=2cdf620b-e59d-4ae2-b626-665495fc8059
::::::::::::::
lvm_vg_dev.list
::::::::::::::
vgubuntu /dev/mapper/nvme0n1p3_crypt dZOM7s-pBAY-2t8p-jU7r-STvB-OqaW-QVSV4v
::::::::::::::
lvm_vgubuntu.conf
::::::::::::::
# Generated by LVM2 version 2.03.16(2) (2022-05-18): Fri Feb 16 08:04:10 2024
contents = "Text Format Volume Group"
version = 1
description = "vgcfgbackup -f /tmp/vgcfg_tmp.7ubpAK vgubuntu"
creation_host = "debian" # Linux debian 6.6.11-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.11-1 (2024-01-14) x86_64
creation_time = 1708070650 # Fri Feb 16 08:04:10 2024
vgubuntu {
    id = "SwaY95-FDwn-7TTy-8DSC-Eikm-ZZkj-RM87XI"
    seqno = 3
    format = "lvm2" # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192 # 4 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0
    physical_volumes {
        pv0 {
            id = "dZOM7s-pBAY-2t8p-jU7r-STvB-OqaW-QVSV4v"
            device = "/dev/mapper/nvme0n1p3_crypt" # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 485814272 # 231.654 Gigabytes
            pe_start = 2048
            pe_count = 59303 # 231.652 Gigabytes
        }
    }
    logical_volumes {
        root {
            id = "F7Nkvi-7QU5-hmNg-U2wg-87BH-Dj9l-mgycZm"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1631034543 # 2021-09-07 17:09:03 +0000
            creation_host = "ubuntu"
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 59050 # 230.664 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 0
                ]
            }
        }
        swap_1 {
            id = "cuB6jw-72u4-iHjN-hp9p-0NJW-LFOI-U3m7Sc"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1631034543 # 2021-09-07 17:09:03 +0000
            creation_host = "ubuntu"
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 244 # 976 Megabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 59050
                ]
            }
        }
    }
}
Here is a zipped file with all the text files from the dump : https://drive.google.com/file/d/1mDlaD6d-8tuIaxJkNqOlnXV5PBpdLJ3-/view?usp=sharing
Here is the restore command line :
/usr/sbin/ocs-sr -g auto -e1 auto -e2 -r -j2 -c -k1 -scr -icds -p choose restoredisk 2024-02-16-08-img nvme0n1
Hope this helps.
Did you keep /var/log/clonezilla.log after your restore command was run? It should give us some clues.
Steven
Did you keep /var/log/clonezilla.log after your restore command was run? It should give us some clues.
No, I ran Clonezilla from a USB disk and didn't think to save the log file, sorry.
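For future reports, one way to keep that log (a sketch; /home/partimag is where Clonezilla normally mounts the image repository, and the exact set of log files can vary between versions) is to copy it off the live system before rebooting:
# From the Clonezilla live shell, right after the restore finishes:
sudo cp /var/log/clonezilla.log /home/partimag/
# partclone writes its own log next to it, which can also be useful.
sudo cp /var/log/partclone.log /home/partimag/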
Hello, I managed to manually clone my partitions, but I would like to ask if there are any plans to offer this option in clonezilla-live. Since activating encryption on Ubuntu creates LVM on LUKS by default, I suspect there are many users who would benefit from such a utility.
I tried to unlock and mount the volume group and then re-launch Clonezilla, but Clonezilla unmounts such partitions automatically; in the TUI only the physical partitions are shown, and the LUKS partition appears as one single partition.
This is an issue because, even if the partition is mostly empty, due to LUKS protection the image takes up a huge amount of space and dd takes a very long time. A ~400 GiB LUKS partition with only ~10 GiB of real data takes a good part of the day to clone via the default settings (TUI), while unlocking the LUKS container and cloning the LVM partitions manually via partclone.ext4 takes up only ~4 GiB of space and the image generation takes a few minutes (<5 minutes).
Thanks for this great utility, by the way. Cheers!
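For completeness, restoring such a manually saved image is roughly the reverse of the save sketch given earlier (again only a sketch; the device, mapping, volume group, and image names are assumptions, and the target LUKS container plus LVM layout must already exist):
# Unlock the target LUKS container and activate its volume group.
sudo cryptsetup open /dev/nvme0n1p3 nvme0n1p3_crypt
sudo vgchange -ay vgubuntu
# Restore the partclone image into the root LV.
sudo partclone.ext4 -r -s /home/partimag/vgubuntu-root.img -o /dev/vgubuntu/root
# Deactivate and close when finished.
sudo vgchange -an vgubuntu
sudo cryptsetup close nvme0n1p3_crypt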