openshift / assisted-service


save-partlabel and save-partindex coreos-installer arguments are not honored because the partition is formatted previously #3664

Closed: alosadagrande closed this issue 2 years ago

alosadagrande commented 2 years ago

The arguments that can be set to preserve a partition on the installation disk are not honored, because the installation disk is formatted here, before the coreos-installer utility runs, and a disk performance test is then executed.

From the assisted-service side I can see that the arguments are passed properly to the coreos-installer binary:

Apr 13 09:20:07 snonode.virt01.eko4.cloud.lab.eng.bos.redhat.com installer[8571]: time="2022-04-13T09:20:07Z" level=info msg="Writing image and ignition to disk with arguments: [install --insecure -i /opt/openshift/master.ign --image-url http://10.19.140.20/rhcos-4.10.3-x86_64-metal.x86_64.raw.gz --save-partlabel data --append-karg ip=ens3:dhcp /dev/vda]"

but the partition is not saved to /dev/vda5 as expected:

[root@snonode ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  15.7G  0 loop /run/ephemeral
loop1    7:1    0 882.7M  0 loop /sysroot
sr0     11:0    1   102M  0 rom  
vda    252:0    0   120G  0 disk 
├─vda1 252:1    0     1M  0 part 
├─vda2 252:2    0   127M  0 part 
├─vda3 252:3    0   384M  0 part 
└─vda4 252:4    0   3.3G  0 part 
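
As a side note, a quick way to confirm whether the partition label survived. This is only a hedged sketch: it assumes the partition to keep carries the GPT label "data" and would normally appear as /dev/vda5, as described above.

# Print GPT partition labels alongside the usual lsblk columns
lsblk -o NAME,SIZE,TYPE,PARTLABEL /dev/vda
# Probe the expected device node; the fallback message is only illustrative
blkid /dev/vda5 || echo "vda5 no longer exists"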

It works fine when I run coreos-installer with the same arguments manually from the command line (see the example after the help excerpt below). I also suspect that clearing the device is something coreos-installer itself can do, judging by its help output:

        --preserve-on-error         
            Don't clear partition table on error

            If installation fails, coreos-installer normally clears the destination's partition table to prevent booting from invalid boot media.  Skip clearing the partition table as a
            debugging aid.
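
For reference, a sketch of the manual invocation that works; it simply reuses the arguments from the installer log above (device, image URL, ignition path, kernel argument), so anything that differs from your environment should be treated as an assumption:

# Manual run of coreos-installer with the same arguments the installer passed;
# this preserves the partition labeled "data" as long as the disk has not been
# wiped beforehand.
coreos-installer install --insecure \
    -i /opt/openshift/master.ign \
    --image-url http://10.19.140.20/rhcos-4.10.3-x86_64-metal.x86_64.raw.gz \
    --save-partlabel data \
    --append-karg ip=ens3:dhcp \
    /dev/vda

--save-partindex offers the same behaviour keyed on the partition position instead of the label; whether 5 would be the right index here is only an assumption based on the expected /dev/vda5 layout mentioned above.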
openshift-bot commented 2 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

mkowalski commented 2 years ago

Hi, we are currently working on a few stories related to partition tables. Please ping us on internal Slack if this is still an issue.

openshift-bot commented 2 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

openshift-bot commented 2 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci[bot] commented 2 years ago

@openshift-bot: Closing this issue.

In response to [this](https://github.com/openshift/assisted-service/issues/3664#issuecomment-1260532102):

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting `/reopen`. Mark the issue as fresh by commenting `/remove-lifecycle rotten`. Exclude this issue from closing again by commenting `/lifecycle frozen`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.