ClementCastel opened this issue 8 months ago (status: Open)
same problem
So I'm not sure if either or both of you managed to solve your issues (if so, please report back), or if you did what I did and just took one of the default installs to get the server up and running. Anyway, I tinkered with it and here's how I got it working:
1) Bring your own image - the simpler option:
Build the image yourself and use the resulting qcow2 as the image to provide. The install should go fine and the server should boot into Arch.
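A minimal sketch of one way to build such a qcow2 locally (assumptions: QEMU is installed on your workstation, and archlinux.iso stands in for whatever installer you use):
# Create an empty qcow2 and install into it from a local VM
qemu-img create -f qcow2 arch.qcow2 20G
qemu-system-x86_64 -enable-kvm -m 4G \
  -drive file=arch.qcow2,format=qcow2 \
  -cdrom archlinux.iso -boot d
# once the install in the VM is done, provide arch.qcow2 as the image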
2) Bring your own Linux - a bit more complicated, since OVH doesn't document how the partitioning works, but here you go: when using the manager interface, just use the default partitions /boot and /. They will get mounted as such; additionally, the install environment creates an ESP and mounts it to /boot/efi. It looks like this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 1 0B 0 disk
sdb 8:16 1 0B 0 disk
sdc 8:32 1 0B 0 disk
sdd 8:48 1 0B 0 disk
sde 8:64 1 0B 0 disk
sdf 8:80 1 0B 0 disk
sdg 8:96 1 0B 0 disk
sdh 8:112 1 0B 0 disk
nbd0 43:0 0 0B 0 disk
nbd1 43:32 0 0B 0 disk
nbd2 43:64 0 0B 0 disk
nbd3 43:96 0 0B 0 disk
nbd4 43:128 0 0B 0 disk
nbd5 43:160 0 0B 0 disk
nbd6 43:192 0 0B 0 disk
nbd7 43:224 0 0B 0 disk
nvme0n1 259:0 0 476.9G 0 disk
|-nvme0n1p1 259:6 0 511M 0 part /boot/efi
|-nvme0n1p2 259:7 0 1G 0 part /boot
|-nvme0n1p3 259:10 0 475.4G 0 part /
`-nvme0n1p4 259:11 0 2M 0 part
nvme1n1 259:1 0 476.9G 0 disk
|-nvme1n1p2 259:2 0 100G 0 part
`-nvme1n1p3 259:3 0 375.9G 0 part
nbd8 43:256 0 0B 0 disk
nbd9 43:288 0 0B 0 disk
nbd10 43:320 0 0B 0 disk
nbd11 43:352 0 0B 0 disk
nbd12 43:384 0 0B 0 disk
nbd13 43:416 0 0B 0 disk
nbd14 43:448 0 0B 0 disk
nbd15 43:480 0 0B 0 disk
/dev/nvme0n1p3 on / type ext4 (rw,relatime)
/dev/nvme0n1p2 on /boot type ext4 (rw,relatime)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=16145696k,nr_inodes=4036424,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,size=32612656k)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run type tmpfs (rw,nosuid,noexec,size=102400k,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=3261264k,mode=700)
The difference in the GRUB installation step then becomes:
#!/bin/bash
# --removable installs GRUB to the fallback path EFI/BOOT/BOOTX64.EFI;
# --no-nvram skips writing a UEFI boot entry (efivars may be unavailable here)
grub-install --target=x86_64-efi --efi-directory=/boot/efi --no-nvram --removable
grub-mkconfig -o /boot/grub/grub.cfg
Overall, the current documentation lacks a lot of required information and a proper step-by-step guide. Another option is just to use one of the default templates like Debian 12 to get the server bootstrapped, log in via SSH, and add a boot entry to run an install yourself, as sketched below.
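A rough sketch of that route (assumptions: the Debian 12 template is already booted, /boot is its own partition so GRUB sees the staged files at its root, and the URLs are the stock Debian bookworm netboot paths):
# Stage the Debian installer kernel/initrd on the boot partition
cd /boot
wget http://deb.debian.org/debian/dists/bookworm/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
wget http://deb.debian.org/debian/dists/bookworm/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz
# Append a custom GRUB entry and regenerate the config
cat >> /etc/grub.d/40_custom <<'EOF'
menuentry "Debian installer" {
    linux  /linux
    initrd /initrd.gz
}
EOF
grub-mkconfig -o /boot/grub/grub.cfg
# reboot and pick "Debian installer" from the KVM console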
Going back to the grub-install above: one could also run it without --no-nvram but with --bootloader-id= instead, which writes a named entry into the UEFI boot variables.
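That variant would look roughly like this ("MyLinux" is just a placeholder ID; it needs efivars accessible in the install environment so GRUB can write the boot entry):
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=MyLinux
grub-mkconfig -o /boot/grub/grub.cfg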
Side note to OVH staff: I'm aware that your job is to keep an eye on your servers, but if you give us the option to do such installs without proper documentation, please train your staff NOT TO INTERFERE while WE as the customers are actively monitoring the setup via the KVM console. I had to start over twice because your staff killed the install before I could analyze what went wrong. So either update the docs, or don't mess with the system while the customer is already on it.
If you have a system with two or more drives, be aware that the install environment will kill ANY ESP it finds on ALL drives. Restoring an already installed system on another drive can be a bit tricky, as re-creating an ESP also changes its serial/UUID, so you have to check your /etc/fstab and maybe the initrd.
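To catch that, compare the re-created ESP's identifiers against what the installed system references (the device name below is an example from the layout above):
blkid /dev/nvme0n1p1          # UUID of the new ESP
grep /boot/efi /etc/fstab     # what the installed system expects
# if they differ, fix /etc/fstab and rebuild the initrd if it embeds the UUID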
@ClementCastel Unfortunately, the output of make_image_bootable.sh can't be checked via IPMI. However, after an unsuccessful install you can reboot into the rescue image and chroot into the partially installed system.
For default partitions (boot and root partitions both in RAID1), e.g.:
# Mount the installed system plus the pseudo-filesystems, then chroot into it
mkdir -p /mnt
mount /dev/md3 /mnt                    # root filesystem (RAID1)
mkdir -p /mnt/{boot,proc,sys,dev/pts}
mount /dev/md2 /mnt/boot               # boot filesystem (RAID1)
mount /dev/nvme0n1p1 /mnt/boot/efi     # ESP (per-drive, not part of the RAID)
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -o bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /run /mnt/run
chroot /mnt /bin/bash
Then you can run /root/.ovh/make_image_bootable.sh directly and check the output.
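For example, running it with shell tracing and capturing a log makes the failure point easier to spot (/tmp/mib.log is an arbitrary choice):
bash -x /root/.ovh/make_image_bootable.sh 2>&1 | tee /tmp/mib.log
less /tmp/mib.log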
There might be several issues:
- DNS resolution inside the chroot: you may need to add
echo "nameserver 1.1.1.1" > /etc/resolv.conf
at the beginning of make_image_bootable.sh
- a missing mdadm package (the script assumes it's always installed)
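A quick sketch of working around both from inside the chroot (assuming a Debian/Ubuntu image; swap in your distro's package manager otherwise):
echo "nameserver 1.1.1.1" > /etc/resolv.conf   # temporary DNS inside the chroot
apt-get update && apt-get install -y mdadm     # the script assumes mdadm exists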
@ClementCastel @deric @lolo6oT Hello folks, I see your problems with the images and I can provide some answers.
/root/.ovh/make_image_bootable.sh is mandatory. The BYOLinux workflow works exactly like the basic install workflow we use every day for OVH-made images. We are aware of the problem of not having the script's output when it fails; we are trying to find a way to provide it, but it's not really easy.
As for mdadm: in the example, it's installed in the pre-install script run within Packer.
FYI, the OVH DNS servers:
cdns.ovh.net has address 213.186.33.99
cdns.ovh.net has IPv6 address 2001:41d0:3:163::1
To @n0xena, sorry, I thought the documentation (README.md) was clear enough.
Hello! The new BringYourOwnLinux feature seems very interesting, so I wanted to test it. However, I followed the instructions in the README, but it always crashes at the "Configuring Boot" stage.
Here are the different tests I made:
- Debian 12, without the make_image_bootable.sh script
- Debian 12, with the make_image_bootable.sh script (the one supplied)
- Debian 11, with and without the script (since the script's comments state that it is intended for Debian 11, I preferred to test both)
- Ubuntu 22.04 server cloudimg, with and without the script
My aim is to get the feature working with a custom Ubuntu 22.04 image.
Here is the hardware configuration of the dedicated server:
Please let me know if the problem is on my side or yours. If you need any further information, please do not hesitate to contact me.