Describe the bug
This is my first ever bug report, so please tell me how I can improve it. This is potentially a duplicate of #21.
While creating an array (RAID-5 in this case), the role runs correctly, including the `arrays | Updating Initramfs` task. The array works correctly at first. However, after a reboot, the OS fails to mount the array because it has been automatically renamed to md127.
Running `update-initramfs -u` manually and rebooting fixes the issue. Running this command after the role in Ansible also seems to work (see below).
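Concretely, the manual workaround looks like this (a sketch; `/etc/mdadm/mdadm.conf` is Debian's default path, and since rebuilding alone fixes it, the ARRAY line written by the role appears to already be on disk):

```sh
# Manual workaround, run on the VM after the role has created the array.
grep ^ARRAY /etc/mdadm/mdadm.conf   # the array definition is already present on disk
update-initramfs -u                 # refresh the stale copy embedded in the initramfs
reboot                              # the array now comes back up as /dev/md0
```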
To Reproduce
Steps to reproduce the behavior:
1. Fire up a VM with 1 disk for the system and 3 disks for storage; install Debian.
2. Run the role from this repo. As an example, I had the following play and variables:
```yaml
- name: Raid-5 configuration
  hosts: all # one single host
  become: yes
  tasks:
    - name: Include mdadm role
      include_role:
        name: mrlesmithjr.mdadm
    # - name: Update initramfs (bis)
    #   command: "update-initramfs -u"
    #   when: array_created.changed
```
(When the three commented lines are uncommented, `update-initramfs` is run a second time, and things work as expected even after a reboot.)
3. Reboot the VM.
Expected behavior
The RAID array should keep the same name and therefore be mounted without any problem.
Actual behavior
At boot, the VM displays something like:

```
[ TIME ] Timed out waiting for device dev-md0.device - /dev/md0.
[DEPEND] Dependency failed for srv-md0.mount - /srv/md0.
[DEPEND] Dependency failed for local-fs.target - Local File Systems.
```
The system boots into emergency mode, and the `lsblk` command shows that the array is now called md127.
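From the emergency shell, the array itself appears intact; only its name changed. A quick way to confirm (commands illustrative, based on what I observed):

```sh
# Inside the emergency shell: the array is assembled, just under the wrong name.
cat /proc/mdstat            # md127 is active with the three member disks
mdadm --detail /dev/md127   # same members and level, different device name
```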
Additional context
Note that when running the playbook, the `arrays | Updating Initramfs` task from this role seems to work (stdout is the usual "Generating /boot/..." and stderr is empty). But somehow, it needs to be run a second time to actually take effect.
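My guess (unverified) is that the role rebuilds the initramfs before `/etc/mdadm/mdadm.conf` reflects the new array, so the copy baked into the image lacks the ARRAY line and mdadm falls back to the md127 name for an unrecognized array. One way to check, assuming Debian's initramfs-tools (which provides `unmkinitramfs`):

```sh
# Hypothesis check: compare the on-disk config with the copy inside the initramfs.
grep ^ARRAY /etc/mdadm/mdadm.conf
unmkinitramfs /boot/initrd.img-"$(uname -r)" /tmp/initrd
# Depending on the image layout, files land under /tmp/initrd/main/ or at the top level.
grep -r ^ARRAY /tmp/initrd
# If the embedded mdadm.conf lacks the ARRAY line, the image was built
# before the config was written, which would explain the rename to md127.
```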