Thanks for posting the files mentioned in the FAQ. The basic problem here is that the M1015 is bringing drives online after the ZFS driver is loaded. (The upstream ticket is zfsonlinux/zfs#330.)
On this computer, the ZoL driver is in the initrd because the system has a ZFS root. This means that the mpt2sas driver must also be in the initrd. Do this:
Open the /etc/initramfs-tools/modules file in a text editor.
Add an mpt2sas line and save the file.
Run update-initramfs -c -k all.
Afterwards, in the kernel log, check that all lines like this:
sd 6:0:3:0: [sdf] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
happen before this line:
ZFS: Loaded module v0.6.0.80-rc11, ZFS pool version 28, ZFS filesystem version 5
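For reference, the same change can be made from a root shell, roughly like this:

echo mpt2sas >> /etc/initramfs-tools/modules   # append the module name to the initramfs module list
update-initramfs -c -k all                     # rebuild the initrd for every installed kernel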
The mpt2sas driver now loads before the ZFS module, but it will still initialize the drives after the ZFS module is loaded. Is there a way I can delay loading the ZFS module?
The symptoms after booting are now as follows:
Is there a reason why mountall succeeds when it is run manually after boot, but fails when run in the init sequence? Looking at the logs, there is a 10 second delay between all drives being online and available and mountall being run.
Updated dmesg: https://www.dropbox.com/s/nxr7bjc2hm66nv0/dmesg2.txt
Calling mountall from rc.local works around the problem.
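For reference, the workaround is nothing more than an extra mountall call in /etc/rc.local, along these lines:

#!/bin/sh -e
# /etc/rc.local: re-run mountall once the controller has finished bringing the drives online
mountall
exit 0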
The mpt2sas driver now loads before the ZFS module, but it will still initialize the drives after the ZFS module is loaded. Is there a way I can delay loading the ZFS module?
Put a sleep 60 line above the wait_for_udev line in the /usr/share/initramfs-tools/scripts file, run update-initramfs -c -k all, and reboot.
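A minimal sketch of that edit, assuming the wait_for_udev call is already present in the script:

# give the mpt2sas controller time to bring all drives online before udev settles
sleep 60
wait_for_udev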
Is there a reason why mountall succeeds when it is run manually after boot, but fails when run in the init sequence?
ZoL doesn't handle hotplug events, per zfsonlinux/zfs#330, and it interacts poorly with many mpt2sas implementations. The IBM M1015 itself could also be dud hardware; it appears in many bug reports like dajhorn/pkg-zfs#53.
Thanks, that fixed it.
I have a fresh installation of Ubuntu Precise with a ZFS root pool. Booting works without any errors. A second data pool in a 4x3TB RAIDZ2 configuration, attached to an IBM M1015 (in LSI 9211 IT mode), fails to mount after booting the system.
Importing the pool only works with -f. Running mountall also only mounts the pool after it has been imported once with -f. I followed the FAQ and tried recreating zpool.cache as well as inserting a delay into mountall.conf, but neither changed anything. I also tried removing the root directory where the filesystems are mounted, to make sure it is empty. Looking at the logs, the ZFS module is always initialized before all hard drives come online.
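For reference, the cache-recreation step from the FAQ amounts to something like this (the pool name tank is a placeholder for my data pool):

# forced import is the only way the pool comes online
zpool import -f tank
# rewrite /etc/zfs/zpool.cache so the import is remembered at boot
zpool set cachefile=/etc/zfs/zpool.cache tank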
dmesg: https://www.dropbox.com/s/qiqu93i8wkth96i/dmesg.txt
fstab: https://www.dropbox.com/s/t45cnhkqvtj0ljp/fstab.txt
kern.log: https://www.dropbox.com/s/6t13p17dds32lr5/kern.txt
Note that in these logs the delay is rather short; I experimented with different values. However, even with a delay of 60 seconds or more, nothing changes. This is the initialization with a longer delay:
zd0 is my swap device, so mountall runs at this time. If you need the output of mountall --verbose, I can provide it via mail.
Here is the ZFS status: