echo... now I have this:
  pool: segment
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: resilver completed with 0 errors on Tue Jan 18 03:36:46 2011
config:

        NAME                     STATE     READ WRITE CKSUM
        segment                  DEGRADED     0     0     0
          raidz1                 DEGRADED     0     0     0
            disk3s2              ONLINE       0     0     0
            3906703920274957285  FAULTED      0     0     0  was /dev/disk4s2
            disk4s2              ONLINE       0     0     0

errors: No known data errors
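For reference, the replacement that the action line points at would look roughly like this (a sketch only: the pool name and GUID come from the output above, while the target path is hypothetical and should be whatever the re-attached disk actually appears as):

    # replace the faulted member, referenced by its GUID, with the re-attached device
    sudo zpool replace segment 3906703920274957285 /dev/disk5s2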
Original comment by G.Veniamin
on 17 Jan 2011 at 8:50
This may indicate a faulty disk. Is it new? Is it the same physical disk in
both cases? If it's a USB drive, is it bus powered or externally powered? I've
seen problems where the power output isn't enough to supply all the disks.
Original comment by alex.ble...@gmail.com
on 20 Jan 2011 at 1:08
They're TrueCrypt disks. I made the pool with 3 virtual encrypted disks.
Original comment by G.Veniamin
on 20 Jan 2011 at 8:13
Are the TrueCrypt disks individual disks on the backend, or do they live on another
filesystem? You could end up with a race condition at shutdown where the TrueCrypt
disks are unmounted before zfs can unmount.
Note also that the 77.0.9 bits have a bug in the zfs unmount which could be
implicated in this issue.
Try seeing if unmounting zfs first, followed by the TrueCrypt unmount, solves the
problem on the next reboot.
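A shutdown sequence along these lines might avoid that race (a sketch only: the pool name comes from the status output above, and the TrueCrypt binary path assumes a standard OS X install; --dismount with no volume argument should dismount all mounted TrueCrypt volumes):

    # export the pool first, while its backing devices still exist
    sudo zpool export segment
    # then dismount all TrueCrypt volumes
    /Applications/TrueCrypt.app/Contents/MacOS/TrueCrypt --dismount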
Original comment by alex.ble...@gmail.com
on 20 Jan 2011 at 12:29
I had this problem with old releases of ZFS as well. The TrueCrypt disks are located
on an HFS+ file system.
My mounting script is attached.
Original comment by G.Veniamin
on 21 Jan 2011 at 7:37
sh script
Original comment by G.Veniamin
on 21 Jan 2011 at 7:37
Attachments:
Ok, I suspect this isn't a zfs bug but rather a race condition between when TrueCrypt
mounts the first device and when zfs notices it. The problem is that the first mount
will trigger pool loading, which will then look for the other devices, which won't
exist at that point in time.
Basically this is unlikely to work or be stable in practice, so I strongly
recommend against doing it.
Original comment by alex.ble...@gmail.com
on 21 Jan 2011 at 8:06
Perhaps I can add a sleep before loading the zfs kext? Or not autoload the kext at all?
Original comment by G.Veniamin
on 21 Jan 2011 at 8:13
I suggest that the best way around this is to not have ZFS try and probe the
disks at all. You may be able to do this by removing the
/System/Library/Filesystems/zfs.fs contents, which is what OSX will use to try
and probe disks when they come on-line. You should still be able to use the
zpool import command to bring the disk up, but it may not work.
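Something along these lines, for example (a sketch only: it moves the probe bundle aside rather than deleting its contents, so the change can be reversed; the pool name comes from the report above):

    # stop OS X from probing ZFS labels when new disks attach
    sudo mv /System/Library/Filesystems/zfs.fs /System/Library/Filesystems/zfs.fs.disabled
    # later, once every TrueCrypt volume is mounted, import the pool by hand
    sudo zpool import segment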
The alternative is to format the TrueCrypt disks as taking the entire disk,
rather than following the instructions on the Wiki to create it on a partition.
That way, OSX won't notice that it's a ZFS disk and therefore won't try to mount it
until after all your TrueCrypt disks are up.
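In that layout the pool members would be whole (virtual) devices rather than sN slices, e.g. (a sketch only: the device numbers are hypothetical and depend on attach order, and zpool create builds a new, empty pool, so this only applies when rebuilding from scratch):

    # create the raidz on whole TrueCrypt devices instead of partitions
    sudo zpool create segment raidz /dev/disk3 /dev/disk4 /dev/disk5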
Since this isn't really an issue with ZFS, I'm going to close this bug as
WontFix. I hope the above suggestions are useful to you.
Original comment by alex.ble...@gmail.com
on 16 Feb 2011 at 9:22
Original issue reported on code.google.com by
G.Veniamin
on 17 Jan 2011 at 6:35