openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

ZVOL not showing up after reboot #599

Closed. lembregtse closed this issue 11 years ago

lembregtse commented 12 years ago

I've got a weird problem with some of my zvols.

My main pool is 'data'.

I created a 'disks' dataset with 'zfs create data/disks'.

Then I created two zvols with 'zfs create -V 10G data/disks/router-root' and 'zfs create -V 1G data/disks/router-swap'.

These devices show up in /dev/zd... and /dev/zvol/data/disks/...

Now when I reboot my host machine, the /dev/zd... and /dev/zvol/ devices disappear. The datasets still show up in 'zfs list'.

At first I thought it might have had something to do with the sub-dataset 'disks' I used. But then I created a zvol directly under data, and it still does not show up after reboot.

Does anyone have any idea why it's not showing up?

I'm using Ubuntu oneiric with 3.0.0-16-server amd64, and I've tried both the PPA daily and stable. If any more info is needed, don't hesitate to ask.

behlendorf commented 12 years ago

Check if the zd device appears under /sys/block; if it does, there's an issue with the zvol udev rules.

lembregtse commented 12 years ago

dm-0   dm-1   dm-2   dm-3   loop0  loop1  loop2  loop3  loop4  loop5  loop6  loop7
ram0   ram1   ram2   ram3   ram4   ram5   ram6   ram7   ram8   ram9   ram10  ram11
ram12  ram13  ram14  ram15  sda    sdb    sdc    sdd    sde    sdf    sdg    sdh
sdi    sdj    sdk

No zd devices there.

lembregtse commented 12 years ago

Oh yes, I don't know if this could be related, but I used -o ashift=12 to create my data pool.

behlendorf commented 12 years ago

Any failure messages in dmesg after the zfs modules load? At module load time we should be registering all the majors and minors for the various zvols.

lembregtse commented 12 years ago

[ 65.553078] SPL: Loaded module v0.6.0.53, using hostid 0x007f0101
[ 65.556448] zunicode: module license 'CDDL' taints kernel.
[ 65.559450] Disabling lock debugging due to kernel taint
[ 65.595909] ZFS: Loaded module v0.6.0.53, ZFS pool version 28, ZFS filesystem version 5
[ 65.630370] udevd[671]: starting version 173

No error messages or warnings are shown by dmesg on boot or on modprobe. It's really vague; I'll destroy the pool and recreate it with the default ashift=9 to see if the problem persists.

lembregtse commented 12 years ago
  1. zpool create data raidz devices
  2. zfs create -V 10G data/fish
  3. The following exists: /dev/zd0 /dev/zvol/data/fish
  4. reboot
  5. /dev/zvol/data/fish and /dev/zd0 do not exist -> no zd device in /sys/block

So it's the same with ashift=9.

zfs list shows:

NAME        USED  AVAIL  REFER  MOUNTPOINT
data       10.3G  14.2T  48.0K  /data
data/fish  10.3G  14.2T  28.4K  -

So the volume is still there.

lembregtse commented 12 years ago

OK, I've debugged this a little further. I created the same setup in a virtual machine and could not reproduce the error.

Now there is one big difference between my VM and the host machine: the disk IDs.

On my VM I use /dev/sd* as the disks for the pool. On my host I use /dev/disk/by-id/...

When I use /dev/sd* on my host, the problem goes away and the zvol pops up after reboot. I hope this will help you find the problem.

behlendorf commented 12 years ago

Thanks for the additional debugging on this; I'm sure it will help us get to the bottom of the issue, particularly if we're able to reproduce it locally.

Phoenixxl commented 12 years ago

I have the same problem. However, I'm using the daily snapshot of precise due to driver needs.

Using /dev/sd* is impossible for me, because my motherboard controller will have removable drives connected.

Another slight difference is that I'm not using by-id but have configured a zdev.conf file.

Also, mounts aren't happening on reboot either.

Thank you for taking the time to look into this.

Phoenixxl commented 12 years ago

Looking at past issues where someone reported the same thing, I tried renaming the zvol. That made the zvol appear in /dev/ again.

This isn't really a workaround, but it might help pinpoint the issue.

lembregtse commented 12 years ago

What OS are you using? I use the "mountall"-specific ZFS package. It mounts all ZFS filesystems automatically (if specified).

Phoenixxl commented 12 years ago

Fresh precise pangolin install from a daily snapshot: http://cdimage.ubuntu.com/ubuntu-server/daily/20120315/precise-server-amd64.iso

(I usually don't go for development releases and stay away from daily builds as well; I like stable and tested. Error messages from my controller seemed to be unimplemented in kernels before 3.2, and since this is going to be the LTS release and I'm planning to migrate all my running machines to it, I figured, what the hell.)

The only installed packages are sshd and samba.

Right after the OS, I installed ZFS from Darik Horn's ZFS PPA: ppa:zfs-native/daily

During install, zfs-mountall seemed to be part of it and was configured, yet it's not doing its thing.

I haven't edited /etc/default/zfs to do mounting either.

Does renaming the zvol make it show up in /dev/ for you again?

lembregtse commented 12 years ago

I use oneiric, as precise had KVM errors on PCI passthrough. I think the AMD IOMMU was wrongly mapped by the kernel.

On oneiric I use "mountall=2.31-zfs1" to mount ZFS for me. The regular method is not working, but someone offered a solution somewhere in the issues.

I also reverted from /dev/disk/by-id to /dev/sd* to solve the problem. You could try an export followed by an import with -d /dev/disk/by-id or -d /dev to switch between those, as sketched below.
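
A minimal sketch of that switch, assuming the pool is named 'data' as in this thread (substitute your own pool name):

zpool export data
zpool import -d /dev/disk/by-id data    # re-import using the by-id names
# or, to go back to the plain /dev/sd* names:
# zpool import -d /dev data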

Phoenixxl commented 12 years ago

I can't use /dev/sd*. My motherboard's controller has removables attached to it; depending on what's inserted at boot time, the order changes. /dev/sdc to /dev/sdh are highly variable; sda and sdb are my cache.

I need to be able to use by-path; making a zdev.conf is the sane option in that case.

I have mountall installed:

root@Pollux:/home/zebulon# dpkg --list mountall
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name            Version         Description
+++-===============-===============-==========================================
ii  mountall        2.35-zfs1       filesystem mounting tool
root@Pollux:/home/zebulon#

It just isn't mounting at boot. On natty it did so fine, but not on precise. I've looked for configuration options for mountall, but I don't see anything in /etc or /etc/default.

Zvols not showing up in /dev is the bigger issue for me, though. Making a script that renames my zvols at boot time and then renames them back seems like a plaster on a wooden leg, but at this point I see no other way of making them pop up.

Calling mountall a second time after boot (assuming it gets called at boot) does mount my storage, but that's just the same as "zfs mount -a"; maybe it's a priority thing. For mounting, there are several things that can be done in a script that can pass for sane (mountall working at boot would be the expected fix, though).

Phoenixxl commented 12 years ago

As suggested here: http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/5b25e2a172cd2616

I checked whether zfs was in /etc/initramfs-tools/modules; it wasn't. I checked whether zfs.ko was in the initrd; it wasn't. (Does installing the zfs-ubuntu package do this on oneiric?)

So I added zfs to the modules file, updated the initrd, and verified that it was in the initrd:

http://pastebin.com/qCa83Kyw

Rebooted: no change; my storage wasn't mounted (no zvols either).

Also, as suggested in that thread, I reinstalled gawk, spl-dkms, and zfs-dkms, then rebooted.

dkms status
uname -a
cat /proc/partitions

http://pastebin.com/Am8RexPe

Still no change: no zvols, no storage.

I just tried setting mount to 'y' in /etc/default/zfs. It doesn't mount on startup either; a manual 'zfs mount -a' works. Something must be really off.

stephane-chazelas commented 12 years ago

Maybe not related, but the issue I have here is that upon booting, udev fires blkid and zvol_id concurrently for every /sys/block/zdXXX. As there are hundreds of them, and every ZFS operation takes a long time to complete even in normal conditions where only one command is run at a time, udev eventually times out and kills those commands, resulting in missing /dev/zvol files.

The workaround for me is to run the following (zsh syntax) manually after booting to restore the missing zvols, one at a time with a one-second delay between each:

for f (/sys/block/zd*) (udevadm trigger -v --sysname-match=$f:t; sleep 1)
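
For anyone not running zsh, a roughly equivalent bash loop (a sketch of the same idea; zsh's $f:t is the basename of $f) would be:

for f in /sys/block/zd*; do
    udevadm trigger -v --sysname-match="$(basename "$f")"
    sleep 1
done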

rssalerno commented 12 years ago

I just tried setting mount to 'y' in /etc/default/zfs. It doesn't mount on startup either; a manual 'zfs mount -a' works. Something must be really off.

I am having the same issue as Phoenixxl. It started after going from ppa stable to ppa daily. I suspect this is related to /etc/init.d/zfs being changed into /etc/init.d/zfs-mount and zfs-share. I tried the following:

update-rc.d zfs-mount defaults

but it didn't work, plus it caused "zpool status -x" to show corruption. Once the rc(n).d links were removed, the "corruption" disappeared, but the filesystems still had to be mounted manually after boot.

dajhorn commented 12 years ago

There seem to be several potentially conflated problems reported in this ticket:

  1. You can get spurious "OFFLINE" or "FAULTED" errors if the /etc/zfs/zpool.cache file is stale. Try regenerating it (see the sketch after this comment).
  2. Always use zpool create ... /dev/disk/by-id/... and zpool import -d /dev/disk/by-id. If you found a circumstance where using /dev/sd* is actually required, then please open a new ticket for it. KVM and VMware have bugs that can cause missing links in /dev/disk/by-id.
  3. ZoL is incompatible with lazy drive spin-up. Set the BIOS, HBA, or virtual machine host to always fully spin up all drives. KVM and/or VirtualBox sometimes issue a hotplug event on INT13 disks after POST, which never happens on a sane system and which breaks ZoL.
  4. Please, therefore, try to reproduce any bug involving KVM without KVM.
  5. Calling zfs mount -a in an init script is futile on a fast or big system that can race upstart. Manually running zfs mount -a afterwards while the system is quiescent is not diagnostic.
  6. Ubuntu Natty and Ubuntu Precise need the zfs-mountall package. If the /sbin/mountall that is patched for ZFS is failing -- while the init.d scripts are disabled -- then please open a separate ticket for it.
  7. udev trips over its shoelaces when more than a few hundred ZoL device nodes are instantiated.
  8. udev can call zpool_id on a storage node before the ZFS driver is ready for the request, which spews warnings onto the system console and can hang dependent services. The provisos about lazy drive spin-up and virtualized environments apply here too.

The last two points might resolve by making the zfs.ko module load and plumb itself synchronously.
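
Regarding item 1, a hedged sketch of regenerating the cache file (using 'data', the pool name from earlier in this thread, and the default /etc/zfs/zpool.cache path; adjust for your system):

zpool set cachefile=/etc/zfs/zpool.cache data   # rewrites the cache file for this pool
# or export and re-import so the cache entry is rebuilt from scratch:
zpool export data
zpool import -d /dev/disk/by-id data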

Phoenixxl commented 12 years ago

Not wanting to make the faux pas of creating a duplicate ticket, and having an issue with the same symptoms (i.e. zvols not showing up at boot), I added to this one.

Having both zvols not showing up and mounting not happening, I didn't want to assume the two issues were unrelated; hence I mentioned both here.

I, for one, am not using any form of virtualization.

As to point 5, diagnostic or not, I am in the very real situation of needing to rename zvols and do a 'zfs mount -a' in a batch file, since it's the only way to get my machine started and doing its job. I am willing to try whatever you ask for diagnostic purposes; I'm already happy these things are being looked into.

As for point 6, I did a clean precise install and installed ZFS using the daily PPA. I saw something scroll by mentioning zfs-mountall, so I'm assuming that was the patched mountall being installed. I will start a new ticket relating to that.

Before any boot-specific things get started, my computer spends 30 seconds detecting my first 8 drives (where I see drive activity), then another minute spinning up a further 12 on another controller. I assume everything is spun up by the time anything ZFS-related gets loaded.

In syslog, the only thing I see with the keyword "zfs" is: "ZFS: Loaded module v0.6.0.54, ZFS pool version 28, ZFS filesystem version 5". Am I supposed to see more after that if things run correctly?

Also (and I think this is somewhat related, but if it isn't I'll make a new ticket, unless it's normal behavior): I created a zvol, which showed up as zd0 in /dev at creation time. During the next two days, to be able to use it, I had to rename it on every reboot. Now for the weird bit: today I created a second zvol, which showed up as zd16. 16 sounds about right given the number of reboots combined with renames since zd0 was made. Is there some kind of counter that gets incremented with every rename?

Should I start two new tickets? One for zvols not showing up, the other for zfs-mountall not starting, both specific to precise. Maybe it was wrong to think lembregtse's zvol issue and my own were the same.

Kind regards, Phoenixxl.

dajhorn commented 12 years ago

@Phoenixxl:

Not wanting to make the faux pas of creating a duplicate ticket

Don't worry about that. The regular ZoL contributors are friendly and duplicate tickets are easy to handle.

I am in the very real situation of needing to rename zvols and do a 'zfs mount -a' in a batch file, since it's the only way to get my machine started and doing its job

If you are renaming volumes, then perhaps you have bug #408. @schakrava proposes a fix in pull request #523, but you would need to rebuild ZoL to try it.

In syslog, the only thing I see with the keyword "zfs" is: "ZFS: Loaded module v0.6.0.54, ZFS pool version 28, ZFS filesystem version 5". Am I supposed to see more after that if things run correctly?

No, this is normal. Anything more than that usually indicates a problem.

Today I created a second zvol, which showed up as zd16. 16 sounds about right given the number of reboots combined with renames since zd0 was made. Is there some kind of counter that gets incremented with every rename?

This is also normal and is caused by an implementation detail.

The number on each bare zvol device node increments by sixteen. Notice how these numbers correspond to the minor device number that is allocated by the system. If you partition /dev/zdN, then each partition node will get a minor device number between N+1 and N+15.

For example, this system has four zvols, the first of which is partitioned:

# ls -l /dev/zd*
brw-rw---- 1 root disk 230,  0 2012-03-18 13:16 /dev/zd0
brw-rw---- 1 root disk 230,  1 2012-03-18 13:16 /dev/zd0p1
brw-rw---- 1 root disk 230,  2 2012-03-18 13:16 /dev/zd0p2
brw-rw---- 1 root disk 230,  3 2012-03-18 13:16 /dev/zd0p3
brw-rw---- 1 root disk 230, 16 2012-03-18 13:09 /dev/zd16
brw-rw---- 1 root disk 230, 32 2012-03-18 13:09 /dev/zd32
brw-rw---- 1 root disk 230, 48 2012-03-18 13:09 /dev/zd48

Also note that device nodes are created for snapshots and persist after a zfs rename. If the system is running something that automatically creates snapshots, or if you frequently rename zvols, then the /dev/zd* number can become large, which is normal.

Note that manually renaming any /dev/zd* device node, or using one that is stale, will confuse udev and break the system. Always use the /dev/zvol/ aliases instead (which are sometimes incorrect per bug #408).
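
As an illustration of that advice, using the dataset names from the top of this thread (the mkfs call is only an example of addressing the device):

mkfs.ext4 /dev/zvol/data/disks/router-root    # stable alias that follows the dataset name
# rather than the bare node, whose minor number can change across reboots and renames:
# mkfs.ext4 /dev/zd0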

Should I start two new tickets? One for zvols not showing up, the other for zfs-mountall not starting, both specific to precise.

Yes, but please check whether you are affected by the problem described in bug #408.

toadicus commented 12 years ago

I've been running into this issue: after creating a zvol, it appears as /dev/zdX (but not as /dev/pool/path), but after a reboot it doesn't appear as either /dev/zdX or /sys/block/zdX. I'm running 64-bit Gentoo on bare metal. Is there any information I can provide to help diagnose the issue?

toadicus commented 12 years ago

After some further investigation, it looks like this might be something to do with order of operations, perhaps in module loading? After a fresh reboot, the ZVOLs did not appear in /dev or /sys/block, but if I manually zfs stop, rmmod zfs, zfs start, they appear. For now I've removed the zfs module entry from /etc/conf.d/modules, and I'll let the zfs script load the modules. I'll let you know if that solves it the next time I get a chance to reboot.

craig-sanders commented 12 years ago

FYI, for anyone else encountering this problem...

I ran into this again this morning on one system. I did the export followed by import trick to get the /dev/zd* devices to show up, but still no /dev/zvol/ or /dev/poolname/ symlinks.

This seems to be the easiest way to get the symlinks to show up again:

udevadm trigger -v --subsystem-match=block --sysname-match=zd* --action=change

The '-v' is optional, for verbose output. Also optional is '-n', for a dry run (--dry-run).

ryao commented 12 years ago

This appears to be a duplicate of issue #441.

Phoenixxl commented 12 years ago

It isn't the same.

I don't need to import or export anything; renaming the zvol works.

Using rc6 on natty was just fine; this popped up when using precise.

zpool status shows the pools being OK; it's just that nothing is mounted at boot.

Phoenixxl commented 12 years ago

This is still happening. Today, after having resolved another unrelated issue, I am confronted with this again.

The last ZFS release tested is 0.6.0.62, on Ubuntu Server with kernel 3.2.0-24-generic.

I have been exporting and importing my pools all week due to another issue. Whatever ryao was suffering from, for me, unlike issue #441, it doesn't resolve when exporting/importing. Unless ryao means he has to export/import on every reboot, in which case it might still be the same issue, since doing that, like renaming, would make them show up as well, I suppose.

Would uploading dmesg output or any other log/diagnostic output help in finding the cause of this? I would gladly provide it.

Kind regards.

bitloggerig commented 12 years ago

Tested on Arch Linux, as drukargin suggested:

the ZVOLs did not appear in /dev or /sys/block, but if I manually zfs stop, rmmod zfs, zfs start, they appear

Symlinks in /dev/zvol appear as well.

behlendorf commented 12 years ago

Commit fe2fc8f6d383f1621446f98bb277c12f6b457b8f might help with this.

lembregtse commented 12 years ago

I'll take a look at this in a couple of days and let you guys know if this behaviour has changed.

ryao commented 12 years ago

@behlendorf Commit fe2fc8f6d383f1621446f98bb277c12f6b457b8f does not appear to exist.

behlendorf commented 12 years ago

Indeed, sorry about that. I just added this to master and pushed it to GitHub.

cjdelisle commented 12 years ago

I think I have the same issue. I've got ZFS running over a LUKS-encrypted block device, and everything works perfectly except for the zvols not showing up after reboot. /etc/init.d/zfs stop && modprobe -r zfs && modprobe zfs && /etc/init.d/zfs start works around the issue; /etc/init.d/zfs stop && /etc/init.d/zfs start does not. Given that my block device simply doesn't exist until I type my passwords, my guess is it's a race condition, and it makes sense that it would happen with /dev/disk/by-id/... too.
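
Written out as a sequence, the workaround above (for an init.d-based setup; nothing here beyond what the comment describes):

/etc/init.d/zfs stop
modprobe -r zfs      # unload the module once the LUKS device actually exists
modprobe zfs         # reloading registers the zvol minors again
/etc/init.d/zfs start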

In my /var/log/messages ( https://ezcrypt.it/Zh5n#jRgE1ZyVVg50FB39tzR9GRrA ) I see zfs.ko being loaded earlier than I would expect. My filesystem is mounted in rc.local just after tun0 is set up (shown in log/messages) so I expect the module to be loaded much later. Note: the module loading and unloading at the bottom of the paste is from me debugging.

Thanks for such a great project, I will debug further and report results.

ryao commented 12 years ago

As I posted in issue #441:

Do people affected by this see any errors in dmesg?

I noticed that zfs_ioc_pool_import() in ./module/zfs/zfs_ioctl.c calls zvol_create_minors() iff error == 0. It looks like it is possible for an SPL replay error to set error to a non-zero value, and if zc->zc_nvlist_dst == 0, then zvol_create_minors() will never be called, causing the zvols to disappear.

Phoenixxl commented 12 years ago

Nope. Upgraded today; still not showing up.

The only line with zfs in it in dmesg is:

[ 15.019142] ZFS: Loaded module v0.6.0.66-rc9, ZFS pool version 28, ZFS filesystem version 5

cjdelisle commented 12 years ago

No errors in dmesg; I get the same as him. I think the problem is that the zfs module is loaded automatically and doesn't call udevadm settle when loading. I have added the following to my /etc/rc.local:

/sbin/modprobe -r zfs || exit 1
/etc/init.d/zfs start || exit 2

It seems to solve the problem (the last test was inconclusive because my main disk reached its maximum mount count and needed to be scanned).

ryao commented 12 years ago

Which distributions are affected by this issue?

Phoenixxl commented 12 years ago

I am using Ubuntu Server 12.04 LTS.

My temporary solution, as stated above, is to rename my volumes and then rename them back in a batch file, then restart all daemons that depend on them, as sketched below.
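
A hypothetical sketch of that batch file (the dataset and service names are examples only; substitute your own):

#!/bin/sh
# Renaming a zvol and renaming it back forces its /dev node to reappear.
zfs rename data/disks/router-root data/disks/router-root.tmp
zfs rename data/disks/router-root.tmp data/disks/router-root
zfs mount -a              # mount the filesystems that were skipped at boot
service smbd restart      # restart daemons that depend on the volumes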

Halfwalker commented 12 years ago

Possibly additional information for this, though not directly related to zvols: missing symlinks in /dev/disk/by-id and ZFS filesystems not auto-mounting at boot.

I just rebuilt a backup box to use Ubuntu 12.04 and ZFS - it was an mdadm-based box. It has 8x 1TB drives run off a 3ware controller and 6x 750GB drives off an old Adaptec CERC controller. udev does not see or process the drives on the 3ware card, so /dev/disk/by-id does not show them.

I'm currently using /dev/disk/by-path for ZFS, which works so long as the controllers do not change slots. Not likely, but I would prefer a failsafe setup rather than having to remember if (when) I modify the system. Anyone else using 3ware controllers? Do you have symlinks showing up correctly?

Regardless of this, the ZFS filesystems aren't mounting at boot - I have to sudo zfs mount -a after each reboot. Per this thread, I just now rebuilt the initrd to include zfs, and as soon as a copy process finishes (about 2 hours) I will reboot to check.

cjdelisle commented 12 years ago

I'm using Debian wheezy. I did some further testing, and I am quite convinced that it is a matter of disks not being ready when the kernel module is loaded. I am now unloading the module and then calling /etc/init.d/zfs start, and it works perfectly. Two solutions I can think of:

  1. prevent the module from being auto-loaded by the OS until the init script loads it (perhaps using a blacklist)
  2. execute the udev rules from the init script rather than on module load.

I realize my weird LUKS-based configuration is probably not officially supported, but I think it is the same issue.

dajhorn commented 12 years ago

I just now rebuilt the initrd to include zfs

To get the desired behavior, all storage devices must be online and the /etc/zfs/zpool.cache file must be available before the ZFS driver is loaded.

This means that putting zfs.ko into the initramfs causes problems. To get the best result, try to load zfs.ko as late as possible during system start.
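
A hedged sketch of keeping zfs.ko out of the initramfs on Debian/Ubuntu, assuming zfs was added to /etc/initramfs-tools/modules by hand as described earlier in this thread (a packaged initramfs hook, if any, is a separate matter):

sed -i '/^zfs$/d' /etc/initramfs-tools/modules   # drop the manually added zfs entry
update-initramfs -u                              # rebuild the initrd without zfs.ko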

ryao commented 12 years ago

@dajhorn That would be an Ubuntu-specific issue. Putting zfs.ko into the initramfs does not cause any problems on Gentoo. I assume that this is because of Gentoo's initramfs generation software, which is called genkernel. The command that I use on my desktop is as follows:

genkernel all --makeopts=-j7 --no-clean --no-mountboot --zfs --bootloader=grub2 --callback="module-rebuild rebuild"

That will run the kernel build system before building the initramfs. The --zfs --bootloader=grub2 --callback="module-rebuild rebuild" flags are mandatory, while the rest are optional. I plan to modify genkernel so that only the --zfs --bootloader=grub2 flags are needed. Currently, module-rebuild rebuilds out-of-tree kernel modules in the sequence that they were installed, which is not always spl first and zfs second; that is something else I need to fix.

Anyway, I hope this might give people some insight on how to avoid this problem on Ubuntu.

dajhorn commented 12 years ago

@ryao: I responded to an Ubuntu user, and most of the people tracking this ticket are using deb or rpm systems.

Debian and Ubuntu users should not put the zfs.ko module into the initramfs or otherwise try to load it early.

madpenguin commented 11 years ago

Removing all traces of zfs from the module configs has no effect on Ubuntu (12.04). If, however, you insert "rmmod zfs" just before the line "zfs mount -a" in your zfs mount script, this seems to do the trick.

ryao commented 11 years ago

A Gentoo user in IRC had this problem. Some of the files from the sys-fs/zfs package were missing from his system, and /lib/udev/rules.d/60-zvol.rules was among them. He reinstalled the sys-fs/zfs package, ran udevadm trigger, and the problem went away.
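
For other Gentoo users, the repair described above roughly amounts to (a sketch; the package and rules file are as named in the comment):

emerge --oneshot sys-fs/zfs    # reinstall the package that ships 60-zvol.rules
udevadm trigger                # re-run the udev rules so the /dev/zvol links appear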

madpenguin commented 11 years ago

Mm, not a problem. I've discovered there is a major problem with ZVOL performance that everyone seems to know about but strangely omits when recommending ZFS. ZVOL volumes run at less than 1/4 the speed of comparable LVM volumes, hence they're not really usable. In this instance, all I want from ZFS is logical volumes, so I'm going to revert to LVM.

ZFS is great under certain circumstances, and the stability issues seem fixed, but this ZVOL issue seems to trace its origins back to Solaris, i.e. it looks like a design flaw, which is worrying on so many levels, not least because the Linux porting people are unlikely to be able to fix it.

It's interesting that ZFS seems to have made its way onto many storage-type devices, but now that people want to use its ZVOL functionality to host virtual machines, they're a bit stuffed!

Halfwalker commented 11 years ago

If you only want logical volumes, then LVM probably is a (somewhat) better choice. But ZFS offers so much more ... Given the size of the volumes (mine at least), I think that the block-checking capability is crucial. Truly KNOWING that the data on disk is good is a nice I-can-sleep-at-night feeling.

madpenguin commented 11 years ago

Indeed. What worries me is that if they got the design of zvols so wrong, what else isn't what it should be?

Really, if some of its headline features simply don't work, does ZFS have a future?

dajhorn commented 11 years ago

Indeed. What worries me is that if they got the design of zvols so wrong, what else isn't what it should be?

Really, if some of its headline features simply don't work, does ZFS have a future?

I think that this is the first "ZoL is dying" comment that I've seen. Let me be the first to congratulate all ZoL contributors for reaching this important project milestone. ZFS on Linux is gaining momentum and going mainstream.

madpenguin commented 11 years ago

Turning my question into a statement is perhaps a little misleading. If I could expand a little:

From what I've read and experienced (and please correct me if I'm wrong), while ZoL works well (and I've been running it live on around six servers for over a year), it appears that there is a design flaw in ZVOL (apparently a ZFS design flaw, not a ZoL issue) that causes it to run at less than 1/4 of the speed one might expect.

I now find that, with the rise of cloud-based systems, I need to use device-based volumes on these machines, and since ZVOL simply isn't practical at such a reduced speed (again, please, somebody correct me if I'm wrong here!), I seem to have no option but to migrate all the systems currently running ZFS back to LVM, which is what they were running prior to moving over to ZFS.

Although this "might just be me", my impression is that "the cloud" is becoming quite popular, and my reasons for needing device-based volumes are going to apply to other people; indeed, my expectation is that they will apply to a lot of people.

I have found ZFS generally to be excellent, and it carries all the features I want/need, whereas the alternative, btrfs, is immature, unstable, and lacking in features by comparison (and indeed some of the design choices they seem to have made don't seem to be as smart as some of the choices made by ZFS); btrfs doesn't (as far as I know) currently have a ZVOL-type feature.

So, to expand on my "question" a little: given that logical volumes are a fact of life for many people, especially with the rise of cloud and virtualisation technology, is ZFS's lack of workable ZVOL functionality going to exclude it as an option? Furthermore, once this functionality is added to btrfs (not to mention the page cache vs. ARC issues), is ZFS going to remain "mainstream"? Is there any chance that the people at Oracle will address the issue and release the patches to the open source community?

If you could invent an electric car that would do 1000 miles on one charge, it would be a very attractive vehicle. If, on the other hand, it were limited to 25 miles per hour, applications for it might be limited... (!)

Phoenixxl commented 11 years ago

Please don't hijack this thread.

I would like to see the issue where zvols don't show up after reboot resolved and am really not interested in all this other opinionated drivel.

Not to mention I get an email any time something gets added to this thread.

Thank you in advance for keeping to the subject at hand.

Feel free to start a discussion about anything you want somewhere else.

Regards, Phoenixxl.