While I can't get it to fail, I still think the strpos call in the if statement is going to exclude something it shouldn't. Can you explain why that is needed and how it won't exclude something the if statement would have previously included?
The problem lies in the multiple mountpoints (see the issue mentioned in my first message). The strpos() call is used to verify that this is not a pool, the same way as on line 244 of this file. A pool is not a filesystem, and I don't need to do anything with it, despite the fact that it has a mountpoint.
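For illustration, a minimal sketch of the slash test being discussed (the names are made up, not the actual Utils.php code):

// A ZFS pool name ("tank") contains no "/", while a dataset name
// ("tank/data") does, so strpos() can tell the two apart.
$name = "tank/data";
if (strpos($name, "/") !== false) {
    // $name refers to a dataset (a mountable filesystem), not the pool itself.
}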
I did look at the issue and I understand that it fixes your problem. What I am saying is that it may break other systems since the if statement now mandates that there is a slash in the name to run code within the if statement (adding to the current array). Does the code work on your system without the strpos part?
The configuration object 'conf.system.filesystem.mountpoint' is not unique. An object with the property 'dir' and value '/' already exists.
The mountpoint check is needed to prevent datasets that are not ZFS mountpoints (like a ZFS /var) from being written into the monit config. But the root pool also has a mountpoint and will be included in the list (see the article mentioned in the issue described below, and the sketch after this message).
Look at this: https://github.com/OpenMediaVault-Plugin-Developers/openmediavault-zfs/issues/38 Am I the only one with this problem?
And, of course, if you can suggest a better way of detecting the entity type, please use it (and I think it will need to be replaced in the other places too).
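A minimal sketch of the situation (the dataset names and mountpoint values below are hypothetical): filtering on the mountpoint alone keeps non-mounted datasets out of the config, but the pool itself still passes, which produces the duplicate-mountpoint error quoted above.

$monitList = [];
$entries = [
    "rpool"      => "/",       // the pool itself: reports a mountpoint, so it passes
    "rpool/ROOT" => "none",    // filtered out
    "rpool/var"  => "legacy",  // filtered out
];
foreach ($entries as $name => $mntpoint) {
    if (($mntpoint != "none") && ($mntpoint !== "legacy")) {
        // "rpool" still lands here and gets written out, producing the
        // duplicate 'conf.system.filesystem.mountpoint' error quoted above.
        $monitList[] = $name;
    }
}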
Wow, you are totally missing my point. I didn't say that no one else would have the same issue you are having. I am also not saying the strpos is a bad function to use. I am just saying that if someone's zfs filesystem name does not have a slash in it, your change will break it. Is that possible? I have no idea since I don't actively use zfs but I was trying to make sure it wasn't a problem...
We can only support so many situations. One of the big assumptions in many plugins is that you are using ext4 for the root filesystem. Since you have been filing multiple issues, sometimes it is better to hash it out in the forum for better visibility before creating the issues. There are very few people who see them here and even less people working on these plugins.
What if someone uses a "one-level" pool without datasets? I'm not an expert in ZFS either, so I can't answer your question. But first, I repeat: code based on this assumption is already used in the plugin. Secondly, the pool and all filesystems are visible in the interface. A pool is not a filesystem itself, it's a "set of datasets", and I suppose the underlying logic doesn't know how to work with such entities. Even if some functionality is lost, I think it's better to include these changes, because right now the plugin doesn't work at all with this type of configuration.
One of the big assumptions in many plugins is that you are using ext4 for the root filesystem.
It's a very strange assumption. It's a NAS. Many people use XFS, perhaps even Btrfs, not only ext4. And OMV is a Debian package. I can install it on any Debian system.
I think it's better to fix the current problem than to have a non-working plugin. If someone hits a situation where it doesn't work, they can open an issue for it.
It isn't a strange assumption since you can't install ZFS for root with the OMV install ISO or even the Debian installer (last time I looked). Personally I don't think the OS filesystem needs ZFS, and if you are sharing disks between the OS and data, that goes against the design of OMV.
Just to be done with this, I will merge the code and push the update to the repo when I get a chance.
I don't share disks between the OS and data, but I want to use a mirrored root without md, LVM, etc.: ZFS can do that.
@artiomn Please remember that your setup is a really special scenario because it requires manual steps for installation. Because ZFS will never be part of the Debian project due to its incompatible license, this setup can be called exotic. Now that the code is merged we will see what happens and if it breaks existing installations. I personally cannot validate it because I have never used ZFS; I prefer Btrfs, which is an official part of the Linux kernel.
Could someone take a look into the mentioned issue, that 'Pools' are listed in the filesystem list? If it's true that this is incorrect, then it should be fixed.
Yes, I understand.
Because ZFS will never be part of the Debian project due to its incompatible license, this setup can be called exotic.
Never say "never". There are many Debian-based distributions. FreeNAS uses root on ZFS (it's not about licence conditions, it's about convenience). Why can't OMV include ZFS in the distribution? Is it necessary to follow all Debian policies?
Now that the code is merged we will see what happens and if it breaks existing installations.
Ok.
I personally cannot validate it because I have never used ZFS; I prefer Btrfs, which is an official part of the Linux kernel.
Is it stable?
I have created another ZFS pool (on the hard disks, for storage), and it works.
Never say "never". There are many Debian-based distributions. FreeNAS uses root on ZFS (it's not about licence conditions, it's about convenience). Why can't OMV include ZFS in the distribution? Is it necessary to follow all Debian policies?
Do you have any background knowledge about that? It seems to me you don't. ZFS will never ever be included in the Linux kernel with the current license because the GPL and CDDL do not match. Because of that, Debian will not include it. Ubuntu is on thin ice because they ship ZFS with their distro. FreeNAS is based on FreeBSD, which has ZFS in its kernel and userland.
Because it would violate the GPL, OMV will never ship ZFS.
See https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/.
I have been using Debian every day for about seven years, and of course I know something about licensing restrictions (and several years ago I used FreeBSD on my work machine). In addition, ZFS is already included in Debian, but in the contrib repository (yes, currently they can't use it in the installer).
Ubuntu is on thin ice because they ship ZFS with their distro.
ORLY? Ubuntu has a huge community all around the world, and most of its users don't know about licenses (yes, most people who want to use ZFS know about it, but not all of them want to keep it in mind).
Because it would violate the GPL, OMV will never ship ZFS.
It's a political "problem", not a technical one. There are many ways people could solve this, if it were really necessary.
This PR now prevents the pool from becoming an Fstab database object. I don't know if that is good or not... but:
@artiomn You need to fix the web UI to gray out (disable) the delete button in case a pool is selected to be deleted; otherwise it fails with an error that no mntent entry exists for it.
Ok, I'll try on the weekend.
You need to fix the web UI to gray out (disable) the delete button in case a pool is selected to be deleted
How can I delete a pool if the delete button is disabled?
@artiomn This PR fixes your setup, but this commit breaks adding the POOL to the database; the plugin has always worked that way, because the actual POOL is also a filesystem.
Please test this line in your setup (I am not going to set up a Debian server VM with ZFS root for this):
if (($mntpoint != "none") && ($mntpoint !== "legacy") && ($mntpoint !== "/")) {
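For context, a sketch of where that condition would sit, assuming a loop over ZFS datasets roughly like the one in Utils.php (the variable names here are illustrative, not the actual code):

$result = [];
foreach ($datasets as $name => $mntpoint) {
    // Skip unmounted/legacy datasets and anything mounted at "/",
    // so the root pool never becomes a filesystem database object.
    if (($mntpoint != "none") && ($mntpoint !== "legacy") && ($mntpoint !== "/")) {
        $result[] = $name;
    }
}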
I tested the proposed change in /usr/share/omvzfs/Utils.php and it still seems to solve my old problem. That is, with the proposed change I can open the ZFS pane in the OMV web console without errors, and it apparently allows me access to all the mountpoints.
Thanks for the time and energy you put into this.
@subzero79:
because the actual POOL is also a filesystem
I understand. But usually the pool is not mounted. I can imagine only one case where this fix may not work: a user has created only a pool, without filesystems on it (see the sketch after this message).
What's the problem with this fix? Do you have any broken configurations?
About pool deletion: how can I delete the pool (other than by hand) if I disable this in the UI?
And I don't know the OMV internals well enough to propose a better solution.
The bug was described here: https://github.com/openmediavault/openmediavault/issues/73
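To illustrate that hypothetical edge case (the name is made up, not from the actual report): a bare pool used as a "one-level" filesystem has no slash in its name, so the slash test would skip it even though it is mounted and holds data.

$name = "tank";  // a pool with no child datasets, mounted and used directly
$isDataset = (strpos($name, "/") !== false);  // false: the slash-based check would skip it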