mdziekon closed this pull request 6 years ago.
Just did a quick try on my testing VM, seems OK.
Are you going to submit the fix for destroying a pool in a separate PR?
We can ask @votdev about the unit tests.
@subzero79:

> Are you going to submit the fix for destroying a pool in a separate PR?
It's in the same PR; the changes can be seen in these parts:
@subzero79:

> We can ask @votdev about the unit tests.
The thing is, these tests are already loosely based on the tests in OMV's repo, which, to my surprise, doesn't have that many tests anyway. While I took a different approach to loading mocks & expectations (external files rather than inline code, mostly because these mocks & expectations are pretty long and I didn't want to pollute the tests themselves with too much data), I think the rest is more or less similar.
I mean, obviously any constructive criticism is more than welcome here, but looking at what's already in OMV's repo, I think @votdev won't be able to help us that much.
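For reference, a minimal sketch of the approach I took (the file, class, and method names below are made up for illustration, not the actual ones from this PR):

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical names; the idea is simply to keep the long mock command
// outputs and the expected structures in external files, so the test
// bodies stay readable.
final class PoolStatusParserTest extends TestCase
{
    public function testParsesMirrorPoolStatus(): void
    {
        // Mocked `zpool status` output captured from a real system.
        $mockOutput = file_get_contents(__DIR__ . '/mocks/status-mirror.txt');

        // Expected parser result, stored as JSON next to the mock.
        $expected = json_decode(
            file_get_contents(__DIR__ . '/expectations/status-mirror.json'),
            true
        );

        $this->assertSame($expected, PoolStatusParser::parse($mockOutput));
    }
}
```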
I'll test again, because previously I got an error where the backend attempts to clear the labels on dm-11 and dm-21 when using device-mapper (crypto) devices.
Please retest. Personally, I was able to successfully remove a pool with a mirror using two crypto devices. If it's still reproducible, please provide exact "steps to reproduce" so I can try to do the same thing and see if I've missed some edge case.
Edit: again, I was able to properly create & then destroy a pool using these combinations:

- /dev/disk/by-id/XXX (path without partition numbers)
- /dev/disk/by-path (path without partition numbers)
- /dev/sdX (path without partition numbers)
- /dev/disk/by-id/XXX (path without partition numbers)
- /dev/dm-X (path without partition numbers, which in this case are not created)

The "by-path" aliasing won't work for dm devices, simply because dm devices are not listed in the /dev/disk/by-path directory. But... that's neither ZFS's nor this plugin's fault.
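For anyone retesting, the rough shape of what I ran for each combination was (a sketch; the pool name and device paths are placeholders):

```sh
# Create a mirrored pool from two devices referenced by the alias under test,
# then destroy it and check that the ZFS labels are actually gone.
zpool create -f tank mirror /dev/disk/by-id/ata-DISK-A /dev/disk/by-id/ata-DISK-B
zpool status -P tank   # -P prints full device paths (needs ZoL >= 0.7.0)
zpool destroy tank
```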
Ok, at this point I think I have nothing else to add or change in this PR, so please consider it to be COMPLETED (at least until someone does an actual code review and leaves comments for me to fix or change something) and READY FOR CODE REVIEW.
Changes since the PR creation date:

- Removed the `LC_ALL` env when calling `zpool status` (OMV takes care of translations already using the `LANG=C` export).
- Added an `.editorconfig` based on the current formatting of the project's files.
- Updated the `composer.json` file to include phpunit in dev builds.
- Required `zfsutils-linux` of version at least 0.7.0 (which introduced the `-P` flag).

Also, a note about the whole parser thing: as an alternative to `zpool status`, it is also possible to fetch some information about a pool using the `zdb` command. However, even though it has many options, it seems it does not contain all the details (e.g. it does not have any information about caches or spares, vdev statuses, or vdev error counts). Maybe they will enhance that in the future, but for now it seems we're stuck with the `status` command, which is unfortunately not that "parser-friendly".
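For context, the `config:` section of `zpool status` output looks roughly like this (a simplified example, not taken from this PR's fixtures; device names are placeholders), with the vdev hierarchy encoded purely in indentation, which is what makes it awkward to parse:

```
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME                              STATE     READ WRITE CKSUM
	tank                              ONLINE       0     0     0
	  mirror-0                        ONLINE       0     0     0
	    /dev/disk/by-id/ata-DISK-A    ONLINE       0     0     0
	    /dev/disk/by-id/ata-DISK-B    ONLINE       0     0     0

errors: No known data errors
```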
@mdziekon I will look into setting up the Travis CI stuff in the next couple of days unless @subzero79 has a chance to do it before then.
I just pulled the updated PR, no errors this time when removing the pool.
One thing I did notice, and I don't know when it got lost (it must be the new storage implementation changes in core OMV some time ago), is the capability of formatting ZFS volumes (virtual block devices) in the OMV filesystem section; once formatted in the CLI, they appear. It's not critical for now, but it will be critical if one day the iSCSI plugin gets revived.
@subzero79:

> One thing I did notice, and I don't know when it got lost (it must be the new storage implementation changes in core OMV some time ago), is the capability of formatting ZFS volumes (virtual block devices) in the OMV filesystem section; once formatted in the CLI, they appear. It's not critical for now, but it will be critical if one day the iSCSI plugin gets revived.
I haven't changed anything other than pool creation and deletion, so I'm pretty sure it's not related to this PR (just like you said, it's most likely something that has been gone for a long time, since the migration to the newer version of OMV). So as long as it is not a regression caused by this PR, we shouldn't bother with it here. You should raise a new issue here on GitHub; someone might want to take that up later.
Yes, forgot to mention: I am sure it has nothing to do with this PR, it was from before. We can merge.
I've already set up Travis CI for this repo, but I have no experience with this. Once I merge, are automated builds/tests going to get created there?
Here, I've force-updated (no code changes) my latest commit. From now on, Travis will pick up push hooks and automatically run tests in the background.
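For the record, a minimal Travis setup for a PHPUnit-based repo looks something like the sketch below (illustrative only; the PHP version and paths are assumptions, and the actual committed config may differ):

```yaml
language: php
php:
  - "7.0"
install:
  - composer install
script:
  - vendor/bin/phpunit
```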
This Pull Request reworks the two most basic operations performed on a zpool: its creation and its destruction. Most of the work was done at the "low level", by adding a new, more powerful parser for the `zpool status` command, which allowed us to fix some issues where certain types of drives were not being properly cleared of the ZFS label. It should also be really useful in the future, as it easily reads all information about the devices added to the pool, preserving their hierarchy, states & statistics.

Some background:
There was a discussion on OMV's forum about some problems related to the creation of zpools on LUKS-encrypted devices (the relevant part starts around here: https://forum.openmediavault.org/index.php/Thread/24456-ZFS-Mirror-Pool-creation-fails-on-LUKS-encrypted-devices/?postID=185301#post185301). The original problem was caused by the fact that the ZFS plugin tries to create GPT labels for each device before creating the pool itself (but that was it, no new partitions were actually created on the device). This led to a situation where a previously decrypted LUKS device would get the GPT label, but then, when the zpool creation started, it would lead to an error and automatic re-encryption of the device. The most likely cause is that ZOL will partition the device if it thinks it's necessary, but since the plugin did not actually create any partitions, the two collide somehow, creating the problem.
Since ZOL can handle the partitioning itself, it looks like the pre-labeling step is unnecessary and can be delegated to ZOL.
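To make the removed step concrete: the pre-labeling was roughly equivalent to running something like the first command below for every member device before the pool was created (a simplified illustration, not the plugin's literal code):

```sh
# Old behaviour (removed in this PR): stamp a bare GPT label on the device...
parted --script /dev/sdX mklabel gpt

# ...even though ZoL partitions whole-disk vdevs by itself when you run:
zpool create tank mirror /dev/sdX /dev/sdY
```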
There was also the problem of incorrect ZFS label removal at pool destruction. This problem is related to the plugin's device-retrieval method; it's pretty old and does not handle more exotic devices like NVMe drives (see https://github.com/OpenMediaVault-Plugin-Developers/openmediavault-zfs/issues/50) or DeviceMapper devices (see https://forum.openmediavault.org/index.php/Thread/24456-ZFS-Mirror-Pool-creation-fails-on-LUKS-encrypted-devices/?postID=185288#post185288). First of all, it always assumes that the device is partitioned (which might not be true); second, it assumes the `sd<device_letter><partition_no>` partitioning scheme (e.g. for NVMe drives it will instead be `nvme0n<namespace_no>p<partition_no>`). Both of these assumptions basically restrict us to SATA devices only; otherwise the pool's deletion will always fail and the labels will have to be cleared manually (or the device wiped completely). On top of that, the previous implementation completely ignored SLOGs, Caches & Spares (the plugin does not handle them properly anyway, but you can still add them manually).
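The partition-name assumption is easiest to see on the label-clearing step itself; the suffix the old code appended only matches SATA-style names (paths below are examples):

```sh
# SATA: partition name is <device><partition_no>, matching the old assumption.
zpool labelclear -f /dev/sdb1

# NVMe: partition name is <device>p<partition_no>, so appending just a digit fails.
zpool labelclear -f /dev/nvme0n1p1

# device-mapper: the device may not be partitioned at all.
zpool labelclear -f /dev/dm-1
```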
What was done in this Pull Request:

The main part of this PR is... the pool's status parser. The pool-creation bugfix is a very simple change, only 6 removed lines in total. The pool's destruction, however, required changing how we enumerate all the devices added to the pool, to make it more flexible. I'm aware that this looks like huge overkill (after all, I could simply have fixed the previous implementation with additional special cases), but I still think it was worth it if we consider that the plugin needs a make-over. This piece should be seen as the first step of that make-over: now we have a pretty good mechanism for dealing with the basic information about a pool. With this parser, the rest of the plugin should easily understand the hierarchy of a pool and be able to manage its special vdevs more easily (e.g. adding/removing logs or caches on demand, or even enumerating them without using that ugly "SHOW ALL THE DETAILS" modal).
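To give an idea of the approach, here is a rough sketch only (the actual parser in this PR is more thorough; the function and key names here are illustrative): walk the `config:` section of `zpool status` output and recover each row's nesting depth from its indentation.

```php
<?php
// Rows in the config section start with a tab, then two spaces per nesting
// level. The header row ("NAME STATE READ WRITE CKSUM") is skipped because
// its error columns are not numeric.
function parsePoolConfig(string $statusOutput): array
{
    $rows = [];
    $inConfig = false;

    foreach (explode("\n", $statusOutput) as $line) {
        if (preg_match('/^config:/', $line) === 1) {
            $inConfig = true;
            continue;
        }
        if (!$inConfig) {
            continue;
        }
        if (preg_match('/^errors:/', $line) === 1) {
            break;
        }
        if (preg_match('/^(\s+)(\S+)\s+(\S+)\s+(\d+)\s+(\d+)\s+(\d+)/', $line, $m) === 1) {
            $rows[] = [
                // One leading tab, then two spaces per nesting level.
                'depth'  => intdiv(strlen($m[1]) - 1, 2),
                'name'   => $m[2],
                'state'  => $m[3],
                'errors' => [
                    'read'  => (int) $m[4],
                    'write' => (int) $m[5],
                    'cksum' => (int) $m[6],
                ],
            ];
        }
    }

    return $rows;
}
```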
There's another new thing in this PR: tests. Since the status displaying command is internally pretty complicated (as in: it has many different cases a parser has to handle), I've made the effort to at least partially ensure that we're parsing it correctly, so that further changes will be less likely to accidentally break something. Overall, I've added ~60 tests, most of them related to the parsing itself (with multiple different scenarios tested). But... please note that I originally come from a different environment, where tests are written in a different way. I've never worked with PHPUnit before, and I'm not sure if my approach is actually correct. However, if it's an acceptable solution, I had a plan to maybe set up a Travis configuration for this repo (again, I've never done this before; I'm more of a GitLab CI guy, but at first glance it looks rather simple to do something similar with Travis). We'll see how this goes.
Other things to note / discuss:

- I've kept the existing output format of the `assemblePool` method because I didn't want to break other things that depend on its format. However, it should be noted that some of the zpool class methods are never used...

Update 2018-10-21: I've finished my own final code review. This PR should now be considered COMPLETED (at least until someone does an actual code review and leaves comments for me to fix or change something).
Closes #50