openmediavault / openmediavault

openmediavault is the next generation network attached storage (NAS) solution based on Debian Linux. Thanks to the modular design of the framework it can be enhanced via plugins. openmediavault is primarily designed to be used in home environments or small home offices.
https://www.openmediavault.org

Error mounting a file system whose label contains blanks... #566

Closed BWPr closed 4 years ago

BWPr commented 4 years ago

Description of issue/question
--------------------------------------------------------
I labelled an SSD drive as "HP SSD M700 240GB" (on a Windows PC). When I attempt to mount this SSD (OMV5 -> Storage -> File Systems -> Mount) I get a diagnostic error message (see below). However, when I rename the drive to HP_SSD_M700_240GB the drive mounts (and I can access it normally).

Versions report
-----------------------------------------------------------------------
OMV5: 5.2.2-1 (Usul) - no other plugins
Raspbian: Linux OMV5 4.19.75-v7l+ #1270 SMP - clean install
Hardware: Raspberry Pi 4 B

Diagnostic report
--------------------------------------------------------------------
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run fstab 2>&1' with exit code '1':

/usr/lib/python3/dist-packages/salt/utils/path.py:265: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  if not isinstance(exes, collections.Iterable):
/usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: formatargspec is deprecated since Python 3.5. Use signature and the Signature object directly
  *salt.utils.args.get_function_argspec(original_function)

raspberrypi:
----------
          ID: create_filesystem_mountpoint_3e0c7d74-8be3-410a-882a-a4a3140fdeaf
    Function: file.accumulated
      Result: True
     Comment: Accumulator create_filesystem_mountpoint_3e0c7d74-8be3-410a-882a-a4a3140fdeaf for file /etc/fstab was charged by text
     Started: 03:42:09.906440
    Duration: 1.513 ms
     Changes:
----------
          ID: mount_filesystem_mountpoint_3e0c7d74-8be3-410a-882a-a4a3140fdeaf
    Function: mount.mounted
        Name: /srv/dev-disk-by-label-HP SSD M700 240GB
      Result: False
     Comment: mount: bad usage
              Try 'mount --help' for more information.
     Started: 03:42:09.909577
    Duration: 554.41 ms
     Changes:
----------
          ID: append_fstab_entries
    Function: file.blockreplace
        Name: /etc/fstab
      Result: True
     Comment: Changes were made
     Started: 03:42:10.465874
    Duration: 13.144 ms
     Changes:
              diff:
                  ---
                  +++
                  @@ -3,3 +3,6 @@
                   PARTUUID=6c586e13-02 / ext4 noatime,nodiratime,defaults 0 1
                   # a swapfile is not a swap partition, no line here
                   # use dphys-swapfile swap[on|off] for that
                  +# >>> [openmediavault]
                  +/dev/disk/by-label/HP\x20SSD\x20M700\x20240GB /srv/dev-disk-by-label-HP\040SSD\040M700\040240GB ntfs defaults,nofail 0 2
                  +# <<< [openmediavault]

Summary for raspberrypi
------------
Succeeded: 2 (changed=1)
Failed:    1
------------
Total states run:     3
Total run time: 569.067 ms
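Note how the appended fstab entry encodes the blanks two different ways: the udev symlink path uses \x20 hex escapes, while the fstab field uses getmntent(3)-style octal \040 escapes. A minimal sketch of the fstab-side escaping (hypothetical helper; OMV's actual implementation is in PHP):

```python
# Sketch of getmntent(3)-style octal escaping as seen in the fstab diff
# above. Hypothetical helper, not OMV's actual code.
_FSTAB_ESCAPES = {
    " ": "\\040",   # space
    "\t": "\\011",  # tab
    "\n": "\\012",  # newline
    "\\": "\\134",  # backslash itself
}

def fstab_escape(path: str) -> str:
    """Escape a path so it is safe as a whitespace-separated fstab field."""
    return "".join(_FSTAB_ESCAPES.get(ch, ch) for ch in path)

print(fstab_escape("/srv/dev-disk-by-label-HP SSD M700 240GB"))
# /srv/dev-disk-by-label-HP\040SSD\040M700\040240GB
```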

votdev commented 4 years ago

Duplicate of https://github.com/openmediavault/openmediavault/issues/524. This is a known issue and must be fixed by SaltStack. In the meantime, use labels without blanks.

gangstanthony commented 4 years ago

There is no edit button. How can I rename the label without deleting it? It contains data.

ryecoaaron commented 4 years ago

Plug it back into your Windows box and rename it.

mrwensveen commented 4 years ago

Plug it back into your Windows box and rename it.

Assuming you have a Windows box to plug it (back? as if...) into. Another way is to ssh into your OMV box and use ntfslabel as root (or using sudo): ntfslabel /dev/sdXX Label

gaelalso commented 4 years ago

Plug it back into your Windows box and rename it.

Assuming you have a Windows box to plug it (back? as if...) into. Another way is to ssh into your OMV box and use ntfslabel as root (or using sudo): ntfslabel /dev/sdXX Label

Thank you so much, it works fine!!! Is a fix scheduled for the next update?

gaelalso commented 4 years ago

It works with an NTFS disk, but on a RAID group disk (old btrfs from Synology) the command doesn't work; do you have any idea?

gaelalso commented 4 years ago

So I found the solution; for anyone like me searching for one, it is: sudo btrfs filesystem label /dev/DiskDeviceNumber "NewLabelWithoutBlank"

in my case: sudo btrfs filesystem label /dev/md127 Synology

votdev commented 4 years ago

Is a fix scheduled for the next update?

The bug must be fixed by the SaltStack project, so I cannot say when it will be fixed.

mrwensveen commented 4 years ago

It doesn't look like there's much activity on the bug you reported in SaltStack. I don't know SaltStack at all, nor how hard it would be to fix the bug. Maybe there's a way to warn OMV users when they try to use labels with blanks, or even to allow renaming them?

votdev commented 4 years ago

Fixing the issue in SaltStack is not possible; it's too complicated because the whole state needs to be refactored. Working around the issue in OMV is no solution. OMV already notifies the user when a filesystem whose label contains blanks is about to be mounted.

the-Arioch commented 3 years ago

How can a user override this notification and make OMV proceed further with mounting?

There seems to be one bug in Salt with parsing mount -l output when a mount point has spaces in its name. https://github.com/saltstack/salt/issues/54508#issuecomment-808881089

It was trivial to fix, and it also probably is not related here: after I changed the NTFS partition label and mounted it, the mount point had no relation to the said label at all and used the partition's UUID instead: /dev/sdb1 on /srv/dev-disk-by-uuid-C6705D84705D7BDD type fuseblk

IOW, it seems OMV5 long ago used to create mount points by label, and then could not negotiate with Salt the format for communicating those mount points.

https://github.com/openmediavault/openmediavault/issues/524#issuecomment-560312187

But it seems OMV no longer does so, and there is no point in keeping the prohibition on spaces in the label attribute, which is not used anyway. At least not for NTFS partitions.

So, the question is how to override the notification and reproduce the said behaviour, and to find the specific place in the OMV5 codebase which calls into Salt to make that mount. It would also help to be able to reproduce that Salt behaviour from the bash command line, using salt-call or something.

votdev commented 3 years ago

How can a user override this notification and make OMV proceed further with mounting?

There is no way to do that. OMV will not support blanks in filesystem labels until the issue has been fixed upstream by SaltStack. Because OMV does not maintain Debian packages of external projects, this MUST be fixed upstream first.

It was trivial to fix, and it also probably is not related here: after I changed the NTFS partition label and mounted it, the mount point had no relation to the said label at all and used the partition's UUID instead: /dev/sdb1 on /srv/dev-disk-by-uuid-C6705D84705D7BDD type fuseblk

OMV tries to always use predictable device files. Using labels (/dev/disk/by-label/) is the last choice before falling back to non-predictable, unstable canonical device files. If OMV detects device files like /dev/disk/by-id or /dev/disk/by-uuid, these are preferred; /dev/disk/by-label/ is only used when the former two types are not available for the device. Using /dev/disk/by-label/ is still better than canonical device files, because those might change with every boot: the kernel assigns them to the devices in the order they are detected, so it really depends on timing. You could just as well roll the dice. Because of that, non-predictable canonical device files are a no-go for a NAS.
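The preference order described here can be sketched roughly as follows (illustrative Python only; the actual logic lives in OMV's PHP filesystem class, and the helper name is invented):

```python
# Illustrative sketch of the device-file preference described above:
# by-id and by-uuid first, by-label as a fallback, canonical names last.
PREFERENCE = ("/dev/disk/by-id/", "/dev/disk/by-uuid/", "/dev/disk/by-label/")

def preferred_device_file(candidates: list[str], canonical: str) -> str:
    for prefix in PREFERENCE:
        for candidate in candidates:
            if candidate.startswith(prefix):
                return candidate
    # Last resort: /dev/sdX names may change with every boot.
    return canonical

print(preferred_device_file(
    ["/dev/disk/by-label/Media", "/dev/disk/by-uuid/C6705D84705D7BDD"],
    "/dev/sdb1",
))
# /dev/disk/by-uuid/C6705D84705D7BDD
```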

the-Arioch commented 3 years ago

I am not convinced the issue is in Salt as you imply. To me it looks more like miscommunication, with OMV sending wrong data that Salt was not hardened against. Granted, when there are no clear specifications of the protocols this is always guesswork about "who was in the wrong", but when there are no docs, the source is the docs, and thus the Salt sources are the de facto documentation of what data should be flowing down the Salt protocol.

I agree that their mount.py could be done better, as they clearly only do ad hoc jockeying around one very specific "special character" while ignoring all the others. However, even with OMV I do not think I have seen a bug report about a label with a \t or \n or \ character, only about spaces.

I may be wrong, and it may be that different modules of Salt have different expectations about this data, and then it would indeed be a Salt bug. But the more I was patching Salt, the more it looked like broken data passed in by OMV.

Of course this is just speculation too; I did not see the code path, the data exchange, etc.

What can I patch in OMV5 to make this notification go away? I wonder if creating an ext4 or some other partition on a USB thumb drive would be enough to trigger the mounting-by-label code path instead of the mounting-by-UUID one.

And how can I use bash to trigger the Salt action as if it had been done by OMV5?

If OMV detects device files like /dev/disk/by-id or /dev/disk/by-uuid these are preferred. Using /dev/disk/by-label/

Can you show me the specific place where it is done?

Also, this means that for partitions having those IDs there is no reason to ban spaces in labels, even considering the perceived "bug in Salt". It seems to me the "filesystem" object of OMV should have a "mountpoint_of_choice" function, implementing the preferences you describe in a DRY way. Then the WebUI should check not for spaces in labels, but for spaces in that very mount point, whatever it happens to be.


Because of that non-predictable canonical device files are a no-go for a NAS.

IDE/SATA HDDs have a lot of IDs - serial number, firmware, model... They are quite plug-and-play friendly for persistent identification (USB drives not so much, USB mice and keyboards not at all).

Granted, it may be too much bother to fetch the drive serial number via SMART/hdparm and then mix it into the partition name. I just want to show that it is doable.

And I agree my use case is less hardcore NAS and more family multimedia hub. I bought a 6-HDD NAS case and pulled out a mini-ITX ION2 mainboard too weak to be used for a desktop (it was already too weak when dad purchased it, for my tastes). And I am more interested in SATA hotplug access to the family media archive than in "normal" RAID and incremental backups.

If I may dream wild, I would like to eventually add more value-adding services to this OMV box, like Pydio, like NextCloud or any other Dropbox clone, maybe Yggdrasil or some other mesh net. Perhaps these will remain just dreams, but it showcases how much of the orthodoxy is lost on me.

So you can call me a heretic, or a pioneer :-)

votdev commented 3 years ago

I am not convinced the issue is in Salt as you imply. To me it looks more like miscommunication, with OMV sending wrong data that Salt was not hardened against. Granted, when there are no clear specifications of the protocols this is always guesswork about "who was in the wrong", but when there are no docs, the source is the docs, and thus the Salt sources are the de facto documentation of what data should be flowing down the Salt protocol.

OMV provides the data in the form that is standard and widely used by systemd and other userland tools. Salt does not handle it correctly.

I may be wrong, and it may be that different modules of Salt have different expectations about this data, and then it would indeed be a Salt bug. But the more I was patching Salt, the more it looked like broken data passed in by OMV.

You're right, there are situations where blanks do not need to be escaped, but Salt does not correctly handle the situations where escaping is essential.

What can i patch in OMV5 to make this notification gone?

Why do you want to know that? What do you expect to improve?

If OMV detects device files like /dev/disk/by-id or /dev/disk/by-uuid these are preferred. Using /dev/disk/by-label/ Can you show me the specific place where it is done?

https://github.com/openmediavault/openmediavault/blob/master/deb/openmediavault/usr/share/php/openmediavault/system/filesystem/filesystem.inc#L454

Also, this means that for partitions having those IDs there is no reason to ban spaces in labels, even considering the perceived "bug in Salt". It seems to me the "filesystem" object of OMV should have a "mountpoint_of_choice" function, implementing the preferences you describe in a DRY way.

Why should OMV provide such a feature? Which device file is used under the hood should not be important to the user. OMV does not work this way; it's an out-of-the-box solution which does the job, and it's unimportant to the user how it is done.

The more I think about it, the more I come to the conclusion to completely remove file system label support in OMV6, because it simply causes more trouble than it helps. It is absolutely uninteresting and unimportant to the user how the device file looks.

Spaces are a bad habit that windows users have brought into Linux. No real Linux admin will use them because of reasons.

IDE/SATA HDDs have a lot of IDs - serial number, firmware, model.... The are quite plug-n-play friendly in persistent identification (USB drives not so much, USB mice and keyboards not at all).

I don't want to be rude, but I think you should first learn Linux and how everything works there. There are already rules for which unique identifiers UDEV takes into account to create and populate device files.

If I may dream wild, I would like to eventually add more value-adding services to this OMV box, like Pydio, like NextCloud or any other Dropbox clone, maybe Yggdrasil or some other mesh net. Perhaps these will remain just dreams, but it showcases how much of the orthodoxy is lost on me.

That is already doable by using Docker containers.

the-Arioch commented 3 years ago

The more I think about it, the more I come to the conclusion to completely remove file system label support in OMV6

I would support this, in general, if there are no sudden problems. In database terms it is the old dispute between "natural keys" and "surrogate/synthetic/artificial keys". An ID is just that, an ID. Everything human-readable should be an attribute, because humans tend to change human-readable attributes to make them look cute, or to reflect the last hour's change, or for any other reason.

...but you said you are already down that way, and you only use labels when IDs and UUIDs are not available.

Basically, what we have is some specific disk device and some partition on it. So, if we can fetch/create some unique ID for the disk and then some unique disk-local ID for the partition, then we have an artificial ID for the partition that is unique per given OMV host.

Yet more, hypothetically there could be a way to export all partition-related settings to an ini/json/xml/yaml file and import them on another OMV box, allowing disks to be moved around without losing much of the setup. Granted, this is not normal use of a NAS. OTOH, imagine some backup or cold-storage farm. Usually the disks are lying on some shelf, disconnected. But once someone needs to plug in a specific backup (identified by service, date, etc.) it is just plugged into any HDD slot of any online OMV box - and the familiar network path or whatever is already there (short of different OMV boxes having slightly different host names). Just fantasizing.

One can identify a partition by its order number in the MBR or GPT chains, though I have had cases where partition management tools reordered partitions in the MBR. But you can consider re-partitioning disks as very deep "surgery" after which all gloves are off. Another option, in CAS (content-addressable storage) fashion, would be to generate the partition's ID as a combination of its starting sector and length/ending sector. Then even if some tool reorders partitions (all those volatile digits in /dev/sdx[1-9]) the ID still does not change.
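The CAS-style idea could be sketched like this (purely hypothetical; nothing like this exists in OMV, and the helper name and inputs are invented for illustration):

```python
import hashlib

# Hypothetical CAS-style partition ID, per the idea above: derive a stable
# identifier from the disk serial and partition geometry, so renumbering
# the volatile /dev/sdX[1-9] entries does not change it.
def partition_id(disk_serial: str, start_sector: int, num_sectors: int) -> str:
    raw = f"{disk_serial}:{start_sector}:{num_sectors}"
    return hashlib.sha1(raw.encode()).hexdigest()[:16]

# Same geometry always yields the same ID, regardless of /dev/sdX order.
stable = partition_id("WD-WCC4N1234567", 2048, 488378368)
```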

There are already rules which unique identifiers are taken into action by UDEV to create and populate device files.

I don't want to be rude but you seem to be losing track. Dev files and symlinks to them do NOT contain any blanks regardless of what the partition label is, or do they?

Those were mount points, arbitrary folders created in /mnt or /media or /srv or wherever it is trendy in the next distro, that allegedly make things fall apart between OMV and Salt.

To quote from you: name: "/srv/dev-disk-by-label-My Passport Blue" - and this path was not created by UDEV or systemd, or was it? After this path was created, it was not UDEV or systemd communicating this path to Salt, or was it?

Spaces are a bad habit that windows users have brought into Linux. No real Linux admin will use them because of reasons.

LOL. Sounds very communist: our citizens have no demand for X, whatever they themselves may think.

This notion had a point when moving from DOS (which could not have spaces in file names anyway) to Windows 95/NT. But UNIX-likes have allowed spaces in filenames for ages; it was nothing new there.

From the consumer's PoV I see the problem elsewhere: Linux has no standard for representing spaces in filenames, inventing ad hoc brand-new (read: mutually incompatible) bicycles at every road junction. \xHEX in UDEV, ad hoc pseudo-octal constants in fstab, "\ " in bash, and probably URI-like %HEX escapes somewhere, all of it scarcely documented if at all (and that Salt ticket shows it: the Salt guys could either fix it or reject fixing it, but they seem unable to make up their mind whether OMV sent correct or incorrect argument values to Salt's "function").
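For the record, the three conventions mentioned here really do render the same label three different ways. A sketch (simplified: the real udev and fstab rules escape more characters than just the space):

```python
label = "HP SSD M700 240GB"

# udev symlink style: \xHH hex escapes (as in /dev/disk/by-label/...)
udev = label.replace(" ", "\\x20")

# fstab style: getmntent(3) octal escapes
fstab = label.replace(" ", "\\040")

# bash style: a backslash before each space
bash = label.replace(" ", "\\ ")

print(udev)   # HP\x20SSD\x20M700\x20240GB
print(fstab)  # HP\040SSD\040M700\040240GB
print(bash)   # HP\ SSD\ M700\ 240GB
```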

Now you might say that whatever is right or wrong in Linux, it is a reality that Linux admins just have to deal with. And that is correct, but it is equally correct that user habits are also part of reality, right or wrong.


And, in this particular case, users did NOT name any files with spaces. They made a disk label, not a file. It was OMV's choice to turn the disk label into a folder name verbatim. OMV could replace spaces with underscores or ~ tildes or anything when creating mount points. I had this thought yesterday but did not voice it; the discussion was already getting a bit too personal.

but Salt does not handle the situations correct where it is essentially necessary.

...and I want to see those situations with my own eyes, when I have time for it. And be able to do at least rudimentary debugging, like adding log commands and running bash commands to see which code paths are taken and which variable values are set.

What makes OMV and Salt fall apart is NOT the device file but the mount point.

And my current hypothesis (which I cannot verify) is that OMV failed to provide Salt with correct data: instead of name: "/srv/dev-disk-by-label-My Passport Blue" it should have been name: "/srv/dev-disk-by-label-My\ Passport\ Blue". And Salt probably failed to provide any documentation about incoming data expectations/formats other than the Salt sources themselves, which are poor documentation for many reasons.

systemd and other userland tools. Salt does not handle it correct

Salt is not systemd, is it? And "other userland tools" is nothing certain. What is for sure is that bash - a specific and versatile userland tool - does not consume unescaped spaces, and that data (supposedly OMV-generated) is just wrong from the "other userland tools" angle of view: "/srv/dev-disk-by-label-My Passport Blue"

Would you like the idea that OMV fails to keep its internal data to Salt's standards? I think your implied accusation that Salt fails to keep its data formats to the comfort of OMV sounds no better.


by using Docker containers.

...and all the overhead it pulls in. Even learning Docker and all its requirements and all its fault lines is some burden too; and given the box's memory and CPU, Docker is not even OpenVZ...

the-Arioch commented 3 years ago

With your link, the dumb patch looks to be a change to line 598:

$result = unescape_path($parts[1]); => $result = escapeshellarg(unescape_path($parts[1]));

However, it would be a knee-jerk reaction. The correct approach, I think, would be to keep data raw/verbatim/unescaped within OMV internals and apply escapeshellarg immediately before calling/configuring the external Salt tool (wherever that place is in the OMV sources). That way Salt's convention would not get hardcoded into OMV's internals and you can always switch external tools later.
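In Python terms (Salt's language), shlex.quote plays the role of PHP's escapeshellarg; the escape-at-the-boundary pattern suggested above would look roughly like this (illustrative only; OMV and Salt do not actually build the mount call this way):

```python
import shlex

# Keep the mount point raw/unescaped inside the application...
mountpoint = "/srv/dev-disk-by-label-My Passport Blue"

# ...and quote it only at the boundary where a shell command line is built.
cmd = "mount " + shlex.quote(mountpoint)
print(cmd)
# mount '/srv/dev-disk-by-label-My Passport Blue'
```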

votdev commented 3 years ago

Dev files and symlinks to them do NOT contain any blanks regardless of what the partition label is, or do they?

Right, because they are encoded, and that is one of the problems with Salt: it does not decode or encode them where it is necessary.

Instead of name: "/srv/dev-disk-by-label-My Passport Blue" it should have been name: "/srv/dev-disk-by-label-My\ Passport\ Blue"

Right, and that is what OMV has been doing for ages, e.g.

the-Arioch commented 3 years ago

Now I look at line 723, public function mount($options = "", $dir = "") {, and specifically at $cmd = new \OMV\System\Process("mount", $cmdArgs);

If this function is still used and is not dead code, then why not provide BOTH arguments to mount and avoid tampering with fstab and avoid Salt?

the-Arioch commented 3 years ago

Right, because they are encoded, and that is one of the problems with Salt: it does not decode or encode them where it is necessary.

OMV has been doing for ages

They were NOT encoded in the Salt ticket you created - I quoted the lines there and I quoted the line twice above; it is NOT encoded.

That line is NOT encoded.

As for the rest, I asked several times for bash commands so I can replay what OMV feeds to Salt there. I want to see the data and command calls, to form my own idea of whether the data is correct or wrong.

problems with Salt: it does not decode or encode them where it is necessary

Why is that "necessary"? Because OMV developers say OMV's format for file paths is different from Salt's format, and so Salt's format should be changed to the comfort of OMV?

Or is there an inconsistency within Salt itself, where different Salt modules interpret the same piece of data differently?

There was one inconsistency in Salt parsing mount -l output. And at least on Linux there was no need to even call it at all (but maybe there is on other UNIX-likes). But other than that error, the rest of that Salt module looked internally consistent regarding spaces in file names (but not regarding the 3 other special fstab characters).

the-Arioch commented 3 years ago

https://github.com/openmediavault/openmediavault/blob/master/deb/openmediavault/usr/share/php/openmediavault/system/filesystem/filesystem.inc#L385

escape_path - I did not look at that function, but from context it creates that \xHEX escaping in UDEV format. It is not immediately relevant to /srv/xxx mount points, and it is not relevant to the bash/Salt escaping convention.

https://github.com/openmediavault/openmediavault/blob/master/deb/openmediavault/usr/share/openmediavault/unittests/php/test_openmediavault_functions.php#L267

Same here.

Plus, it manifests OMV's decision to create space-containing mount points, while it could replace spaces with any character: underscore, dash, tilde, anything.

https://github.com/openmediavault/openmediavault/blob/master/deb/openmediavault/srv/salt/omv/deploy/fstab/10filesystem.sls#L33

Now we are hopefully getting to the essence. Here OMV should create data in Salt's format. And you believe it does; Salt the software believes it does not. The Salt programmers say nothing.

- name: {{ mountpoint.dir }}

Line 39

I am not a PHP guy, but I believe the correct command would have been

- name: {=escapeshellarg(mountpoint.dir) } or something like that

And probably the same holds for

- device: {{ fsname }}
- fstype: {{ mountpoint.type }}
- opts: {{ mountpoint.opts }}

It is just a lucky coincidence that they do not have spaces in them. But from a software-correctness point of view, it probably had to be done.

the-Arioch commented 3 years ago

I still believe that when creating a mount point you have to replace or escape or even prohibit those extra three special characters, which I quoted from the Linux sources in your Salt ticket.

Sure, as of now probably no one has met them. But one day someone might win the lottery. Those were, AFAIR, "\t", "\r" and "\\". Replace them with any placeholder (same or different ones) when creating the mount point filename. Because you rely on Salt, and Salt does not provide for those special cases.
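A minimal sketch of that placeholder replacement (hypothetical helper, with "_" as the placeholder; strictly speaking the characters fstab cannot carry unescaped are space, tab, newline and backslash):

```python
# Hypothetical sanitizer, per the suggestion above: replace every character
# that fstab treats specially before it ever becomes part of a mount point.
SPECIAL = set(" \t\n\\")

def sanitize_mountpoint_name(label: str, placeholder: str = "_") -> str:
    return "".join(placeholder if ch in SPECIAL else ch for ch in label)

print(sanitize_mountpoint_name("My Passport\tBlue"))
# My_Passport_Blue
```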

And frankly, I feel discontent that you have not done it since 2019. A clash of egos with the Salt black hole is understandable, but the ones who suffered those months were OMV users, not Salt users or maintainers.

votdev commented 3 years ago

I'm sorry, I stop the conversation here because it is not target-oriented. I can't follow you anymore, everything seems weird to me. I really don't know what your intention of all of this is.

With your link, the dumb patch looks to be a change to line 598:

$result = unescape_path($parts[1]); => $result = escapeshellarg(unescape_path($parts[1]));

However, it would be a knee-jerk reaction. The correct approach, I think, would be to keep data raw/verbatim/unescaped within OMV internals and apply escapeshellarg immediately before calling/configuring the external Salt tool (wherever that place is in the OMV sources). That way Salt's convention would not get hardcoded into OMV's internals and you can always switch external tools later.

Why the hell should escapeshellarg be called here? Do you know what this function does? Do you know what the intention of the getMountPoint method is? To be honest, it does not look like you do. The function does exactly what you are suggesting: it keeps data raw/verbatim/unescaped within OMV. It parses the output of findmnt and replaces the escaped blanks with real blanks, because consumers of this method don't expect encoded/escaped data.
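The decode direction described here (findmnt prints getmntent-style octal escapes, which are turned back into real blanks) can be sketched in Python, though the real getMountPoint is PHP:

```python
import re

# Sketch of decoding getmntent(3)-style octal escapes (\040 etc.) back to
# real characters, as the getMountPoint method is described to do.
def unescape_fstab(path: str) -> str:
    return re.sub(r"\\([0-7]{3})", lambda m: chr(int(m.group(1), 8)), path)

print(unescape_fstab(r"/srv/dev-disk-by-label-HP\040SSD\040M700\040240GB"))
# /srv/dev-disk-by-label-HP SSD M700 240GB
```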

Ranting about the PHP code doesn't make it any better and reinforces my impression of you. Salt is based on Python, not PHP. The code you're ranting about never runs in the Salt context.

If you think you could do it better and want to help the community, then please open PRs that try to fix the issues you think are going wrong.

Have a good day.