openhab / openhab-docker

Repository for building Docker containers for openHAB
https://www.openhab.org/
Eclipse Public License 2.0

cannot use Zigbee when userdata is mounted on a noexec filesystem #286

Closed: hauntingEcho closed this issue 4 years ago

hauntingEcho commented 4 years ago

I am currently running openHAB in a Docker container managed by OpenMediaVault. By default, OpenMediaVault mounts its data drives with the noexec flag, but the userdata/tmp folder must be on a filesystem that allows execution in order to use a HUSBZB-1 (supported as Ember). Otherwise I get these errors when it is configured:

Launching the openHAB runtime...
/openhab/userdata/tmp/libNRJavaSerialv8_HF_openhab_0/libNRJavaSerialv8_HF.so: /openhab/userdata/tmp/libNRJavaSerialv8_HF_openhab_0/libNRJavaSerialv8_HF.so: failed to map segment from shared object
/openhab/userdata/tmp/libNRJavaSerialv8_openhab_0/libNRJavaSerialv8.so: /openhab/userdata/tmp/libNRJavaSerialv8_openhab_0/libNRJavaSerialv8.so: cannot open shared object file: No such file or directory
/openhab/userdata/tmp/libNRJavaSerialv7_HF_openhab_0/libNRJavaSerialv7_HF.so: /openhab/userdata/tmp/libNRJavaSerialv7_HF_openhab_0/libNRJavaSerialv7_HF.so: failed to map segment from shared object
/openhab/userdata/tmp/libNRJavaSerialv7_openhab_0/libNRJavaSerialv7.so: /openhab/userdata/tmp/libNRJavaSerialv7_openhab_0/libNRJavaSerialv7.so: cannot open shared object file: No such file or directory
/openhab/userdata/tmp/libNRJavaSerialv6_HF_openhab_0/libNRJavaSerialv6_HF.so: /openhab/userdata/tmp/libNRJavaSerialv6_HF_openhab_0/libNRJavaSerialv6_HF.so: failed to map segment from shared object
/openhab/userdata/tmp/libNRJavaSerialv6_openhab_0/libNRJavaSerialv6.so: /openhab/userdata/tmp/libNRJavaSerialv6_openhab_0/libNRJavaSerialv6.so: cannot open shared object file: No such file or directory
/openhab/userdata/tmp/libNRJavaSerialv5_openhab_0/libNRJavaSerialv5.so: /openhab/userdata/tmp/libNRJavaSerialv5_openhab_0/libNRJavaSerialv5.so: cannot open shared object file: No such file or directory
java.lang.UnsatisfiedLinkError: gnu.io.RXTXCommDriver.nativeGetVersion()Ljava/lang/String; thrown while loading gnu.io.RXTXCommDriver
java.lang.NoClassDefFoundError: Could not initialize class gnu.io.RXTXCommDriver thrown while loading gnu.io.RXTXCommDriver
java.lang.NoClassDefFoundError: Could not initialize class gnu.io.RXTXCommDriver thrown while loading gnu.io.RXTXCommDriver
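
The "failed to map segment from shared object" errors are what the loader reports when the JVM tries to map a native library with execute permission from a noexec filesystem. A quick way to confirm this on the host is to check the mount options of the directory bound into the container; the path below is a hypothetical OpenMediaVault mount point, substitute your own:

# Show the mount (and its options) backing the host directory bound to /openhab/userdata.
findmnt -T /srv/dev-disk-by-label-data/openhab/userdata -o TARGET,SOURCE,OPTIONS
# If OPTIONS contains "noexec", the libNRJavaSerial*.so files extracted into
# userdata/tmp cannot be mapped executable, which matches the errors above.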

Using a bind mount for userdata/tmp on the host does work as a workaround, but Karaf does not handle a symlinked tmp directory correctly.
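
For reference, a minimal sketch of that workaround; all host paths and the serial device name are hypothetical examples. The extra bind mount for /openhab/userdata/tmp is layered over the userdata mount, so only that one directory has to come from a filesystem without noexec.

# /opt/openhab-tmp sits on the root filesystem, which is not mounted noexec.
docker run -d --name openhab \
  -v /srv/dev-disk-by-label-data/openhab/conf:/openhab/conf \
  -v /srv/dev-disk-by-label-data/openhab/userdata:/openhab/userdata \
  -v /srv/dev-disk-by-label-data/openhab/addons:/openhab/addons \
  -v /opt/openhab-tmp:/openhab/userdata/tmp \
  --device /dev/ttyUSB1 \
  openhab/openhab:latest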

Setting java.io.tmpdir to a path internal to the Docker container, for whatever is currently using userdata/tmp, would resolve this issue. I'm not familiar enough with Docker to know whether an internal mount could be layered on top of the userdata mount, but that could be another approach.
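
Two hedged sketches of those ideas. EXTRA_JAVA_OPTS is an environment variable the openHAB image passes to the JVM, but whether a -Djava.io.tmpdir override actually takes precedence over the tmp directory Karaf sets up is an assumption that would need testing. The tmpfs variant is one way of getting an "internal mount on top of the userdata mount": Docker mounts a container-internal tmpfs over /openhab/userdata/tmp with the exec option set. Host paths are hypothetical.

# Option A (assumes the override beats Karaf's own tmpdir setting):
docker run -d --name openhab \
  -e EXTRA_JAVA_OPTS="-Djava.io.tmpdir=/tmp" \
  -v /srv/dev-disk-by-label-data/openhab/userdata:/openhab/userdata \
  openhab/openhab:latest

# Option B: an exec-capable tmpfs mounted over userdata/tmp inside the container.
docker run -d --name openhab \
  --tmpfs /openhab/userdata/tmp:exec,mode=1777,size=64m \
  -v /srv/dev-disk-by-label-data/openhab/userdata:/openhab/userdata \
  openhab/openhab:latest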

see also:

wborn commented 4 years ago

Why not just remove the noexec flag, or create a filesystem for your Docker volumes that doesn't have it? If you're going to use more Docker containers (e.g. Plex), you'll run into similar issues:

https://forum.openmediavault.org/index.php?thread/17590-why-mount-filesystems-with-noexec/
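
For a one-off test, the flag can be dropped with a remount; the mount point below is a hypothetical OpenMediaVault path. Note that OpenMediaVault generates its mount entries itself, so a persistent change would have to be made through its own configuration rather than by editing /etc/fstab by hand.

# Temporarily remount the data filesystem with exec:
sudo mount -o remount,exec /srv/dev-disk-by-label-data
# Verify that noexec is gone:
findmnt -no OPTIONS /srv/dev-disk-by-label-data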

It'll be easier to give your containers proper filesystems than to debug these issues every time and reconfigure the containers to work around them.

I had a look at my Synology filesystem options and it doesn't have the noexec flag.

hauntingEcho commented 4 years ago

It's another thin layer of security; this Server Fault thread has a bit more detail. I haven't really run into this issue with other containers yet, although I'm using Jellyfin rather than Plex.

Reading back through it, I think the source of my confusion was that add-ons seem to end up in userdata/tmp rather than in the addons folder. I'll split that into a separate ticket, though. It makes sense to me that the addons folder needs to be exec-capable.