jorenn92 / Maintainerr

Looks and smells like Overseerr, does the opposite. Maintenance tool for the Plex ecosystem
https://maintainerr.info
MIT License

Synology Permission Query - Could not create or access (files in) the data directory. Please make sure the necessary permissions are set #1081

Closed Sn3ider closed 3 months ago

Sn3ider commented 3 months ago

Describe the bug: Synology DS920, DSM 7.2.1-69057 Update 5, Container Manager. Maintainerr does not have access to the data folder to save its configuration. The user account is set by adding PUID and PGID; the IDs are correct and were checked through SSH. With my admin IDs the issue remains the same. Starting as a privileged user has no effect. Privilege inspector on the folder shows R/W rights for both the user and the group; adding them as owners makes no difference.

I redacted the PUID and PGID in the following configuration.

Configuration:

```json
{
  "CapAdd": [],
  "CapDrop": [],
  "cmd": "",
  "cpu_priority": 50,
  "enable_publish_all_ports": false,
  "enable_restart_policy": true,
  "enable_service_portal": null,
  "enabled": false,
  "env_variables": [
    { "key": "PATH", "value": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" },
    { "key": "NODE_VERSION", "value": "20.11.1" },
    { "key": "YARN_VERSION", "value": "1.22.19" },
    { "key": "NODE_ENV", "value": "production" },
    { "key": "DEBUG", "value": "false" },
    { "key": "GIT_SHA", "value": "995d01eb2a212f0167b96c274a26137c3f395b3a" },
    { "key": "VERSION_TAG", "value": "stable" },
    { "key": "YARN_INSTALL_STATE_PATH", "value": "/tmp/.yarn/install-state.gz" },
    { "key": "YARN_GLOBAL_FOLDER", "value": "/tmp/.yarn/global" },
    { "key": "YARN_CACHE_FOLDER", "value": "/tmp/.yarn/cache" },
    { "key": "UV_USE_IO_URING", "value": "0" },
    { "key": "HOME", "value": "/" },
    { "key": "PUID", "value": "" },
    { "key": "PGID", "value": "" },
    { "key": "TZ", "value": "Europe/London" }
  ],
  "exporting": false,
  "id": "075a7425dc4ba798aa38f8dd8c1dfaf98952bb47b088dd1afcfe3b0d5a226133",
  "image": "jorenn92/maintainerr:latest",
  "is_ddsm": false,
  "is_package": false,
  "labels": {
    "org.opencontainers.image.created": "2024-03-25T14:41:52.660Z",
    "org.opencontainers.image.description": "Looks and smells like Overseerr, does the opposite. Maintenance tool for the Plex ecosystem",
    "org.opencontainers.image.licenses": "MIT",
    "org.opencontainers.image.revision": "995d01eb2a212f0167b96c274a26137c3f395b3a",
    "org.opencontainers.image.source": "https://github.com/jorenn92/Maintainerr",
    "org.opencontainers.image.title": "Maintainerr",
    "org.opencontainers.image.url": "https://github.com/jorenn92/Maintainerr",
    "org.opencontainers.image.version": "main"
  },
  "links": [],
  "memory_limit": 0,
  "name": "Maintainerr",
  "network": [ { "driver": "bridge", "name": "arrbridge" } ],
  "network_mode": "arrbridge",
  "port_bindings": [ { "container_port": 6246, "host_port": 6246, "type": "tcp" } ],
  "privileged": false,
  "service_portals": [],
  "shortcut": {
    "enable_shortcut": false,
    "enable_status_page": false,
    "enable_web_page": false,
    "web_page_url": ""
  },
  "use_host_network": false,
  "version": 2,
  "volume_bindings": [
    {
      "host_volume_file": "/docker/maintainerr/opt/data",
      "is_directory": true,
      "mount_point": "/opt/data",
      "type": "rw"
    }
  ]
}
```

BaukeZwart commented 3 months ago

Is your volume mount set correctly? It needs to be something like /volume1/docker/appdata/maintainerr:/opt/data. And make sure /volume1/docker/appdata/maintainerr (or whatever path you use) exists before starting the container.
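A minimal compose-style sketch of that mapping (the appdata path is the example from this comment; adjust the volume number and path to your setup):

```yaml
services:
  maintainerr:
    image: jorenn92/maintainerr:latest
    ports:
      - 6246:6246
    volumes:
      # the host directory must exist before the container starts
      - /volume1/docker/appdata/maintainerr:/opt/data
```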

Sn3ider commented 3 months ago

@BaukeZwart Thanks for the quick reply. The mount volume is selected from the Synology Container Manager GUI, and volume1 was never added there. The volume mounting shows correctly within the UI, so that shouldn't be a problem. [screenshot]

Strange that volume1 is not showing when I export the config.

jorenn92 commented 3 months ago

The volume should exist on your host before starting the container. Also, make sure the root volume directory (in your case: /volume1/docker/maintainerr/opt/data) is owned by the correct owner and group.

Sn3ider commented 3 months ago

@jorenn92 Thanks for jumping in.

1. Volume exists; I used Maintainerr before v2.0.

2. The root volume directory (/volume1/docker/maintainerr/opt/data) has the docker user set up as owner.

In the editor it only allows me to assign either the group or the user. I tried with the user and with the group set as the owner; no luck. [screenshot]

3. Setup

[screenshots]

4. With the root account I SSH'd into the NAS and used chown to set the user and group as owner, yet exactly the same issue.

[screenshot]

This is a typical example that I am missing a minor thing which is causing me a major headache :D

BaukeZwart commented 3 months ago

Can't help you any further. I never used Container Manager; I do everything with docker-compose on my Synology.

jorenn92 commented 3 months ago

You could try to temporarily test 777 permissions. If it still won’t come up, there’s something else at play. If it does, you’d need to figure out why maintainerr’s user is not able to write in the folder.
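A sketch of that temporary diagnostic, shown here against a scratch path so it is safe to run anywhere; on the NAS you would substitute the real host data directory (e.g. /volume1/docker/maintainerr/opt/data) and run it as root over SSH:

```shell
#!/bin/sh
# Temporarily open the data directory wide, then restart the container and
# see whether Maintainerr comes up. World-writable is for diagnosis only;
# revert the permissions afterwards.
DATA_DIR="/tmp/maintainerr-777-test"   # scratch path for this demo
mkdir -p "$DATA_DIR"
chmod -R 777 "$DATA_DIR"
stat -c '%a' "$DATA_DIR"
```

If the app starts with 777 but not before, the problem is ownership/permissions rather than the application itself.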

Sn3ider commented 3 months ago

> Can't help you any further. I never used Container Manager; I do everything with docker-compose on my Synology.

@BaukeZwart I thought Docker on Synology could only be managed with Portainer or Container Manager. Are you referring to the user-defined-script approach, or is there another method I'm not aware of? :D

EDIT: I was overthinking it. It's simply using Container Manager with a Project, rather than installing the image directly with variables.

Sn3ider commented 3 months ago

> You could try to temporarily test 777 permissions. If it still won’t come up, there’s something else at play. If it does, you’d need to figure out why maintainerr’s user is not able to write in the folder.

@jorenn92 Tested. With 777 it works like a charm. Here comes the hard part, figuring out why on earth my user does not have access while it is set to be the owner of this folder.

Sn3ider commented 3 months ago

Starting the container from the Container Manager image

Issue identified. The UID:GID (PUID:PGID) is 1000:1000 even if I set UID and GID as environment variables. I even added user as a variable, which still had the same issue. This caused the access problem, since the default user obviously should not have access to the given folder.

I verified it by opening a container terminal with /bin/sh and running id. How-to: https://www.youtube.com/watch?v=995uUSleHsg
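The check boils down to running `id` inside the container, e.g. `docker exec -it maintainerr /bin/sh` and then `id` (the container name here is an example). A minimal illustration of what `id` reports, runnable in any shell:

```shell
#!/bin/sh
# 'id -u' prints the effective numeric UID of the current process. Inside
# the container, the thread saw uid=1000 gid=1000 even though PUID/PGID
# were set as environment variables.
id -u
```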

Using a Project and compose file

I used chmod and chown to recursively change the /opt/data directory to 0755, which did the trick. As a result, the user defined in the compose file could read/write the mounted folder, and it even picked up the original configuration.
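A sketch of that fix, run against a scratch path so it is safe to execute anywhere; on the NAS the real path would be the mounted data directory (e.g. /volume1/docker/maintainerr/opt/data), run over SSH as root:

```shell
#!/bin/sh
# Recursively set 0755 (owner rwx, group/other r-x) on the data directory,
# so the owning UID can write while others can only read/traverse.
DATA_DIR="/tmp/maintainerr-demo/opt/data"   # scratch path for this demo
mkdir -p "$DATA_DIR"
chmod -R 0755 "/tmp/maintainerr-demo"
# On the NAS you would also chown to the UID:GID the container runs as:
#   chown -R 1000:1000 /volume1/docker/maintainerr/opt/data
ls -ld "$DATA_DIR"
```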

Conclusion

The variables are not amending the UID and GID, and the container starts with the default 1000:1000. Currently the only workable solution is the compose file approach mentioned by @BaukeZwart.

@jorenn92 would it be possible to support UID and GID as environment variables in a future release? It would help with the default Synology Container Manager, where you could simply add them during installation without having to tweak a compose file.

Thanks :)

Sn3ider commented 3 months ago

I am marking this as complete, as the original issue has been investigated and resolved. A feature request would be good for adding PUID and PGID as environment variables.

Sn3ider commented 3 months ago

Feature request: https://github.com/jorenn92/Maintainerr/discussions/1084

jorenn92 commented 3 months ago

@Sn3ider, sorry to bring this issue up again, but weren't you able to use Docker's built-in 'user' directive? We opted to support this mechanism instead of PUID and PGID at some point, as you still have to start the container as root using that approach.

With the user directive, the image is fully rootless from the start. The user you specify in this directive has access to the files and folders of the 'node' user, which is why the file ownership doesn't change.
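A hedged compose sketch of the `user` directive approach described above (the 1026:100 IDs and paths are examples; substitute your own UID:GID and host path):

```yaml
services:
  maintainerr:
    image: jorenn92/maintainerr:latest
    # Docker's built-in user directive: the container process runs as this
    # uid:gid from the start, with no root stage needed
    user: "1026:100"
    volumes:
      - /volume1/docker/appdata/maintainerr:/opt/data
    ports:
      - 6246:6246
```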

Sn3ider commented 3 months ago

> @Sn3ider, sorry to bring this issue up again, but weren't you able to use Docker's built-in 'user' directive? We opted to support this mechanism instead of PUID and PGID at some point, as you still have to start the container as root using that approach.
>
> With the user directive, the image is fully rootless from the start. The user you specify in this directive has access to the files and folders of the 'node' user, which is why the file ownership doesn't change.

@jorenn92 technically there is no default user in Synology Container Manager, or at least I could not find the default IDs on the Synology support site.

The UID and GID need to be defined up front for every container if I want the container to save to shared locations. However, if I don't define any IDs, the 1000:1000 user is used. That is not a default user on my Synology NAS. I could SSH in and add the necessary permissions to the folders, but I don't want to grant access where I have no proper control over it.