Closed geodose closed 5 months ago
This seems to be causing folks a bad time. It's partly my fault: I treat containers as throwaway assets and never interface with them directly, so I don't care who owns files, since things like backups happen outside of the container environment anyway. I just don't want things running as root, for security reasons.
I will take a look at making this optionally definable for folks who need that flexibility.
The better fix is for folks to mount their backend storage (where backups, replication, etc. happen) at the location where docker/podman creates volumes, and then have docker/podman create the volume instead of bind-mounting a directory. That would also solve this issue.
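As a rough sketch of that approach (the image name, volume name, and paths here are placeholders, not anything specific to this project):

```shell
# With a named volume, docker manages the storage under its own
# volume root (typically /var/lib/docker/volumes), where the
# backend storage would be mounted. Docker seeds ownership from
# the image, so host-side UID/GID mismatches don't bite:
docker volume create app-data
docker run -d -v app-data:/data example/image

# Compare a bind mount, where the host directory's ownership
# must already line up with the UID/GID the container runs as:
docker run -d -v "$(pwd)/data:/data" example/image
```

The difference is that a named volume lets the container engine initialize permissions, while a bind mount exposes whatever ownership the host directory already has.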
@geodose how would you feel if I made the change a build argument and you built the image locally with the UID and GID you need? I have been trying to build something that works without a system like supervisor to broker the processes, and I just don't think it is possible. I have tried to be creative, but I always land on the root process being root, and at that point I might as well migrate everything to being managed by something like supervisor.
My latest release allows you to pass a build argument to the image to specify the UID and GID. So if you'd like to run the container processes as a different user, you can build the image locally and set those IDs to what you prefer.
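A minimal sketch of how such build arguments typically work (the `ARG` names, base image, and user name below are assumptions for illustration; check the project's Dockerfile for the actual names):

```dockerfile
# Hypothetical excerpt; the real Dockerfile may use different ARG names.
FROM alpine:3.19
ARG APP_UID=10000
ARG APP_GID=10000
# Create the group and user with the IDs supplied at build time
RUN addgroup -g "${APP_GID}" app \
 && adduser -D -u "${APP_UID}" -G app app
USER app
```

You would then build locally with something like `docker build --build-arg APP_UID=1000 --build-arg APP_GID=1000 -t myimage:local .`, substituting the IDs that match your host's user and group.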
Would it be possible to expose the container permission configuration as environment variables instead of hard-locking it to 10000/10000? I'm having trouble integrating this into TrueNAS SCALE, and I suspect it would be tough on container hosts that are already set up with existing users/groups.