Closed pcktdmp closed 4 years ago
Mounting a persistent storage volume into /fah is mandatory, not optional. There should be one for each container and it should have the config.xml preloaded - see "Running on a Cluster" in the README.
The container can also be run under any UID, either by Docker (from README):
```
docker run --gpus all --name fah0 -d --user "$(id -u):$(id -g)" \
  --volume $HOME/fah:/fah foldingathome/fah-gpu:latest
```
Or in K8s securityContext or other container orchestrators.
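For Kubernetes, the equivalent of the `--user` flag is a pod-level `securityContext`. A minimal sketch (the UID/GID values, pod name, and claim name are illustrative assumptions, not project defaults):

```yaml
# Pin the UID/GID up front so the client never runs privileged.
apiVersion: v1
kind: Pod
metadata:
  name: fah0
spec:
  securityContext:
    runAsUser: 9999
    runAsGroup: 9999
    fsGroup: 9999        # lets Kubernetes adjust group ownership of the volume
  containers:
    - name: fah
      image: foldingathome/fah-gpu:latest
      volumeMounts:
        - name: fah-data
          mountPath: /fah
  volumes:
    - name: fah-data
      persistentVolumeClaim:
        claimName: fah-data0
```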
The .deb package already does this. It adds a user fahclient
and the client software drops privileges.
There are no services in containers (in general), so the fahclient user is not what we're talking about in this case. The user a container runs as is controlled by docker/K8s/etc. The .deb is only being used to get the FAHClient binary installed and trigger dependencies.
Ah, right, it just runs the client process in the container. You should be able to add the option `<run-as v="fahclient"/>` in the config.xml, or `--run-as=fahclient` on the command line.
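For reference, a minimal config.xml carrying that option might look like the following; the surrounding elements are illustrative, only the `run-as` option comes from the suggestion above:

```xml
<config>
  <!-- drop privileges to this user after start; only meaningful when the
       client starts with enough privilege to switch users -->
  <run-as v="fahclient"/>
</config>
```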
It's the `--user "$(id -u):$(id -g)"` part of the docker command, or the K8s `securityContext`. The fahclient user just kinda tagged along with the .deb.
With "omitted" I meant that you could achieve the same effect with `docker volume create data1 && docker run -ti -v data1:/data alpine df -h`.
In the above example it is assumed that the running UID and GID have permissions on the volume attached inside the container, which is typically not the case when it's a fresh volume (claim) inside Kubernetes:
```
$ docker volume create data1
$ docker run --user "9999:9999" -v data1:/fah -ti foldingathome/fah-gpu:latest
20:18:31:ERROR:Exception: Failed to open 'log.txt': Failed to open 'log.txt': Permission denied: iostream error: Permission denied
```
Hence the `VOLUME` statement inside the Dockerfile can be omitted, since it's being overridden on the command line (in the example) with the `--volume $HOME/fah:/fah` statement.
Meanwhile I have also solved the problem as a whole by using an init container inside Kubernetes that, running privileged, straightens everything out before the unprivileged container is started.
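That init-container approach could be sketched as follows; this is an illustrative example, not the exact manifest used, and the UID/GID, image, and claim name are assumptions:

```yaml
# A short-lived root init container fixes ownership of the fresh volume
# before the unprivileged client container starts.
apiVersion: v1
kind: Pod
metadata:
  name: fah0
spec:
  initContainers:
    - name: fix-perms
      image: busybox
      command: ["sh", "-c", "chown -R 9999:9999 /fah"]
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: fah-data
          mountPath: /fah
  containers:
    - name: fah
      image: foldingathome/fah-gpu:latest
      securityContext:
        runAsUser: 9999
        runAsGroup: 9999
      volumeMounts:
        - name: fah-data
          mountPath: /fah
  volumes:
    - name: fah-data
      persistentVolumeClaim:
        claimName: fah-data0
```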
I'll create a PR for the README.md with a reference in due time when it's of substance. :-)
You are correct: the UID the user chooses to run the container as has to have read-write permissions on the persistent storage directory. The init container is a good solution in your specific case, but may not work in others; e.g. the permissions can be set during setup, while creating one config.xml for each container.
How that is done varies a lot by storage system, but I should have called it out explicitly in the README. I'll fix it in the morning.
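A hedged sketch of that setup step on a plain host: prepare one data directory per container before first run, so the chosen UID can write to it. The directory names, loop bounds, and config.xml contents below are illustrative assumptions, not project defaults.

```shell
# Create and preload one data directory per container.
BASE="${FAH_BASE:-/tmp/fah-demo}"
for i in 0 1; do
  dir="$BASE/fah$i"
  mkdir -p "$dir"
  # preload a minimal config.xml (contents illustrative)
  printf '<config>\n  <user v="anonymous"/>\n</config>\n' > "$dir/config.xml"
  # make sure the UID the container will run as can read and write here;
  # on a real multi-user host this might instead be: chown -R 9999:9999 "$dir"
  chmod -R u+rwX "$dir"
done
ls "$BASE"
```

Each directory can then be mounted into its own container with `--volume $BASE/fah0:/fah` and so on.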
Thanks.
Compute clusters tend to share resources across mixed workloads, of which Folding@home could be one.
To guarantee higher integrity of the compute cluster's security, and mainly the integrity of the data, workloads (containers) should not run in privileged mode.
In the case of Kubernetes this means it needs to be known up front, inside the resource specification, which UID and GID the process will run under.
The `VOLUME` statement inside the container image could be omitted if the volume is attached at runtime via the `docker` command instead of being defined in the Dockerfile, which makes the image portable across schedulers. Dropping privileges can also be done by creating an `entrypoint.sh` wrapper script, but that makes the solution less elegant and also rules out Kubernetes support with a (stricter) `securityContext` set, since the container starts as root and only later drops privileges, outside of Kubernetes's line of sight.
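For illustration, the root-then-drop entrypoint pattern being argued against might look like the sketch below. The path, directory, and user names are illustrative assumptions; a real image would use a helper such as gosu or su-exec to switch users, while plain `exec` keeps this sketch runnable anywhere.

```shell
# Write a minimal entrypoint.sh demonstrating the pattern, then run it.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
set -e
FAH_DIR="${FAH_DIR:-/fah}"
# As root, fix ownership of the data directory...
chown -R fahclient "$FAH_DIR" 2>/dev/null || echo "chown skipped"
# ...then hand off to the client. A real image would drop privileges here
# with gosu/su-exec; that privilege drop is invisible to Kubernetes.
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh
/tmp/entrypoint.sh echo "client started"
```

The drawback is exactly as described above: the container must start as root for the `chown` to work, which a strict `securityContext` forbids.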