KnightDoom opened 2 weeks ago
Hi, I was having the same issue in my cluster, and applying the workaround stopped the pod from going into a crash loop. However, it seems that when you try to create a lane or a card, the server still tries to execute a chown, which produces an error.
I think it would be easier to change the user that executes all commands inside the container to something other than root, so that the directories/files already have the correct owner when they are created.
I am running tests in a local environment with Docker, using the --user flag
to set the user and group that execute commands inside the container, and it seems to work once the chown calls are removed. I will run some more tests (also in Kubernetes) and open a PR with the changes if that's OK with you.
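For reference, the local test described above looks roughly like this (the image name and port come from the manifest later in this thread; the host volume paths are placeholders for your own setup):

```shell
# Run the container as uid:gid 1000:1000 instead of root, so any files
# created in the mounted /tasks and /config volumes are owned by that
# user and no chown is needed afterwards.
docker run --rm \
  --user 1000:1000 \
  -p 8080:8080 \
  -v "$PWD/tasks:/tasks" \
  -v "$PWD/config:/config" \
  baldissaramatheus/tasks.md
```

Note that with --user the host directories must already be writable by uid 1000, since the container no longer has root privileges to fix ownership itself.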
Great app btw
I don't seem to have that issue @jgomez14. Exploring the filesystem shows proper ownership by my user ID of 568. You are correct that a chown is done in the app, but if you try to chown a file/dir already owned by the container user, the chown passes without a hitch.
I would be concerned that the deployment isn't correct, or that the persistent storage backing the deployment is creating new files with the wrong owner.
Can you share your deployment.yaml?
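The point about chown being harmless can be demonstrated in a plain POSIX shell, independent of tasks.md: chown-ing a file to the uid/gid that already owns it succeeds even for an unprivileged user.

```shell
# Create a temp file (owned by us), then chown it to our own uid:gid.
# This succeeds without root, which is why the in-app chown is a no-op
# when PUID/PGID already match the actual owner of the files.
f=$(mktemp)
chown "$(id -u):$(id -g)" "$f" && echo "chown to current owner: ok"
rm -f "$f"
```

It only fails when the target uid/gid differs from the current owner and the caller lacks the privilege to change it, which is the crash-loop case described above.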
Hi, while reviewing the deployment and pod I discovered that, for some reason, one of the PUID/PGID env variables was not loaded into the pod, and that was the cause of the problem. After recreating the deployment, everything works fine.
I have also set a security context so the pod runs as a regular user instead of root, though I don't know if it is necessary. The env variables and the security context are both set to 1000:1000. This is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tasks-md
  namespace: tasks-md
  labels:
    app: tasks-md
spec:
  selector:
    matchLabels:
      app: tasks-md
  replicas: 1
  template:
    metadata:
      labels:
        app: tasks-md
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
      containers:
        - name: tasks-md
          image: baldissaramatheus/tasks.md
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - -c
          args:
            - |
              mkdir -p /config/stylesheets && echo "stylesheets directory created"
              mkdir -p /config/images && echo "images directory created"
              mkdir -p /config/sort && echo "sort directory created"
              cp -r /config/stylesheets/. /stylesheets/ && echo "copied config_stylesheets to stylesheets"
              cp -r /stylesheets/. /config/stylesheets && echo "copied stylesheets to config_stylesheets"
              node /api/server.js
          env:
            - name: TITLE
              valueFrom:
                configMapKeyRef:
                  name: tasks-md
                  key: TITLE
            - name: PUID
              valueFrom:
                configMapKeyRef:
                  name: tasks-md
                  key: PUID
            - name: PGID
              valueFrom:
                configMapKeyRef:
                  name: tasks-md
                  key: PGID
          ports:
            - containerPort: 8080
              name: http
          volumeMounts:
            - name: tasks
              mountPath: /tasks
            - name: config
              mountPath: /config
      volumes:
        - name: tasks
          nfs:
            server: ip-address
            path: /tasks.md/tasks
        - name: config
          nfs:
            server: ip-address
            path: /tasks.md/config
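Since the root cause here was an env variable silently missing from the pod, a quick sanity check after applying the manifest is worth doing (the deployment and namespace names below are taken from the manifest above):

```shell
# Verify the PUID/PGID env vars actually reached the running pod.
kubectl -n tasks-md exec deploy/tasks-md -- env | grep -E '^(PUID|PGID)='

# Verify the securityContext took effect: should report uid=1000 gid=1000.
kubectl -n tasks-md exec deploy/tasks-md -- id
```

If the grep prints nothing, the ConfigMap key is missing or misnamed and the pod is running without the variable, which reproduces the original crash loop.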
Thank you!
Thanks for creating this app.
Wanted to share my experience with deploying this application in my K3S cluster. Basically, the entry point of this application uses chown to ensure that files within the container are owned by the PUID and PGID provided via env variables.
For some reason, K3S would not allow the use of chown without granting the container privileged access. This caused the container to restart continuously.
After ensuring that copying from the image to the tasks/config directories retained/set the PUID/PGID correctly, the following overrides were placed in the deployment.yaml;
additionally, some logging was added to see whether any step failed.