Closed: rabi112 closed this issue 4 years ago.
@rabi112 is that PostgreSQL pod working? Also, I'm not able to reproduce the error.
@benoitf yes, the PostgreSQL pod is up. Is it related to the user permissions of the persistent volume mount directory?
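For what it's worth, one way to check is to look at the pod and at the ownership of the mounted data directory from inside it. A rough sketch; the namespace, pod name, and data path below are placeholders and depend on the actual deployment:

kubectl get pods -n che
kubectl exec -it <postgres-pod> -n che -- ls -ld /var/lib/pgsql/data

If the directory is owned by root but the container process runs as a non-root user, that would explain the permission error.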
Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close. Mark the issue as fresh with /remove-lifecycle stale in a new comment. If this issue is safe to close now please do so. Moderators: Add the lifecycle/frozen label to avoid stale mode.
Found the same bug when starting the container. My docker-compose.yaml contains:

volumes:
  - ./keycloak/db/:/opt/jboss/keycloak/standalone/data/

So I had to run mkdir -p ./keycloak/db/content before starting the container, which creates the host directory that gets mounted at /opt/jboss/keycloak/standalone/data/, and that worked (see the sketch below). I only noticed this happening after I added x509 certificates to Keycloak to run it over HTTPS.
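For reference, a minimal sketch of that setup, assuming the jboss/keycloak image; the service and image names are illustrative, and the paths are the ones from the compose excerpt above:

# docker-compose.yaml (excerpt)
services:
  keycloak:
    image: jboss/keycloak
    volumes:
      - ./keycloak/db/:/opt/jboss/keycloak/standalone/data/

# create the host directory tree before the first run, so the mount point
# is not created by the Docker daemon as root when the volume is attached
mkdir -p ./keycloak/db/content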
Same bug, I solved the problem with:
chmod 777 ./keycloak/db/ -R
If you run Docker to bring up a server, your user probably doesn't have root privileges (and that is how it should be). The problem is that the folder structure is created with root ownership, so when JBoss tries to create its folder structure inside it, an error is thrown. So just run the chown
command on the directory to give write permission to your user.
For example:
sudo chown $(whoami) -R volume-folder
BUT it's not a good idea to do this in a production environment, because your user may not be a safe user.
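A somewhat safer variant is to hand ownership to the UID the container process actually runs as, rather than to your own login user. The commands below are only a sketch: the container name and host path are placeholders, and UID 1000 is just the usual default for the jboss/keycloak image, so verify it on your system first:

# find the UID/GID the Keycloak process runs as inside the container
docker exec <keycloak-container> id

# give that UID ownership of the mounted host directory
sudo chown -R 1000:1000 ./keycloak/db/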
Describe the bug
Che version
che-server:7.4.0, chectl/7.4.0 linux-x64, node-v10.17.0
Runtime (kubectl version)
Kubernetes version: v1.15.2
Installation method
Environment
Additional context