Open hadfoo opened 8 years ago
I had the same problem and solved it by removing the nexus user commands
RUN useradd -r -u 200 -m -c "nexus role account" -d ${SONATYPE_WORK} -s /bin/false nexus
and
USER nexus
from the Dockerfile and building my own image.
OS: CentOS 7, Docker: 1.12.6, build 96d83a5/1.12.6, Docker Nexus 3 image: c66e39c805c9
Unable to start the Nexus container with the command:
docker run -d -p 8081:8081 -p 5000-5005:5000-5005 -v /var/lib/nexus-data:/nexus-data --name nexus sonatype/nexus3
I created the directory first:
mkdir -p /var/lib/nexus-data/
chown -R 200:200 /var/lib/nexus-data/
The container fails at startup with permission errors:
docker logs -f nexus
bin/nexus: line 80: /nexus-data/.install4j: Permission denied
bin/nexus: line 81: /nexus-data/.install4j: Permission denied
chmod: cannot access '/nexus-data/.install4j': No such file or directory
mkdir: cannot create directory '../sonatype-work/nexus3/log': Permission denied
mkdir: cannot create directory '../sonatype-work/nexus3/tmp': Permission denied
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to No such file or directory
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (No such file or directory)
Unable to update instance pid: Unable to create directory /nexus-data/instances
The only solution I found is to start it with the z volume switch so the container gets access to the mounted directory.
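For reference, the z switch is appended to the volume flag. A minimal sketch of the original run command with the SELinux relabel option (same host path as above):

```shell
# On SELinux-enforcing hosts such as CentOS, the :z suffix asks Docker
# to apply a shared content label to the bind-mounted directory so the
# container process is allowed to access it.
docker run -d -p 8081:8081 -p 5000-5005:5000-5005 \
  -v /var/lib/nexus-data:/nexus-data:z \
  --name nexus sonatype/nexus3
```

Note this only fixes the SELinux label; the directory still needs to be owned by UID 200 for the nexus user to write to it.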
Using su-exec to solve the issue:
install su-exec & tini
start with a run.sh:
chown -R ${NEXUS_USER}:${NEXUS_USER} ${NEXUS_HOME} ${NEXUS_DATA}
exec su-exec ${NEXUS_USER}:${NEXUS_USER} /sbin/tini -- bin/nexus run
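Put together, a run.sh for this approach might look like the sketch below. It assumes the image was built with su-exec and tini installed, that NEXUS_USER, NEXUS_HOME and NEXUS_DATA are set in the image's environment, and that the container starts as root (otherwise chown and su-exec cannot work):

```shell
#!/bin/sh
# run.sh -- entrypoint sketch: fix volume ownership, then drop privileges.
set -e

# Repair ownership of the mounted volume (this part runs as root).
chown -R "${NEXUS_USER}:${NEXUS_USER}" "${NEXUS_HOME}" "${NEXUS_DATA}"

# Re-exec as the unprivileged nexus user, with tini as PID 1 for
# proper signal handling and zombie reaping.
exec su-exec "${NEXUS_USER}:${NEXUS_USER}" /sbin/tini -- bin/nexus run
```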
There is a temporary solution; please refer to https://github.com/OpenShiftDemos/nexus/issues/5#issuecomment-320607240
FROM sonatype/nexus3
RUN chown -R ${NEXUS_USER}:${NEXUS_USER} ${NEXUS_HOME} ${NEXUS_DATA} ${SONATYPE_WORK}
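The derived image can then be built and run in place of the stock one (the tag my-nexus3 is just an example name):

```shell
# Build the derived image that fixes ownership at build time,
# then run it with the same volume mount as before.
docker build -t my-nexus3 .
docker run -d -p 8081:8081 -v /var/lib/nexus-data:/nexus-data --name nexus my-nexus3
```

Note the RUN chown fixes ownership of the paths baked into the image; a bind mount from the host still has to carry the right ownership itself.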
Was running into this issue in Kubernetes, solved by adding a securityContext
to my deployment:
securityContext:
fsGroup: 2000
An ugly fix: run it as the root user: docker run -u 0 ....
Note to future readers: the correct security context is
securityContext:
fsGroup: 200
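You can check that the group was applied by inspecting the pod (assuming the deployment is named nexus):

```shell
# The fsGroup appears as a supplementary group of the container process,
# and files on the mounted volume are group-owned by it.
kubectl exec deploy/nexus -- id
kubectl exec deploy/nexus -- ls -ln /nexus-data
```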
I have created the following volumes with Cockpit:
/data/docker/nexus3/nexus-data:/nexus-data:rw
/data/docker/nexus3/sonatype-work/:/sonatype-work:rw
I am trying to run a Nexus instance on a k8s cluster provided by Rancher 2.x. The problem is that the volume mount for /nexus-data (in my case a PVC bound to a PV pointing to a local node directory) has an ownership of root:root and a mode of drwxr-xr-x. It is sufficient to change the ownership of /nexus-data to nexus:nexus (couldn't this be done in the original Dockerfile? - see also @Yavin 's suggestion, which mandates maintaining a derived image), but as the Dockerfile specifies USER nexus it is, out of the box, impossible to do this e.g. by overriding the command (I have tried to chown the directory in the command, but this fails even with ALL privileges). Therefore there are IMHO two possibilities left, both requiring you to ssh into the node on which the pod is running:
a) Find the directory of the mount in the local file system and change the file mode (assuming you have named the created subdirectory <nexusdir>):
$ find / -xdev 2>/dev/null -name "<nexusdir>"
One of the listed directories is the mounted volume. Then change the permissions with chmod:
$ chmod -R 777 <dir>
As you may have observed, the mode is now 777, which is not a really satisfactory solution. Therefore you may resort to
b) Execute a shell in the container as root and change ownership:
The container command must be changed to sh to let it start.
Get into the container:
$ docker ps | grep nexus_nexus
which gives you the container id. Now execute a shell as follows:
$ docker exec -it -u 0 <containerid> sh
Then execute chown:
$ chown -R nexus:nexus /nexus-data
Then quit, remove the sh command from the pod, and upgrade.
see also:
https://github.com/sonatype/docker-nexus3/blob/master/Dockerfile
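As a less permissive variant of option a), ownership can be changed on the node instead of opening the mode to 777 (a sketch; <dir> is the mount directory found above, and 200 is the nexus UID/GID baked into the image):

```shell
# Give the volume to UID/GID 200 (the in-container nexus user)
# instead of making it world-writable.
sudo chown -R 200:200 <dir>
```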
Thanks @cilindrox !!
Changing the security context doesn't work for me. It works better on k8s with:
initContainers:
- name: volume-mount
image: busybox
command: ["sh", "-c", "chown -R 200:200 /nexus-data"]
volumeMounts:
- name: <your nexus pvc volume name>
mountPath: /nexus-data
Use the below parameter in the deployment file:
spec:
  securityContext:
    fsGroup: 200
I had the same problem and solved it by removing the nexus user commands
RUN useradd -r -u 200 -m -c "nexus role account" -d ${SONATYPE_WORK} -s /bin/false nexus
and
USER nexus
in the Dockerfile and building your own image.
I did the same thing and it solved the problem, but I am not sure whether it would cause security and/or stability issues.
You just need to change the ownership of the directory where you are trying to mount, nothing else.
chown -R 200:200 dir_path
@nik0811 Thank you for posting that. That's exactly the type of command I was trying to figure out. It worked perfectly for me.
The official Docker container description at https://hub.docker.com/r/sonatype/nexus/ mentions:
Mount a host directory as the volume. This is not portable, as it relies on the directory existing with correct permissions on the host.
mkdir /some/dir/nexus-data && chown -R 200 /some/dir/nexus-data
In case you have a separate VM for the Nexus container, you could also change the UID/GID of the host user to 200.
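Such a UID/GID change might be sketched like this (assuming a dedicated host user and group both named nexus; only sensible on a VM dedicated to Nexus, since re-numbering affects every file the user owns):

```shell
# Re-number the host user/group to match the in-container nexus user (200),
# then fix ownership of the data directory.
sudo groupmod -g 200 nexus
sudo usermod -u 200 -g 200 nexus
sudo chown -R 200:200 /some/dir/nexus-data
```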
For us, running on OpenShift, we got it working by using:
spec:
template:
spec:
securityContext:
fsGroup: 200
(And then had to ensure our serviceAccount was permitted to use this ID via the appropriate SCC).
We also tried using fsGroup: 2000; however, we got errors in the logs:
2020-06-05 11:58:31,930+0000 ERROR [main] *SYSTEM com.sonatype.insight.brain.service.NewInstancePopulator - Unable to import Reference Policies from HDS
org.apache.openjpa.persistence.PersistenceException: The transaction has been rolled back. See the nested exceptions for details on the errors that occurred.
at org.apache.openjpa.kernel.BrokerImpl.newFlushException(BrokerImpl.java:2470)
at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:2308)
..........
Suppressed: org.apache.openjpa.persistence.PersistenceException: Database is already closed (to disable automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the db URL) [90121-196]
at org.apache.openjpa.jdbc.sql.DBDictionary.narrow(DBDictionary.java:5250)
at org.apache.openjpa.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:5210)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:134)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:107)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:59)
.........
Caused by: org.h2.jdbc.JdbcSQLException: Database is already closed (to disable automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the db URL) [90121-196]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
... 23 common frames omitted
............
Caused by: org.apache.openjpa.persistence.PersistenceException: Database is already closed (to disable automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the db URL) [90121-196]
at org.apache.openjpa.jdbc.sql.DBDictionary.narrow(DBDictionary.java:5250)
You have to make sure the directory you mount to Nexus has 200:200 as owner as well - so not only the blobs, cache, etc. directories, but also the root directory mounted into the container.
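A quick way to check this on the host (a sketch; substitute your actual mount path for /var/lib/nexus-data):

```shell
# Show the owner of the mount root itself...
stat -c '%u:%g %n' /var/lib/nexus-data
# ...and list anything underneath that is NOT owned by UID 200.
find /var/lib/nexus-data ! -user 200 -print
```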
Try disabling SELinux; it always works for me.
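For example (temporary only: setenforce 0 switches SELinux to permissive mode until the next reboot and weakens host security, so prefer the :z volume label where possible):

```shell
# Check the current SELinux mode, then switch to permissive (not persistent).
getenforce
sudo setenforce 0
```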
FWIW, here's how I solved it with Docker Compose:
version: "3.3"
services:
nexus:
image: sonatype/nexus3:latest
command: ['sh', '-c', 'chown -R 200:200 /nexus-data && $${SONATYPE_DIR}/start-nexus-repository-manager.sh']
I'm using the following working deployment config for k8s. Note there are also some issues with kvaps/nfs-server-provisioner; you should use v1.4.0 with the default NFS v3.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nexus
name: nexus
namespace: dev-tools
spec:
replicas: 1
selector:
matchLabels:
app: nexus
template:
metadata:
labels:
app: nexus
spec:
initContainers:
- name: chown-nexusdata-owner-to-nexus
image: busybox:1.34.1
command: [ "/bin/sh","-c" ]
args: [ "chown -R 200:200 /nexus-data" ]
volumeMounts:
- name: data-vol
mountPath: /nexus-data
containers:
- image: sonatype/nexus3:3.37.0
imagePullPolicy: IfNotPresent
name: nexus
env:
- name: NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP
value: "true"
- name: INSTALL4J_ADD_VM_PARAMS
value: "-Xms1g -Xmx2g -XX:MaxDirectMemorySize=2g -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs"
resources:
requests:
cpu: "0.25"
memory: "512M"
limits:
cpu: "0.5"
memory: "2304M"
ports:
- containerPort: 8081
volumeMounts:
- name: data-vol
mountPath: /nexus-data
securityContext:
runAsUser: 200
volumes:
- name: data-vol
persistentVolumeClaim:
claimName: nexus-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nexus-pvc
namespace: dev-tools
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: nfs
---
apiVersion: v1
kind: Service
metadata:
name: nexus
spec:
ports:
- port: 18082
targetPort: 8081
selector:
app: nexus
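Assuming the three manifests above are saved to a file named nexus.yaml, they can be applied and checked with:

```shell
# Apply the Deployment, PVC and Service, then watch the pod come up.
kubectl apply -f nexus.yaml
kubectl -n dev-tools get pods -w
```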
I worked around the issue using the initContainer solution originally proposed by chz8494. Using fsGroup did not work for me either. I'm deploying on AWS Kubernetes.
I could be mistaken, but it seems to me that this problem has a wide impact. All users that attempt to deploy Nexus3 in a container or container orchestrator with a persistent volume under /nexus-data will waste some time troubleshooting why Nexus can't write to the folder. And preventing the issue seems rather simple on the Sonatype side: it needs just one line in the Dockerfile, as proposed by Yavin.
Therefore it seems to me a low-hanging fruit on the Sonatype side that could save a few minutes (or hours) for everyone deploying Nexus3 on containers for the first time. In the 6 years between the issue being reported and today, I would assume a large number of wasted hours have accumulated among everyone who first deployed it in containers.
Also, I suggest updating the issue description to: Can't start container if /nexus-data is mounted from a persistent volume
If the /sonatype-work volume is mounted from a dedicated data container:
sudo docker create --name nexus-data -v /var/lib/nexus:/sonatype-work busybox
sudo docker create --name nexus-server --volumes-from nexus-data sonatype/nexus
the container cannot start:
sudo docker start nexus-server
The data container, like the server container itself, must be created from the sonatype/nexus docker image, which is weird IMO:
sudo docker create --name nexus-data -v /var/lib/nexus:/sonatype-work sonatype/nexus
As far as I know, Nexus is the only docker image that requires such a constraint.