Closed scphantm closed 5 years ago
oh, and when I try to update the license file through the UI, I get
Unable to install license. java.io.IOException: File '/opt/jfrog/artifactory/etc/artifactory.lic' exists but is a directory
I calmed down and thought about it some more. I think I see what's happening, I just don't understand why.
artifactory:
  license:
    secret: artifactory-license
    dataKey: license-key
I think something is wrong with the chart and it's not capable of reading the dataKey value within the secret. I think it's similar to what I encountered with postgres, where it was able to load the secret but not pull the data.
here's my secret
apiVersion: v1
data:
  license-key: >-
    {bla bla bla}
kind: Secret
metadata:
  creationTimestamp: '2019-04-08T19:04:11Z'
  name: artifactory-license
  namespace: artifactory
  resourceVersion: '1135586'
  selfLink: /api/v1/namespaces/artifactory/secrets/artifactory-license
  uid: 17dc9bbd-5a31-11e9-af31-0cc47a51e1de
type: Opaque
All of the secrets this chart is having a hard time reading are of type Opaque. Should they be a different type?
So, some more digging.
The secret itself is being mounted in correctly. The secret described above is being mounted as /artifactory_extra_conf/artifactory.lic; when you cat that, you see the actual license. But why is that being brought in correctly, while /opt/jfrog/artifactory/etc/artifactory.lic is all screwy?
artifactory-artifactory-0.log — I don't see anything unusual here either.
Could you post the entire deployment config?
oc describe deploymentconfig
My guess is that the mount point is creating the .lic file as a directory instead of one level up. Is there anything under that artifactory.lic directory?
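For background on that guess: with a Secret volume, the mountPath becomes a single file only when a subPath selects one key; without a subPath, kubelet creates a directory at the mountPath containing one file per key. A sketch of the two forms, using this chart's names:

```yaml
volumeMounts:
  # single-file mount: subPath picks one key out of the secret
  - mountPath: /artifactory_extra_conf/artifactory.lic
    name: artifactory-license
    subPath: license-key
  # directory mount: every key in the secret appears as a file inside
  - mountPath: /artifactory_extra_conf
    name: artifactory-license
```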
None found; not sure this has a deploy config,
but I do have the pod
oc describe pod artifactory-artifactory
Name: artifactory-artifactory-0
Namespace: artifactory
Priority: 0
PriorityClassName: <none>
Node: okdnode6.lab.panasas.com/10.70.9.88
Start Time: Tue, 09 Apr 2019 11:06:07 -0400
Labels: app=artifactory
component=artifactory
controller-revision-hash=artifactory-artifactory-64f59d6f5f
release=artifactory
role=artifactory
statefulset.kubernetes.io/pod-name=artifactory-artifactory-0
Annotations: checksum/binarystore=e423233797d6d4a28bff74cf4225cdf2e604bd43e9350079694e8d8959ed4b9c
openshift.io/scc=hostmount-anyuid
Status: Running
IP: 10.130.0.10
Controlled By: StatefulSet/artifactory-artifactory
Init Containers:
remove-lost-found:
Container ID: docker://04b570fd28868565e231b11e38fb063ae7126ef9abc57f214241092a0179a17a
Image: alpine:3.8
Image ID: docker-pullable://docker.io/alpine@sha256:a4d41fa0d6bb5b1194189bab4234b1f2abfabb4728bda295f5c53d89766aa046
Port: <none>
Host Port: <none>
Command:
sh
-c
rm -rfv /var/opt/jfrog/artifactory/lost+found /var/opt/jfrog/artifactory/data/.lock
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 09 Apr 2019 11:06:12 -0400
Finished: Tue, 09 Apr 2019 11:06:12 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/artifactory-backup from artifactory-backup (rw)
/artifactory-data from artifactory-data (rw)
/var/opt/jfrog/artifactory from artifactory-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
wait-for-db:
Container ID: docker://44d0bd8ad5d62657294b469db5aca11f6e487d62a56676638ba9c62e68f06824
Image: alpine:3.8
Image ID: docker-pullable://docker.io/alpine@sha256:a4d41fa0d6bb5b1194189bab4234b1f2abfabb4728bda295f5c53d89766aa046
Port: <none>
Host Port: <none>
Command:
sh
-c
until nc -z -w 2 10.130.0.8 5432 && echo database ok; do
sleep 2;
done;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 09 Apr 2019 11:06:15 -0400
Finished: Tue, 09 Apr 2019 11:06:15 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
Containers:
artifactory:
Container ID: docker://ad7b7f8abd8a281b22fb6cb885e63855d391673a32f20b45d567e7d040360103
Image: docker.bintray.io/jfrog/artifactory-pro:6.9.0
Image ID: docker-pullable://docker.bintray.io/jfrog/artifactory-pro@sha256:5bd0011c3cdb7adcc00ec5e64751b1fac02d021fb292259c68e44dcdc3972241
Port: 8081/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
mkdir -p /var/opt/jfrog/artifactory/access/etc; cp -Lrf /tmp/access/bootstrap.creds /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; chmod 600 /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; /entrypoint-artifactory.sh
State: Running
Started: Tue, 09 Apr 2019 11:06:18 -0400
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 500m
memory: 1Gi
Environment:
DB_TYPE: postgresql
DB_HOST: 10.130.0.8
DB_PORT: 5432
DB_USER: <set to the key 'user' in secret 'artifactory-postgres'> Optional: false
DB_PASSWORD: <set to the key 'password' in secret 'artifactory-postgres'> Optional: false
ARTIFACTORY_MASTER_KEY: <set to the key 'master-key' in secret 'artifactory-artifactory'> Optional: false
EXTRA_JAVA_OPTIONS: -Xms1g -Xmx4g
Mounts:
/artifactory-backup from artifactory-backup (rw)
/artifactory-data from artifactory-data (rw)
/artifactory_extra_conf/artifactory.lic from artifactory-license (rw)
/artifactory_extra_conf/info/installer-info.json from installer-info (rw)
/tmp/access/bootstrap.creds from access-bootstrap-creds (rw)
/var/opt/jfrog/artifactory from artifactory-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
artifactory-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: artifactory-volume-artifactory-artifactory-0
ReadOnly: false
binarystore-xml:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: artifactory-artifactory-bs
Optional: false
installer-info:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: artifactory-artifactory-installer-info
Optional: false
artifactory-license:
Type: Secret (a volume populated by a Secret)
SecretName: artifactory-license
Optional: false
access-bootstrap-creds:
Type: Secret (a volume populated by a Secret)
SecretName: artifactory-artifactory-bootstrap-creds
Optional: false
artifactory-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: artifactory-artifactory-data-pvc
ReadOnly: false
artifactory-backup:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: artifactory-artifactory-backup-pvc
ReadOnly: false
artifactory-artifactory-token-sdfqw:
Type: Secret (a volume populated by a Secret)
SecretName: artifactory-artifactory-token-sdfqw
Optional: false
QoS Class: Burstable
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
Events: <none>
Name: artifactory-artifactory-nginx-5989c9fdfc-hhvqg
Namespace: artifactory
Priority: 0
PriorityClassName: <none>
Node: okdnode1.lab.panasas.com/10.70.9.83
Start Time: Tue, 09 Apr 2019 11:06:07 -0400
Labels: app=artifactory
component=nginx
pod-template-hash=1545759897
release=artifactory
Annotations: openshift.io/scc=hostmount-anyuid
Status: Running
IP: 10.129.2.45
Controlled By: ReplicaSet/artifactory-artifactory-nginx-5989c9fdfc
Init Containers:
remove-lost-found:
Container ID: docker://48c59a175123db5ac2a6f5525b169e796ccf0b34dccdc9763ec81da7b3810d15
Image: alpine:3.8
Image ID: docker-pullable://docker.io/alpine@sha256:a4d41fa0d6bb5b1194189bab4234b1f2abfabb4728bda295f5c53d89766aa046
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
rm -rfv /var/opt/jfrog/nginx/lost+found
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 09 Apr 2019 11:06:12 -0400
Finished: Tue, 09 Apr 2019 11:06:12 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/opt/jfrog/nginx from nginx-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
wait-for-artifactory:
Container ID: docker://0d701a822b2f902a7074eb901b76def1660fb72f897b5aee3484271991aa23ab
Image: alpine:3.8
Image ID: docker-pullable://docker.io/alpine@sha256:a4d41fa0d6bb5b1194189bab4234b1f2abfabb4728bda295f5c53d89766aa046
Port: <none>
Host Port: <none>
Command:
sh
-c
until nc -z -w 2 artifactory-artifactory 8081 && echo artifactory ok; do
sleep 2;
done;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 09 Apr 2019 11:06:15 -0400
Finished: Tue, 09 Apr 2019 11:06:23 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
Containers:
nginx:
Container ID: docker://ee1632b77aa6a2693ea020af51061e3eda5967c09ee1b433d155d32af7799d1a
Image: docker.bintray.io/jfrog/nginx-artifactory-pro:6.9.0
Image ID: docker-pullable://docker.bintray.io/jfrog/nginx-artifactory-pro@sha256:25b8249a3aa96e9be024829a30717536bc59c741d70505e376b3db3f656e354e
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 09 Apr 2019 11:06:27 -0400
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 09 Apr 2019 11:06:25 -0400
Finished: Tue, 09 Apr 2019 11:06:25 -0400
Ready: True
Restart Count: 1
Limits:
cpu: 250m
memory: 500Mi
Requests:
cpu: 100m
memory: 250Mi
Liveness: http-get http://:80/artifactory/webapp/%23/login delay=60s timeout=10s period=10s #success=1 #failure=10
Readiness: http-get http://:80/artifactory/webapp/%23/login delay=60s timeout=10s period=10s #success=1 #failure=10
Environment:
ART_BASE_URL: http://artifactory-artifactory:8081/artifactory
SSL: true
SKIP_AUTO_UPDATE_CONFIG: false
Mounts:
/var/opt/jfrog/nginx from nginx-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nginx-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: artifactory-artifactory-nginx
ReadOnly: false
artifactory-artifactory-token-sdfqw:
Type: Secret (a volume populated by a Secret)
SecretName: artifactory-artifactory-token-sdfqw
Optional: false
QoS Class: Burstable
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
Events: <none>
see the original comment, I did some ls's on those folders
Wherever that mount point is created, it shouldn't be a file name. Here is an example we have of a pod with a secret creating a file under that location:
Mounts:
/config from environment-properties-mydeployment (rw)
/data from gavc-mydeployment-pv-claim (rw)
/security from security-gavc-mydeployment (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mpvcs (ro)
Volumes:
environment-properties-mydeployment:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: environment-properties-mydeployment
Optional: false
gavc-mydeployment-pv-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: gavc-mydeployment-pv-claim
ReadOnly: false
security-gavc-mydeployment:
Type: Secret (a volume populated by a Secret)
SecretName: security-gavc-mydeployment
Optional: false
default-token-mpvcs:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mpvcs
Optional: false
In that config, the /security directory receives all of the files in the secret with that name.
Yeah, mine is the same. My issue is that the secret is being mounted correctly (I can cat /artifactory_extra_conf/artifactory.lic and see the license), but for some reason it's not being mapped to /opt/jfrog/artifactory/etc/artifactory.lic correctly. There should be a symlink, a copy function, something that puts it there so the artifactory war file can read it in. Something somewhere is creating /opt/jfrog/artifactory/etc/artifactory.lic as a directory.
Keep in mind also, all I'm doing is running the standard Helm chart; I'm not doing anything special here, so I don't understand why it isn't working on my machine. I had the same issue with it trying to connect to postgres as well: it wasn't reading in my secrets correctly and was bumping to the default artifactory user. I got past that one by rebuilding my postgres pod using artifactory, and then it linked up correctly.
@scphantm I am able to reproduce this issue. I will try to figure out the root cause and let you know.
@scphantm The same thing happened to me when I had a typo in the secret name which caused a discrepancy between the name I provided in the values.yaml to the name of the secret I created. When I fixed the typo, everything worked as expected. I would make sure that there's no such typo on your side. If there isn't any, it would be great if you could, as @Neumsy suggested, describe the statefulset (the artifactory release creates a statefulset and not a deployment) and post the output here.
Yeah, checked for typos; none I can find.
oc describe statefulset artifactory
Name: artifactory-artifactory
Namespace: artifactory
CreationTimestamp: Wed, 10 Apr 2019 09:21:22 -0400
Selector: app=artifactory,release=artifactory,role=artifactory
Labels: app=artifactory
chart=artifactory-7.13.7
component=artifactory
heritage=Tiller
release=artifactory
Annotations: <none>
Replicas: 1 desired | 1 total
Update Strategy: RollingUpdate
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=artifactory
component=artifactory
release=artifactory
role=artifactory
Annotations: checksum/binarystore=e423233797d6d4a28bff74cf4225cdf2e604bd43e9350079694e8d8959ed4b9c
Service Account: artifactory-artifactory
Init Containers:
remove-lost-found:
Image: alpine:3.8
Port: <none>
Host Port: <none>
Command:
sh
-c
rm -rfv /var/opt/jfrog/artifactory/lost+found /var/opt/jfrog/artifactory/data/.lock
Environment: <none>
Mounts:
/artifactory-backup from artifactory-backup (rw)
/artifactory-data from artifactory-data (rw)
/var/opt/jfrog/artifactory from artifactory-volume (rw)
wait-for-db:
Image: alpine:3.8
Port: <none>
Host Port: <none>
Command:
sh
-c
until nc -z -w 2 10.130.0.8 5432 && echo database ok; do
sleep 2;
done;
Environment: <none>
Mounts: <none>
Containers:
artifactory:
Image: docker.bintray.io/jfrog/artifactory-pro:6.9.0
Port: 8081/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
mkdir -p /var/opt/jfrog/artifactory/access/etc; cp -Lrf /tmp/access/bootstrap.creds /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; chmod 600 /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; /entrypoint-artifactory.sh
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 500m
memory: 1Gi
Environment:
DB_TYPE: postgresql
DB_HOST: 10.130.0.8
DB_PORT: 5432
DB_USER: <set to the key 'user' in secret 'artifactory-postgres'> Optional: false
DB_PASSWORD: <set to the key 'password' in secret 'artifactory-postgres'> Optional: false
ARTIFACTORY_MASTER_KEY: <set to the key 'master-key' in secret 'artifactory-artifactory'> Optional: false
EXTRA_JAVA_OPTIONS: -Xms1g -Xmx4g
Mounts:
/artifactory-backup from artifactory-backup (rw)
/artifactory-data from artifactory-data (rw)
/artifactory_extra_conf/artifactory.lic from artifactory-license (rw)
/artifactory_extra_conf/info/installer-info.json from installer-info (rw)
/tmp/access/bootstrap.creds from access-bootstrap-creds (rw)
/var/opt/jfrog/artifactory from artifactory-volume (rw)
Volumes:
binarystore-xml:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: artifactory-artifactory-bs
Optional: false
installer-info:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: artifactory-artifactory-installer-info
Optional: false
artifactory-license:
Type: Secret (a volume populated by a Secret)
SecretName: artifactory-license
Optional: false
access-bootstrap-creds:
Type: Secret (a volume populated by a Secret)
SecretName: artifactory-artifactory-bootstrap-creds
Optional: false
artifactory-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: artifactory-artifactory-data-pvc
ReadOnly: false
artifactory-backup:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: artifactory-artifactory-backup-pvc
ReadOnly: false
Volume Claims:
Name: artifactory-volume
StorageClass: managed-nfs-storage
Labels: <none>
Annotations: <none>
Capacity: 20Gi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 7m statefulset-controller create Pod artifactory-artifactory-0 in StatefulSet artifactory-artifactory successful
Name: artifactory-postgres-postgresql
Namespace: artifactory
CreationTimestamp: Mon, 08 Apr 2019 14:32:40 -0400
Selector: app=postgresql,release=artifactory-postgres,role=master
Labels: app=postgresql
chart=postgresql-3.16.1
heritage=Tiller
release=artifactory-postgres
Annotations: <none>
Replicas: 1 desired | 1 total
Update Strategy: RollingUpdate
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=postgresql
chart=postgresql-3.16.1
heritage=Tiller
release=artifactory-postgres
role=master
Init Containers:
init-chmod-data:
Image: docker.io/bitnami/minideb:latest
Port: <none>
Host Port: <none>
Command:
sh
-c
chown -R 1.000090501e+09:1.000090501e+09 /bitnami
if [ -d /bitnami/postgresql/data ]; then
chmod 0700 /bitnami/postgresql/data;
fi
Requests:
cpu: 250m
memory: 256Mi
Environment: <none>
Mounts:
/bitnami/postgresql from data (rw)
Containers:
artifactory-postgres-postgresql:
Image: docker.io/bitnami/postgresql:9.6.11
Port: 5432/TCP
Host Port: 0/TCP
Requests:
cpu: 250m
memory: 256Mi
Liveness: exec [sh -c exec pg_isready -U "artifactory" -d "artifactory" -h localhost] delay=30s timeout=5s period=10s #success=1 #failure=6
Readiness: exec [sh -c exec pg_isready -U "artifactory" -d "artifactory" -h localhost] delay=5s timeout=5s period=10s #success=1 #failure=6
Environment:
PGDATA: /bitnami/postgresql
POSTGRES_USER: artifactory
POSTGRES_PASSWORD: <set to the key 'postgresql-password' in secret 'artifactory-postgres-postgresql'> Optional: false
POSTGRES_DB: artifactory
Mounts:
/bitnami/postgresql from data (rw)
Volumes: <none>
Volume Claims:
Name: data
StorageClass: managed-nfs-storage
Labels: <none>
Annotations: <none>
Capacity: 50Gi
Access Modes: [ReadWriteOnce]
Events: <none>
@scphantm thanks. Can you please post the following:
The yaml manifest for the statefulset, retrieved by: kubectl get statefulset -o yaml
the yaml for the secret and the secret name, retrieved by: kubectl get secret artifactory-license -o yaml
Post what? The values file is in the OP, the statefulset I just posted; what else would you like?
here's the secret
oc describe secret artifactory-license
Name: artifactory-license
Namespace: artifactory
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
license-key: 790 bytes
Sorry @scphantm, updated the comment. Sorry about all the back and forth; it's just hard for me to reproduce, so I'm trying to get all the details.
oc get statefulset -o yaml
apiVersion: v1
items:
- apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: 2019-04-10T13:21:22Z
generation: 1
labels:
app: artifactory
chart: artifactory-7.13.7
component: artifactory
heritage: Tiller
release: artifactory
name: artifactory-artifactory
namespace: artifactory
resourceVersion: "1630849"
selfLink: /apis/apps/v1/namespaces/artifactory/statefulsets/artifactory-artifactory
uid: 88d06ac3-5b93-11e9-ad13-0cc47a51ee18
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: artifactory
release: artifactory
role: artifactory
serviceName: artifactory
template:
metadata:
annotations:
checksum/binarystore: e423233797d6d4a28bff74cf4225cdf2e604bd43e9350079694e8d8959ed4b9c
creationTimestamp: null
labels:
app: artifactory
component: artifactory
release: artifactory
role: artifactory
spec:
containers:
- command:
- /bin/sh
- -c
- |
mkdir -p /var/opt/jfrog/artifactory/access/etc; cp -Lrf /tmp/access/bootstrap.creds /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; chmod 600 /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; /entrypoint-artifactory.sh
env:
- name: DB_TYPE
value: postgresql
- name: DB_HOST
value: 10.130.0.8
- name: DB_PORT
value: "5432"
- name: DB_USER
valueFrom:
secretKeyRef:
key: user
name: artifactory-postgres
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: artifactory-postgres
- name: ARTIFACTORY_MASTER_KEY
valueFrom:
secretKeyRef:
key: master-key
name: artifactory-artifactory
- name: EXTRA_JAVA_OPTIONS
value: ' -Xms1g -Xmx4g '
image: docker.bintray.io/jfrog/artifactory-pro:6.9.0
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- ""
name: artifactory
ports:
- containerPort: 8081
protocol: TCP
resources:
limits:
cpu: "2"
memory: 4Gi
requests:
cpu: 500m
memory: 1Gi
securityContext:
allowPrivilegeEscalation: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/opt/jfrog/artifactory
name: artifactory-volume
- mountPath: /artifactory-data
name: artifactory-data
- mountPath: /artifactory-backup
name: artifactory-backup
- mountPath: /artifactory_extra_conf/artifactory.lic
name: artifactory-license
subPath: license-key
- mountPath: /tmp/access/bootstrap.creds
name: access-bootstrap-creds
subPath: bootstrap.creds
- mountPath: /artifactory_extra_conf/info/installer-info.json
name: installer-info
subPath: installer-info.json
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- -c
- rm -rfv /var/opt/jfrog/artifactory/lost+found /var/opt/jfrog/artifactory/data/.lock
image: alpine:3.8
imagePullPolicy: IfNotPresent
name: remove-lost-found
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/opt/jfrog/artifactory
name: artifactory-volume
- mountPath: /artifactory-data
name: artifactory-data
- mountPath: /artifactory-backup
name: artifactory-backup
- command:
- sh
- -c
- |
until nc -z -w 2 10.130.0.8 5432 && echo database ok; do
sleep 2;
done;
image: alpine:3.8
imagePullPolicy: IfNotPresent
name: wait-for-db
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1030
runAsUser: 1030
serviceAccount: artifactory-artifactory
serviceAccountName: artifactory-artifactory
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: artifactory-artifactory-bs
name: binarystore-xml
- configMap:
defaultMode: 420
name: artifactory-artifactory-installer-info
name: installer-info
- name: artifactory-license
secret:
defaultMode: 420
secretName: artifactory-license
- name: access-bootstrap-creds
secret:
defaultMode: 420
secretName: artifactory-artifactory-bootstrap-creds
- name: artifactory-data
persistentVolumeClaim:
claimName: artifactory-artifactory-data-pvc
- name: artifactory-backup
persistentVolumeClaim:
claimName: artifactory-artifactory-backup-pvc
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: artifactory-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: managed-nfs-storage
status:
phase: Pending
status:
collisionCount: 0
currentReplicas: 1
currentRevision: artifactory-artifactory-64f59d6f5f
observedGeneration: 1
readyReplicas: 1
replicas: 1
updateRevision: artifactory-artifactory-64f59d6f5f
updatedReplicas: 1
- apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: 2019-04-08T18:32:40Z
generation: 1
labels:
app: postgresql
chart: postgresql-3.16.1
heritage: Tiller
release: artifactory-postgres
name: artifactory-postgres-postgresql
namespace: artifactory
resourceVersion: "1129324"
selfLink: /apis/apps/v1/namespaces/artifactory/statefulsets/artifactory-postgres-postgresql
uid: b0eb78ed-5a2c-11e9-ad13-0cc47a51ee18
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: postgresql
release: artifactory-postgres
role: master
serviceName: artifactory-postgres-postgresql-headless
template:
metadata:
creationTimestamp: null
labels:
app: postgresql
chart: postgresql-3.16.1
heritage: Tiller
release: artifactory-postgres
role: master
name: artifactory-postgres-postgresql
spec:
containers:
- env:
- name: PGDATA
value: /bitnami/postgresql
- name: POSTGRES_USER
value: artifactory
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
key: postgresql-password
name: artifactory-postgres-postgresql
- name: POSTGRES_DB
value: artifactory
image: docker.io/bitnami/postgresql:9.6.11
imagePullPolicy: Always
livenessProbe:
exec:
command:
- sh
- -c
- exec pg_isready -U "artifactory" -d "artifactory" -h localhost
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: artifactory-postgres-postgresql
ports:
- containerPort: 5432
name: postgresql
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- exec pg_isready -U "artifactory" -d "artifactory" -h localhost
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 250m
memory: 256Mi
securityContext:
runAsUser: 1000090501
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/postgresql
name: data
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- -c
- |
chown -R 1.000090501e+09:1.000090501e+09 /bitnami
if [ -d /bitnami/postgresql/data ]; then
chmod 0700 /bitnami/postgresql/data;
fi
image: docker.io/bitnami/minideb:latest
imagePullPolicy: Always
name: init-chmod-data
resources:
requests:
cpu: 250m
memory: 256Mi
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/postgresql
name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000090501
terminationGracePeriodSeconds: 30
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: managed-nfs-storage
status:
phase: Pending
status:
collisionCount: 0
currentReplicas: 1
currentRevision: artifactory-postgres-postgresql-766f7b58bc
observedGeneration: 1
readyReplicas: 1
replicas: 1
updateRevision: artifactory-postgres-postgresql-766f7b58bc
updatedReplicas: 1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
oc get secret artifactory-license -o yaml
apiVersion: v1
data:
  license-key: {bla bla bla}
kind: Secret
metadata:
  creationTimestamp: 2019-04-08T19:04:11Z
  name: artifactory-license
  namespace: artifactory
  resourceVersion: "1135586"
  selfLink: /api/v1/namespaces/artifactory/secrets/artifactory-license
  uid: 17dc9bbd-5a31-11e9-af31-0cc47a51e1de
type: Opaque
@scphantm everything looks ok. I looked inside the entrypoint and saw the following:
# Add additional conf files that were mounted to ARTIFACTORY_EXTRA_CONF
addExtraConfFiles () {
logger "Adding extra configuration files to ${ARTIFACTORY_HOME}/etc if any exist"
# If directory not empty
if [ -d "${ARTIFACTORY_EXTRA_CONF}" ] && [ "$(ls -A ${ARTIFACTORY_EXTRA_CONF})" ]; then
logger "Adding files from ${ARTIFACTORY_EXTRA_CONF} to ${ARTIFACTORY_HOME}/etc"
cp -rfv ${ARTIFACTORY_EXTRA_CONF}/* ${ARTIFACTORY_HOME}/etc || errorExit "Copy files from ${ARTIFACTORY_EXTRA_CONF} to ${ARTIFACTORY_HOME}/etc failed"
fi
}
I also looked for the line "Adding files from" in the log files you posted and it doesn't seem to be there. Please check if you can see the line "Adding files from" in the STDOUT of the artifactory container (kubectl logs artifactory-artifactory-0). Also, please post an ls of the /artifactory_extra_conf/ directory. As you can see from the function, all we do here is a simple copy, so if it is not a directory on one side, it shouldn't be a directory on the other side; it would simply have the same file structure.
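The guard and copy from that function can be exercised locally; a minimal sketch where throwaway directories stand in for ARTIFACTORY_EXTRA_CONF and ARTIFACTORY_HOME:

```shell
#!/bin/sh
# Stand-in paths; the real entrypoint uses /artifactory_extra_conf
# and /opt/jfrog/artifactory.
ARTIFACTORY_EXTRA_CONF=$(mktemp -d)
ARTIFACTORY_HOME=$(mktemp -d)
mkdir -p "${ARTIFACTORY_HOME}/etc"
printf 'dummy-license' > "${ARTIFACTORY_EXTRA_CONF}/artifactory.lic"

# Same test as the entrypoint: source directory exists and is non-empty.
if [ -d "${ARTIFACTORY_EXTRA_CONF}" ] && [ "$(ls -A "${ARTIFACTORY_EXTRA_CONF}")" ]; then
  cp -rfv "${ARTIFACTORY_EXTRA_CONF}"/* "${ARTIFACTORY_HOME}/etc" \
    || echo "copy failed"
fi
ls "${ARTIFACTORY_HOME}/etc"
```

With a non-empty source directory and a writable etc/ target, the file lands at ${ARTIFACTORY_HOME}/etc/artifactory.lic.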
I have the method calls in my log
2019-04-10 13:21:34 [262 entrypoint-artifactory.sh] Setting up Access data directories if missing
2019-04-10 13:21:34 [152 entrypoint-artifactory.sh] Adding extra configuration files to /var/opt/jfrog/artifactory/access/etc if any exist
2019-04-10 13:21:34 [273 entrypoint-artifactory.sh] Setting up Replicator data directories if missing
2019-04-10 13:21:34 [163 entrypoint-artifactory.sh] Adding extra configuration files to /var/opt/jfrog/artifactory/replicator/etc if any exist
2019-04-10 13:21:34 [721 entrypoint-artifactory.sh] Adding plugins if exist '/tmp/plugins/internalUser.groovy' -> '/opt/jfrog/artifactory/etc/plugins/internalUser.groovy'
but the path is wrong in the log; I think that's a different log entry.
$ cd artifactory_extra_conf
$ ls -alh
total 8.0K
drwxrwxrwx. 3 artifactory artifactory 41 Apr 10 13:21 .
drwxr-xr-x. 25 root root 4.0K Apr 10 13:21 ..
-rw-r--r--. 1 root artifactory 790 Apr 10 13:21 artifactory.lic
drwxr-xr-x. 2 root root 33 Apr 10 13:21 info
If I cat artifactory.lic, I see my license file correctly.
@scphantm that's a different log entry. I'm looking for "Adding files from"
Doesn't exist. It's not getting inside the if statement.
ok, so please try to evaluate this expression in your container:
[ -d "${ARTIFACTORY_EXTRA_CONF}" ] && [ "$(ls -A ${ARTIFACTORY_EXTRA_CONF})" ]
The value of ARTIFACTORY_EXTRA_CONF is /artifactory_extra_conf:
echo $ARTIFACTORY_EXTRA_CONF
/artifactory_extra_conf
$ ls -A ${ARTIFACTORY_EXTRA_CONF}
artifactory.lic info
$ echo $ARTIFACTORY_HOME
/opt/jfrog/artifactory
$ [ -d "${ARTIFACTORY_EXTRA_CONF}" ] && [ "$(ls -A ${ARTIFACTORY_EXTRA_CONF})" ]
$ [ -d "${ARTIFACTORY_EXTRA_CONF}" ] && echo "Directory ${ARTIFACTORY_EXTRA_CONF} exists."
Directory /artifactory_extra_conf exists.
$
I don't think that method is being called. My clue is this
addExtraConfFiles () {
logger "Adding extra configuration files to ${ARTIFACTORY_HOME}/etc if any exist"
but the log file has
Adding extra configuration files to /var/opt/jfrog/artifactory/access/etc if any exist
if the method is being called, then the log should read
Adding extra configuration files to /opt/jfrog/artifactory/etc if any exist
seeing if
$ echo $ARTIFACTORY_HOME
/opt/jfrog/artifactory
That's weird. Please add the following to your values.yaml file:
artifactory:
  preStartCommand: "sleep 200"
Exec into the container while the sleep is running and check if the directory is still there with the license file. This will require you to delete the release and the PVC so that we have a fresh start.
no change.
I don't think that method is being called. My clue is this
addExtraConfFiles () { logger "Adding extra configuration files to ${ARTIFACTORY_HOME}/etc if any exist"
but the log file has
Adding extra configuration files to /var/opt/jfrog/artifactory/access/etc if any exist
if the method is being called, then the log should read
Adding extra configuration files to /opt/jfrog/artifactory/etc if any exist
seeing if
$ echo $ARTIFACTORY_HOME /opt/jfrog/artifactory
@scphantm This is not the same method being called, this is a different method:
# Add additional conf files that were mounted to ACCESS_EXTRA_CONF
addExtraAccessConfFiles () {
logger "Adding extra configuration files to ${ACCESS_ETC_FOLDER} if any exist"
# If directory not empty
if [ -d "${ACCESS_EXTRA_CONF}" ] && [ "$(ls -A ${ACCESS_EXTRA_CONF})" ]; then
logger "Adding files from ${ACCESS_EXTRA_CONF} to ${ACCESS_ETC_FOLDER}"
cp -rfv ${ACCESS_EXTRA_CONF}/* ${ACCESS_ETC_FOLDER} || errorExit "Copy files from ${ACCESS_EXTRA_CONF} to ${ACCESS_ETC_FOLDER} failed"
fi
}
You can read this script from inside your container. It's in /entrypoint-artifactory.sh.
If you're going to try the sleep thing I mentioned, you can also run the entrypoint yourself in debug mode, e.g.:
bash -x /entrypoint-artifactory.sh
Now, this is very interesting. I did this:
artifactory:
  preStartCommand: 'cp -rfv /artifactory_extra_conf/* /opt/jfrog/artifactory/etc || errorExit "Copy files from /artifactory_extra_conf to /opt/jfrog/artifactory/etc failed"'
and got this at the beginning of my log:
Running custom preStartCommand command
cp: cannot overwrite directory '/opt/jfrog/artifactory/etc/artifactory.lic' with non-directory
'/artifactory_extra_conf/info/installer-info.json' -> '/opt/jfrog/artifactory/etc/info/installer-info.json'
/bin/sh: 1: errorExit: not found
2019-04-10 15:48:06 [733 entrypoint-artifactory.sh] Preparing to run Artifactory in Docker
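That first cp error is easy to reproduce outside the container: if something (here, the stale mount) has already created artifactory.lic as a directory where a regular file belongs, cp refuses the overwrite. A minimal local sketch with stand-in paths (the errorExit failure is separate: that function lives inside the entrypoint script, not in the preStartCommand shell):

```shell
work=$(mktemp -d)
mkdir -p "$work/etc/artifactory.lic"   # a DIRECTORY squatting on the file's name
mkdir -p "$work/extra"
echo "dummy-license" > "$work/extra/artifactory.lic"

# cp maps extra/artifactory.lic onto etc/artifactory.lic, finds a directory there, and bails
if cp -rf "$work/extra/"* "$work/etc" 2>/dev/null; then
    echo "unexpected: cp succeeded"
else
    echo "cp refused, just like in the pod log"
fi
```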
Is this using the same PVC? It's important to note that when you delete a statefulset, the dynamically provisioned PVC is not deleted with it. You have to explicitly delete the PVC using kubectl delete pvc <pvc-name>
So this can still be the same directory on the old PVC, which makes sense
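Assuming Helm 2 (which matches the helm install --name syntax used in this thread), a full reset looks roughly like this; the release and namespace names are the ones from this thread, and the PVC name will vary per install:

```shell
# Helm 2 syntax; delete the release and its leftover PVC before reinstalling
helm delete --purge artifactory
kubectl -n artifactory get pvc                 # find the chart's PVC name
kubectl -n artifactory delete pvc <pvc-name>   # PVCs outlive the statefulset
helm install --name artifactory --values artifactory-values.yaml jfrog/artifactory
```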
Ha, deleted the PVC and ran it again
Running custom preStartCommand command
cp: target '/opt/jfrog/artifactory/etc' is not a directory
/bin/sh: 1: errorExit: not found
2019-04-10 15:55:51 [733 entrypoint-artifactory.sh] Preparing to run Artifactory in Docker
2019-04-10 15:55:51 [734 entrypoint-artifactory.sh] Running as uid=1030(artifactory) gid=1030(artifactory) groups=1030(artifactory)
2019-04-10 15:55:51 [59 entrypoint-artifactory.sh] Dockerfile for this image can found inside the container.
2019-04-10 15:55:51 [60 entrypoint-artifactory.sh] To view the Dockerfile: 'cat /docker/artifactory-pro/Dockerfile.artifactory'.
2019-04-10 15:55:51 [65 entrypoint-artifactory.sh] Checking open files and processes limits
2019-04-10 15:55:51 [68 entrypoint-artifactory.sh] Current max open files is 1048576
2019-04-10 15:55:51 [80 entrypoint-artifactory.sh] Current max open processes is 1048576
2019-04-10 15:55:51 [212 entrypoint-artifactory.sh] Testing directory /var/opt/jfrog/artifactory has read/write permissions for user 'artifactory' (id 1030)
2019-04-10 15:55:52 [237 entrypoint-artifactory.sh] Permissions for /var/opt/jfrog/artifactory are good
2019-04-10 15:55:52 [242 entrypoint-artifactory.sh] Setting up Artifactory data directories if missing
mkdir: created directory '/var/opt/jfrog/artifactory/etc'
2019-04-10 15:55:52 [141 entrypoint-artifactory.sh] Adding extra configuration files to /opt/jfrog/artifactory/etc if any exist
2019-04-10 15:55:52 [145 entrypoint-artifactory.sh] Adding files from /artifactory_extra_conf to /opt/jfrog/artifactory/etc
'/artifactory_extra_conf/artifactory.lic' -> '/opt/jfrog/artifactory/etc/artifactory.lic'
'/artifactory_extra_conf/info' -> '/opt/jfrog/artifactory/etc/info'
'/artifactory_extra_conf/info/installer-info.json' -> '/opt/jfrog/artifactory/etc/info/installer-info.json'
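This second error is also reproducible locally: on the fresh PVC the destination directory presumably did not exist yet when the preStartCommand ran (the entrypoint only creates it later, per the mkdir line in the log above), and cp with multiple sources requires an existing target directory. A sketch with stand-in paths:

```shell
work=$(mktemp -d)
mkdir -p "$work/extra/info"
echo "dummy-license" > "$work/extra/artifactory.lic"
echo '{}' > "$work/extra/info/installer-info.json"

# Multiple sources, destination missing entirely -> "target ... is not a directory"
if cp -rf "$work/extra/"* "$work/opt/etc" 2>/dev/null; then
    echo "unexpected: cp succeeded"
else
    echo "cp refused: the target directory does not exist yet"
fi
```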
all morning i forgot to delete the PVC. ugh. Maybe me putting the stupid things in quotes did it. i dunno.
Cool @scphantm. So is it working now? I do see the logger line in your posted log
yea, it seems to be working now. Thanks
@scphantm - Thanks for confirming. I'll close this now. We have also merged a change (https://github.com/jfrog/charts/pull/294) that adds more options for passing an Artifactory license.
Is this a request for help?: yes
i have been struggling with this for days now. I finally got artifactory to connect to my external postgres (i had to reinstall the postgres container and set the root user to artifactory before it would work), but im beyond that. Now i can't get it to read the stupid license secret. I generated the secret with the same command line that's in the readme file, and created a values yaml pointing at it:
oc create secret generic artifactory-license --from-file=license-key=./artifactory-license.lic
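One way to sanity-check such a secret before installing the chart is to decode the data key directly and confirm it is the license text. This is a hedged sketch; the secret and key names are taken from the commands in this issue:

```shell
# Decode the license-key entry of the secret and show the first characters
oc -n artifactory get secret artifactory-license \
    -o jsonpath='{.data.license-key}' | base64 -d | head -c 80
```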
then i install it with this
helm install --name artifactory --values artifactory-values.yaml jfrog/artifactory
here's my rig
Im running OKD 3.11. Now, when artifactory loads, i get this in the artifactory log
when i terminal into the container i get
Why am i having so much trouble with this? Is this helm chart simply not compatible with OKD/OpenShift? Should i give up and just manually install it with docker directly or something? This was supposed to be simple, and yet it almost seems like the thing is simply incapable of reading secret files. I had to rebuild the postgres container because it refused to read the secrets, telling me to use user int-db-postgres-artifactory rather than artifactory, and now it won't read the license file. Whats going on here?