jfrog / charts

JFrog official Helm Charts
https://jfrog.com/integration/helm-repository/
Apache License 2.0

License key not found #297

Closed. scphantm closed this issue 5 years ago

scphantm commented 5 years ago

Is this a request for help?: yes

I have been struggling with this for days now. I finally got Artifactory to connect to my external Postgres (I had to reinstall the Postgres container and set the root user to artifactory before it would work), but I'm past that. Now I can't get it to read the stupid license secret.

I generated the secret with the same command line that's in the README file:

oc create secret generic artifactory-license --from-file=license-key=./artifactory-license.lic

Then I created this values YAML:

artifactory:
  license:
    secret: artifactory-license
    dataKey: license-key
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2
      memory: 4Gi
  javaOpts:
    xms: 1g
    xmx: 4g
  persistence:
    type: nfs
    storageClass: managed-nfs-storage
    nfs:
      ip: "10.65.225.11"
      dataDir: "/artifactory-data"
      backupDir: "/artifactory-backup"
      capacity: "1000Gi"

nginx:
  resources:
    requests:
      cpu: 100m
      memory: 250Mi
    limits:
      cpu: 250m
      memory: 500Mi
  persistence:
    enabled: true
    size: 5Gi
    storageClass: managed-nfs-storage

postgresql:
  enabled: false

database:
  type: postgresql
  host: 10.130.0.8
  port: 5432
  secrets:
    user:
      name: artifactory-postgres
      key: user
    password:
      name: artifactory-postgres
      key: password

Then I install it with:

helm install --name artifactory --values artifactory-values.yaml jfrog/artifactory
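
As a side note, one way to sanity-check that the secret really contains the expected key before installing; this is a minimal sketch using generic oc/jsonpath/base64 flags, nothing chart-specific:

# print the first characters of the decoded license stored under the 'license-key' key
oc get secret artifactory-license -n artifactory -o jsonpath='{.data.license-key}' | base64 -d | head -c 80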

Here's my rig:

$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

I'm running OKD 3.11. Now, when Artifactory loads, I get this in the Artifactory log:

2019-04-08 19:44:04,262 [art-init] [ERROR] (o.a.m.s.MetadataEventServiceImpl:188) - Unable to init the Metadata client. The Metadata Event pipeline will be disabled.
2019-04-08 19:44:04,275 [art-init] [ERROR] (o.a.a.e             :55) - Unable to read license file
2019-04-08 19:44:04,447 [art-init] [INFO ] (o.j.c.w.ConfigurationManagerImpl:445) - Replacing temporary DB channel with permanent DB channel
2019-04-08 19:44:04,448 [art-init] [INFO ] (o.j.c.w.ConfigurationManagerImpl:445) - Successfully closed temporary DB channel
2019-04-08 19:44:04,449 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:504) - Artifactory application context set to READY by refresh
2019-04-08 19:44:04,556 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:1468) - Successful register of Artifactory serviceId jfrt@01d7z4zer4c6rw1q17d38m0vx0 in Access Federation
2019-04-08 19:44:04,572 [art-init] [INFO ] (o.a.m.s.MetadataMigrationHelper:162) - Current operator cursor is at '0', and migration target is '4'. Starting / continuing migration.
2019-04-08 19:44:04,578 [art-init] [INFO ] (o.a.e.w.NodeEventTaskManagerImpl:50) - Background migration started on behalf of Event Operator with ID 'metadata-operator'
2019-04-08 19:44:04,584 [art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:215) -
###########################################################
### Artifactory successfully started (57.945 seconds)   ###
###########################################################

2019-04-08 19:44:09,566 [art-exec-5] [ERROR] (o.a.a.e             :55) - Unable to read license file

When I open a terminal in the container, I get:

$ cd /opt/jfrog/artifactory/etc/
$ ls -alh
total 1.1M
drwxr-xr-x. 7 artifactory artifactory 4.0K Apr  8 19:42 .
drwxrwxrwx. 9 root        root        4.0K Apr  8 19:42 ..
-rw-r-----. 1 artifactory artifactory  33K Apr  8 18:57 artifactory.config.latest.1554749839000.xml
-rw-r-----. 1 artifactory artifactory  33K Apr  8 18:57 artifactory.config.latest.xml
drwxr-xr-t. 3 artifactory artifactory 4.0K Apr  8 18:56 artifactory.lic
-rw-r-----. 1 artifactory artifactory  864 Apr  8 19:44 artifactory.properties
-rw-r-----. 1 artifactory artifactory  13K Apr  8 18:56 artifactory.system.properties
-rw-r-----. 1 artifactory artifactory 1.2K Apr  8 18:56 binarystore.xml
-rw-r--r--. 1 artifactory artifactory  920 Apr  8 19:44 db.properties
drwxr-xr-x. 2 artifactory artifactory 4.0K Apr  8 18:56 info
-rw-r-----. 1 artifactory artifactory  16K Apr  8 18:56 logback.xml
-rw-r-----. 1 artifactory artifactory 5.7K Apr  8 18:56 mimetypes.xml
drwxr-xr-x. 2 artifactory artifactory 4.0K Apr  8 18:56 plugins
drwx------. 3 artifactory artifactory 4.0K Apr  8 19:44 security
drwxr-x---. 2 artifactory artifactory 4.0K Apr  8 18:56 ui
$ cd artifactory.lic
$ ls -alh
total 288K
drwxr-xr-t. 3 artifactory artifactory 4.0K Apr  8 18:56 .
drwxr-xr-x. 7 artifactory artifactory 4.0K Apr  8 19:42 ..
drwxr-xr-x. 2 artifactory artifactory 4.0K Apr  8 19:50 ..2019_04_08_18_56_01.254719901
lrwxrwxrwx. 1 artifactory artifactory   31 Apr  8 18:56 ..data -> ..2019_04_08_18_56_01.254719901
lrwxrwxrwx. 1 artifactory artifactory   34 Apr  8 18:56 artifactory.cluster.license -> ..data/artifactory.cluster.license

Why am I having so much trouble with this? Is this Helm chart simply not compatible with OKD/OpenShift? Should I give up and just install it manually with Docker or something? This was supposed to be simple, and yet it almost seems like the thing is simply incapable of reading secret files. I had to rebuild the Postgres container because it refused to read the secrets and tried to use the user int-db-postgres-artifactory rather than artifactory, and now it won't read the license file.

What's going on here?

scphantm commented 5 years ago

Oh, and when I try to update the license file through the UI, I get:

Unable to install license. java.io.IOException: File '/opt/jfrog/artifactory/etc/artifactory.lic' exists but is a directory

scphantm commented 5 years ago

I calmed down and thought about it some more. I think I see what's happening, I just don't understand why.

artifactory:
  license:
    secret: artifactory-license
    dataKey: license-key

I think something is wrong with the chart and it's not capable of reading the dataKey value within the secret. I think it's similar to what I encountered with Postgres, where it was able to load the secret but not pull the data.

Here's my secret:

apiVersion: v1
data:
  license-key: >-
    {bla bla bla}
kind: Secret
metadata:
  creationTimestamp: '2019-04-08T19:04:11Z'
  name: artifactory-license
  namespace: artifactory
  resourceVersion: '1135586'
  selfLink: /api/v1/namespaces/artifactory/secrets/artifactory-license
  uid: 17dc9bbd-5a31-11e9-af31-0cc47a51e1de
type: Opaque

All of the secrets this chart is having a hard time reading are of type Opaque. Should they be a different type?

scphantm commented 5 years ago

So, some more digging.

The secret itself is being mounted correctly. The secret described above is mounted as

/artifactory_extra_conf/artifactory.lic

When you cat that, you see the actual license. But why is that being brought in correctly while /opt/jfrog/artifactory/etc/artifactory.lic is all screwy?

scphantm commented 5 years ago

artifactory-artifactory-0.log: I don't see anything unusual here either.

Neumsy commented 5 years ago

Could you post the entire deployment config?

oc describe deploymentconfig

My guess is the mount point is creating the .lic file as a directory instead of mounting it one level up. Is there anything under that artifactory.lic directory?

scphantm commented 5 years ago

None found; I'm not sure this has a deployment config.

scphantm commented 5 years ago

But I do have the pod:

oc describe pod artifactory-artifactory
Name:               artifactory-artifactory-0
Namespace:          artifactory
Priority:           0
PriorityClassName:  <none>
Node:               okdnode6.lab.panasas.com/10.70.9.88
Start Time:         Tue, 09 Apr 2019 11:06:07 -0400
Labels:             app=artifactory
                    component=artifactory
                    controller-revision-hash=artifactory-artifactory-64f59d6f5f
                    release=artifactory
                    role=artifactory
                    statefulset.kubernetes.io/pod-name=artifactory-artifactory-0
Annotations:        checksum/binarystore=e423233797d6d4a28bff74cf4225cdf2e604bd43e9350079694e8d8959ed4b9c
                    openshift.io/scc=hostmount-anyuid
Status:             Running
IP:                 10.130.0.10
Controlled By:      StatefulSet/artifactory-artifactory
Init Containers:
  remove-lost-found:
    Container ID:  docker://04b570fd28868565e231b11e38fb063ae7126ef9abc57f214241092a0179a17a
    Image:         alpine:3.8
    Image ID:      docker-pullable://docker.io/alpine@sha256:a4d41fa0d6bb5b1194189bab4234b1f2abfabb4728bda295f5c53d89766aa046
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      rm -rfv /var/opt/jfrog/artifactory/lost+found /var/opt/jfrog/artifactory/data/.lock
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2019 11:06:12 -0400
      Finished:     Tue, 09 Apr 2019 11:06:12 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /artifactory-backup from artifactory-backup (rw)
      /artifactory-data from artifactory-data (rw)
      /var/opt/jfrog/artifactory from artifactory-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
  wait-for-db:
    Container ID:  docker://44d0bd8ad5d62657294b469db5aca11f6e487d62a56676638ba9c62e68f06824
    Image:         alpine:3.8
    Image ID:      docker-pullable://docker.io/alpine@sha256:a4d41fa0d6bb5b1194189bab4234b1f2abfabb4728bda295f5c53d89766aa046
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nc -z -w 2 10.130.0.8 5432 && echo database ok; do
  sleep 2;
done;

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2019 11:06:15 -0400
      Finished:     Tue, 09 Apr 2019 11:06:15 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
Containers:
  artifactory:
    Container ID:  docker://ad7b7f8abd8a281b22fb6cb885e63855d391673a32f20b45d567e7d040360103
    Image:         docker.bintray.io/jfrog/artifactory-pro:6.9.0
    Image ID:      docker-pullable://docker.bintray.io/jfrog/artifactory-pro@sha256:5bd0011c3cdb7adcc00ec5e64751b1fac02d021fb292259c68e44dcdc3972241
    Port:          8081/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
      -c
      mkdir -p /var/opt/jfrog/artifactory/access/etc; cp -Lrf /tmp/access/bootstrap.creds /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; chmod 600 /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; /entrypoint-artifactory.sh

    State:          Running
      Started:      Tue, 09 Apr 2019 11:06:18 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment:
      DB_TYPE:                 postgresql
      DB_HOST:                 10.130.0.8
      DB_PORT:                 5432
      DB_USER:                 <set to the key 'user' in secret 'artifactory-postgres'>           Optional: false
      DB_PASSWORD:             <set to the key 'password' in secret 'artifactory-postgres'>       Optional: false
      ARTIFACTORY_MASTER_KEY:  <set to the key 'master-key' in secret 'artifactory-artifactory'>  Optional: false
      EXTRA_JAVA_OPTIONS:       -Xms1g -Xmx4g
    Mounts:
      /artifactory-backup from artifactory-backup (rw)
      /artifactory-data from artifactory-data (rw)
      /artifactory_extra_conf/artifactory.lic from artifactory-license (rw)
      /artifactory_extra_conf/info/installer-info.json from installer-info (rw)
      /tmp/access/bootstrap.creds from access-bootstrap-creds (rw)
      /var/opt/jfrog/artifactory from artifactory-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  artifactory-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  artifactory-volume-artifactory-artifactory-0
    ReadOnly:   false
  binarystore-xml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      artifactory-artifactory-bs
    Optional:  false
  installer-info:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      artifactory-artifactory-installer-info
    Optional:  false
  artifactory-license:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  artifactory-license
    Optional:    false
  access-bootstrap-creds:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  artifactory-artifactory-bootstrap-creds
    Optional:    false
  artifactory-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  artifactory-artifactory-data-pvc
    ReadOnly:   false
  artifactory-backup:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  artifactory-artifactory-backup-pvc
    ReadOnly:   false
  artifactory-artifactory-token-sdfqw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  artifactory-artifactory-token-sdfqw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/compute=true
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
Events:          <none>

Name:               artifactory-artifactory-nginx-5989c9fdfc-hhvqg
Namespace:          artifactory
Priority:           0
PriorityClassName:  <none>
Node:               okdnode1.lab.panasas.com/10.70.9.83
Start Time:         Tue, 09 Apr 2019 11:06:07 -0400
Labels:             app=artifactory
                    component=nginx
                    pod-template-hash=1545759897
                    release=artifactory
Annotations:        openshift.io/scc=hostmount-anyuid
Status:             Running
IP:                 10.129.2.45
Controlled By:      ReplicaSet/artifactory-artifactory-nginx-5989c9fdfc
Init Containers:
  remove-lost-found:
    Container ID:  docker://48c59a175123db5ac2a6f5525b169e796ccf0b34dccdc9763ec81da7b3810d15
    Image:         alpine:3.8
    Image ID:      docker-pullable://docker.io/alpine@sha256:a4d41fa0d6bb5b1194189bab4234b1f2abfabb4728bda295f5c53d89766aa046
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      rm -rfv /var/opt/jfrog/nginx/lost+found
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2019 11:06:12 -0400
      Finished:     Tue, 09 Apr 2019 11:06:12 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/opt/jfrog/nginx from nginx-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
  wait-for-artifactory:
    Container ID:  docker://0d701a822b2f902a7074eb901b76def1660fb72f897b5aee3484271991aa23ab
    Image:         alpine:3.8
    Image ID:      docker-pullable://docker.io/alpine@sha256:a4d41fa0d6bb5b1194189bab4234b1f2abfabb4728bda295f5c53d89766aa046
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nc -z -w 2 artifactory-artifactory 8081 && echo artifactory ok; do
  sleep 2;
done;

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2019 11:06:15 -0400
      Finished:     Tue, 09 Apr 2019 11:06:23 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
Containers:
  nginx:
    Container ID:   docker://ee1632b77aa6a2693ea020af51061e3eda5967c09ee1b433d155d32af7799d1a
    Image:          docker.bintray.io/jfrog/nginx-artifactory-pro:6.9.0
    Image ID:       docker-pullable://docker.bintray.io/jfrog/nginx-artifactory-pro@sha256:25b8249a3aa96e9be024829a30717536bc59c741d70505e376b3db3f656e354e
    Ports:          80/TCP, 443/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 09 Apr 2019 11:06:27 -0400
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 09 Apr 2019 11:06:25 -0400
      Finished:     Tue, 09 Apr 2019 11:06:25 -0400
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     250m
      memory:  500Mi
    Requests:
      cpu:      100m
      memory:   250Mi
    Liveness:   http-get http://:80/artifactory/webapp/%23/login delay=60s timeout=10s period=10s #success=1 #failure=10
    Readiness:  http-get http://:80/artifactory/webapp/%23/login delay=60s timeout=10s period=10s #success=1 #failure=10
    Environment:
      ART_BASE_URL:             http://artifactory-artifactory:8081/artifactory
      SSL:                      true
      SKIP_AUTO_UPDATE_CONFIG:  false
    Mounts:
      /var/opt/jfrog/nginx from nginx-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from artifactory-artifactory-token-sdfqw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nginx-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  artifactory-artifactory-nginx
    ReadOnly:   false
  artifactory-artifactory-token-sdfqw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  artifactory-artifactory-token-sdfqw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/compute=true
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
Events:          <none>

scphantm commented 5 years ago

See the original comment, I did some ls's in those folders.

Neumsy commented 5 years ago

Wherever that mount point is created, it shouldn't include a file name. Here is an example we have of a pod with a secret creating files under that location:

   Mounts:
      /config from environment-properties-mydeployment (rw)
      /data from gavc-mydeployment-pv-claim (rw)
      /security from security-gavc-mydeployment (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mpvcs (ro)

Volumes:
  environment-properties-mydeployment:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      environment-properties-mydeployment
    Optional:  false
  gavc-mydeployment-pv-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  gavc-mydeployment-pv-claim
    ReadOnly:   false
  security-gavc-mydeployment:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  security-gavc-mydeployment
    Optional:    false
  default-token-mpvcs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mpvcs
    Optional:    false

Neumsy commented 5 years ago

In that config, the /security directory receives all of the files in the secret with that name.
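
For comparison, the difference between mounting a whole secret as a directory and mounting a single key as a single file comes down to subPath. Below is a minimal, illustrative pod-spec fragment (the volume names are examples; only the secret names come from this thread), not the chart's actual template:

      volumeMounts:
        # whole secret mounted as a directory: every key in the secret shows up as a file under /security
        - name: security-volume
          mountPath: /security
        # single key mounted as a file: only the 'license-key' entry is mounted, appearing as artifactory.lic
        - name: artifactory-license
          mountPath: /artifactory_extra_conf/artifactory.lic
          subPath: license-key
      volumes:
        - name: security-volume
          secret:
            secretName: security-gavc-mydeployment
        - name: artifactory-license
          secret:
            secretName: artifactory-license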

scphantm commented 5 years ago

Yeah, mine is the same. My issue is that the secret is being mounted correctly (I can cat /artifactory_extra_conf/artifactory.lic and see the license), but for some reason it's not being mapped to /opt/jfrog/artifactory/etc/artifactory.lic correctly. There should be a symlink, a copy step, something that puts it there so the Artifactory WAR file can read it in. Something somewhere is creating /opt/jfrog/artifactory/etc/artifactory.lic as a directory.

Keep in mind also, all I'm doing is running the standard Helm chart; I'm not doing anything special here, so I don't understand why it isn't working on my machine. I had the same issue with it trying to connect to Postgres as well: it wasn't reading in my secrets correctly and was falling back to the default artifactory user. I got past that one by rebuilding my Postgres pod with the artifactory user, and then it linked up correctly.

danielezer commented 5 years ago

@scphantm I am able to reproduce this issue. I will try to figure out the root cause and let you know.

danielezer commented 5 years ago

@scphantm The same thing happened to me when I had a typo in the secret name, which caused a discrepancy between the name I provided in the values.yaml and the name of the secret I created. When I fixed the typo, everything worked as expected. I would make sure that there's no such typo on your side. If there isn't, it would be great if you could, as @Neumsy suggested, describe the statefulset (the artifactory release creates a statefulset, not a deployment) and post the output here.

scphantm commented 5 years ago

Yeah, I checked for typos. None that I can find.

oc describe statefulset artifactory
Name:               artifactory-artifactory
Namespace:          artifactory
CreationTimestamp:  Wed, 10 Apr 2019 09:21:22 -0400
Selector:           app=artifactory,release=artifactory,role=artifactory
Labels:             app=artifactory
                    chart=artifactory-7.13.7
                    component=artifactory
                    heritage=Tiller
                    release=artifactory
Annotations:        <none>
Replicas:           1 desired | 1 total
Update Strategy:    RollingUpdate
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=artifactory
                    component=artifactory
                    release=artifactory
                    role=artifactory
  Annotations:      checksum/binarystore=e423233797d6d4a28bff74cf4225cdf2e604bd43e9350079694e8d8959ed4b9c
  Service Account:  artifactory-artifactory
  Init Containers:
   remove-lost-found:
    Image:      alpine:3.8
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      rm -rfv /var/opt/jfrog/artifactory/lost+found /var/opt/jfrog/artifactory/data/.lock
    Environment:  <none>
    Mounts:
      /artifactory-backup from artifactory-backup (rw)
      /artifactory-data from artifactory-data (rw)
      /var/opt/jfrog/artifactory from artifactory-volume (rw)
   wait-for-db:
    Image:      alpine:3.8
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      until nc -z -w 2 10.130.0.8 5432 && echo database ok; do
  sleep 2;
done;

    Environment:  <none>
    Mounts:       <none>
  Containers:
   artifactory:
    Image:      docker.bintray.io/jfrog/artifactory-pro:6.9.0
    Port:       8081/TCP
    Host Port:  0/TCP
    Command:
      /bin/sh
      -c
      mkdir -p /var/opt/jfrog/artifactory/access/etc; cp -Lrf /tmp/access/bootstrap.creds /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; chmod 600 /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; /entrypoint-artifactory.sh

    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment:
      DB_TYPE:                 postgresql
      DB_HOST:                 10.130.0.8
      DB_PORT:                 5432
      DB_USER:                 <set to the key 'user' in secret 'artifactory-postgres'>           Optional: false
      DB_PASSWORD:             <set to the key 'password' in secret 'artifactory-postgres'>       Optional: false
      ARTIFACTORY_MASTER_KEY:  <set to the key 'master-key' in secret 'artifactory-artifactory'>  Optional: false
      EXTRA_JAVA_OPTIONS:       -Xms1g -Xmx4g
    Mounts:
      /artifactory-backup from artifactory-backup (rw)
      /artifactory-data from artifactory-data (rw)
      /artifactory_extra_conf/artifactory.lic from artifactory-license (rw)
      /artifactory_extra_conf/info/installer-info.json from installer-info (rw)
      /tmp/access/bootstrap.creds from access-bootstrap-creds (rw)
      /var/opt/jfrog/artifactory from artifactory-volume (rw)
  Volumes:
   binarystore-xml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      artifactory-artifactory-bs
    Optional:  false
   installer-info:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      artifactory-artifactory-installer-info
    Optional:  false
   artifactory-license:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  artifactory-license
    Optional:    false
   access-bootstrap-creds:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  artifactory-artifactory-bootstrap-creds
    Optional:    false
   artifactory-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  artifactory-artifactory-data-pvc
    ReadOnly:   false
   artifactory-backup:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  artifactory-artifactory-backup-pvc
    ReadOnly:   false
Volume Claims:
  Name:          artifactory-volume
  StorageClass:  managed-nfs-storage
  Labels:        <none>
  Annotations:   <none>
  Capacity:      20Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  7m    statefulset-controller  create Pod artifactory-artifactory-0 in StatefulSet artifactory-artifactory successful

Name:               artifactory-postgres-postgresql
Namespace:          artifactory
CreationTimestamp:  Mon, 08 Apr 2019 14:32:40 -0400
Selector:           app=postgresql,release=artifactory-postgres,role=master
Labels:             app=postgresql
                    chart=postgresql-3.16.1
                    heritage=Tiller
                    release=artifactory-postgres
Annotations:        <none>
Replicas:           1 desired | 1 total
Update Strategy:    RollingUpdate
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=postgresql
           chart=postgresql-3.16.1
           heritage=Tiller
           release=artifactory-postgres
           role=master
  Init Containers:
   init-chmod-data:
    Image:      docker.io/bitnami/minideb:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chown -R 1.000090501e+09:1.000090501e+09 /bitnami
if [ -d /bitnami/postgresql/data ]; then
  chmod  0700 /bitnami/postgresql/data;
fi

    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /bitnami/postgresql from data (rw)
  Containers:
   artifactory-postgres-postgresql:
    Image:      docker.io/bitnami/postgresql:9.6.11
    Port:       5432/TCP
    Host Port:  0/TCP
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [sh -c exec pg_isready -U "artifactory" -d "artifactory" -h localhost] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [sh -c exec pg_isready -U "artifactory" -d "artifactory" -h localhost] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      PGDATA:             /bitnami/postgresql
      POSTGRES_USER:      artifactory
      POSTGRES_PASSWORD:  <set to the key 'postgresql-password' in secret 'artifactory-postgres-postgresql'>  Optional: false
      POSTGRES_DB:        artifactory
    Mounts:
      /bitnami/postgresql from data (rw)
  Volumes:  <none>
Volume Claims:
  Name:          data
  StorageClass:  managed-nfs-storage
  Labels:        <none>
  Annotations:   <none>
  Capacity:      50Gi
  Access Modes:  [ReadWriteOnce]
Events:          <none>
danielezer commented 5 years ago

@scphantm thanks. Can you please post the following:

  1. The YAML manifest for the statefulset, retrieved by: kubectl get statefulset -o yaml

  2. The YAML for the secret and the secret name, retrieved by: kubectl get secret artifactory-license -o yaml

scphantm commented 5 years ago

Post what? The values file is in the OP, and I just posted the statefulset. What else would you like?

Here's the secret:

oc describe secret artifactory-license
Name:         artifactory-license
Namespace:    artifactory
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
license-key:  790 bytes

danielezer commented 5 years ago

Sorry @scphantm, I updated the comment. Sorry about all the back and forth; it's just hard for me to reproduce, so I'm trying to get all the details.

scphantm commented 5 years ago

oc get statefulset -o yaml

apiVersion: v1
items:
- apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    creationTimestamp: 2019-04-10T13:21:22Z
    generation: 1
    labels:
      app: artifactory
      chart: artifactory-7.13.7
      component: artifactory
      heritage: Tiller
      release: artifactory
    name: artifactory-artifactory
    namespace: artifactory
    resourceVersion: "1630849"
    selfLink: /apis/apps/v1/namespaces/artifactory/statefulsets/artifactory-artifactory
    uid: 88d06ac3-5b93-11e9-ad13-0cc47a51ee18
  spec:
    podManagementPolicy: OrderedReady
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: artifactory
        release: artifactory
        role: artifactory
    serviceName: artifactory
    template:
      metadata:
        annotations:
          checksum/binarystore: e423233797d6d4a28bff74cf4225cdf2e604bd43e9350079694e8d8959ed4b9c
        creationTimestamp: null
        labels:
          app: artifactory
          component: artifactory
          release: artifactory
          role: artifactory
      spec:
        containers:
        - command:
          - /bin/sh
          - -c
          - |
            mkdir -p /var/opt/jfrog/artifactory/access/etc; cp -Lrf /tmp/access/bootstrap.creds /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; chmod 600 /var/opt/jfrog/artifactory/access/etc/bootstrap.creds; /entrypoint-artifactory.sh
          env:
          - name: DB_TYPE
            value: postgresql
          - name: DB_HOST
            value: 10.130.0.8
          - name: DB_PORT
            value: "5432"
          - name: DB_USER
            valueFrom:
              secretKeyRef:
                key: user
                name: artifactory-postgres
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                key: password
                name: artifactory-postgres
          - name: ARTIFACTORY_MASTER_KEY
            valueFrom:
              secretKeyRef:
                key: master-key
                name: artifactory-artifactory
          - name: EXTRA_JAVA_OPTIONS
            value: ' -Xms1g -Xmx4g '
          image: docker.bintray.io/jfrog/artifactory-pro:6.9.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            postStart:
              exec:
                command:
                - /bin/sh
                - -c
                - ""
          name: artifactory
          ports:
          - containerPort: 8081
            protocol: TCP
          resources:
            limits:
              cpu: "2"
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 1Gi
          securityContext:
            allowPrivilegeEscalation: false
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /var/opt/jfrog/artifactory
            name: artifactory-volume
          - mountPath: /artifactory-data
            name: artifactory-data
          - mountPath: /artifactory-backup
            name: artifactory-backup
          - mountPath: /artifactory_extra_conf/artifactory.lic
            name: artifactory-license
            subPath: license-key
          - mountPath: /tmp/access/bootstrap.creds
            name: access-bootstrap-creds
            subPath: bootstrap.creds
          - mountPath: /artifactory_extra_conf/info/installer-info.json
            name: installer-info
            subPath: installer-info.json
        dnsPolicy: ClusterFirst
        initContainers:
        - command:
          - sh
          - -c
          - rm -rfv /var/opt/jfrog/artifactory/lost+found /var/opt/jfrog/artifactory/data/.lock
          image: alpine:3.8
          imagePullPolicy: IfNotPresent
          name: remove-lost-found
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /var/opt/jfrog/artifactory
            name: artifactory-volume
          - mountPath: /artifactory-data
            name: artifactory-data
          - mountPath: /artifactory-backup
            name: artifactory-backup
        - command:
          - sh
          - -c
          - |
            until nc -z -w 2 10.130.0.8 5432 && echo database ok; do
              sleep 2;
            done;
          image: alpine:3.8
          imagePullPolicy: IfNotPresent
          name: wait-for-db
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          fsGroup: 1030
          runAsUser: 1030
        serviceAccount: artifactory-artifactory
        serviceAccountName: artifactory-artifactory
        terminationGracePeriodSeconds: 30
        volumes:
        - configMap:
            defaultMode: 420
            name: artifactory-artifactory-bs
          name: binarystore-xml
        - configMap:
            defaultMode: 420
            name: artifactory-artifactory-installer-info
          name: installer-info
        - name: artifactory-license
          secret:
            defaultMode: 420
            secretName: artifactory-license
        - name: access-bootstrap-creds
          secret:
            defaultMode: 420
            secretName: artifactory-artifactory-bootstrap-creds
        - name: artifactory-data
          persistentVolumeClaim:
            claimName: artifactory-artifactory-data-pvc
        - name: artifactory-backup
          persistentVolumeClaim:
            claimName: artifactory-artifactory-backup-pvc
    updateStrategy:
      type: RollingUpdate
    volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: artifactory-volume
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: managed-nfs-storage
      status:
        phase: Pending
  status:
    collisionCount: 0
    currentReplicas: 1
    currentRevision: artifactory-artifactory-64f59d6f5f
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updateRevision: artifactory-artifactory-64f59d6f5f
    updatedReplicas: 1
- apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    creationTimestamp: 2019-04-08T18:32:40Z
    generation: 1
    labels:
      app: postgresql
      chart: postgresql-3.16.1
      heritage: Tiller
      release: artifactory-postgres
    name: artifactory-postgres-postgresql
    namespace: artifactory
    resourceVersion: "1129324"
    selfLink: /apis/apps/v1/namespaces/artifactory/statefulsets/artifactory-postgres-postgresql
    uid: b0eb78ed-5a2c-11e9-ad13-0cc47a51ee18
  spec:
    podManagementPolicy: OrderedReady
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: postgresql
        release: artifactory-postgres
        role: master
    serviceName: artifactory-postgres-postgresql-headless
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: postgresql
          chart: postgresql-3.16.1
          heritage: Tiller
          release: artifactory-postgres
          role: master
        name: artifactory-postgres-postgresql
      spec:
        containers:
        - env:
          - name: PGDATA
            value: /bitnami/postgresql
          - name: POSTGRES_USER
            value: artifactory
          - name: POSTGRES_PASSWORD
            valueFrom:
              secretKeyRef:
                key: postgresql-password
                name: artifactory-postgres-postgresql
          - name: POSTGRES_DB
            value: artifactory
          image: docker.io/bitnami/postgresql:9.6.11
          imagePullPolicy: Always
          livenessProbe:
            exec:
              command:
              - sh
              - -c
              - exec pg_isready -U "artifactory" -d "artifactory" -h localhost
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: artifactory-postgres-postgresql
          ports:
          - containerPort: 5432
            name: postgresql
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - sh
              - -c
              - exec pg_isready -U "artifactory" -d "artifactory" -h localhost
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
          securityContext:
            runAsUser: 1000090501
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /bitnami/postgresql
            name: data
        dnsPolicy: ClusterFirst
        initContainers:
        - command:
          - sh
          - -c
          - |
            chown -R 1.000090501e+09:1.000090501e+09 /bitnami
            if [ -d /bitnami/postgresql/data ]; then
              chmod  0700 /bitnami/postgresql/data;
            fi
          image: docker.io/bitnami/minideb:latest
          imagePullPolicy: Always
          name: init-chmod-data
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
          securityContext:
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /bitnami/postgresql
            name: data
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext:
          fsGroup: 1000090501
        terminationGracePeriodSeconds: 30
    updateStrategy:
      type: RollingUpdate
    volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: managed-nfs-storage
      status:
        phase: Pending
  status:
    collisionCount: 0
    currentReplicas: 1
    currentRevision: artifactory-postgres-postgresql-766f7b58bc
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updateRevision: artifactory-postgres-postgresql-766f7b58bc
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

oc get secret artifactory-license -o yaml

apiVersion: v1
data:
  license-key: {bla bla bla}
kind: Secret
metadata:
  creationTimestamp: 2019-04-08T19:04:11Z
  name: artifactory-license
  namespace: artifactory
  resourceVersion: "1135586"
  selfLink: /api/v1/namespaces/artifactory/secrets/artifactory-license
  uid: 17dc9bbd-5a31-11e9-af31-0cc47a51e1de
type: Opaque

danielezer commented 5 years ago

@scphantm everything looks ok. I looked inside the entrypoint and saw the following:

# Add additional conf files that were mounted to ARTIFACTORY_EXTRA_CONF
addExtraConfFiles () {
    logger "Adding extra configuration files to ${ARTIFACTORY_HOME}/etc if any exist"

    # If directory not empty
    if [ -d "${ARTIFACTORY_EXTRA_CONF}" ] && [ "$(ls -A ${ARTIFACTORY_EXTRA_CONF})" ]; then
        logger "Adding files from ${ARTIFACTORY_EXTRA_CONF} to ${ARTIFACTORY_HOME}/etc"
        cp -rfv ${ARTIFACTORY_EXTRA_CONF}/* ${ARTIFACTORY_HOME}/etc || errorExit "Copy files from ${ARTIFACTORY_EXTRA_CONF} to ${ARTIFACTORY_HOME}/etc failed"
    fi
}

I also looked for the line "Adding files from" in the log files you posted, and it doesn't seem to be there. Please check if you can see the line "Adding files from" in the STDOUT of the artifactory container (kubectl logs artifactory-artifactory-0). Also, please post an ls of the /artifactory_extra_conf/ directory. As you can see from the function, all we do here is a simple copy, so if it is not a directory on one side, it shouldn't be a directory on the other side; it would simply have the same file structure.
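
For reference, a quick way to run both checks from outside the pod; this is a sketch using standard oc flags, with the pod and container names as they appear above:

oc logs artifactory-artifactory-0 -c artifactory | grep "Adding files from"
oc exec artifactory-artifactory-0 -c artifactory -- ls -la /artifactory_extra_conf/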

scphantm commented 5 years ago

I have the method calls in my log:

2019-04-10 13:21:34  [262 entrypoint-artifactory.sh] Setting up Access data directories if missing
2019-04-10 13:21:34  [152 entrypoint-artifactory.sh] Adding extra configuration files to /var/opt/jfrog/artifactory/access/etc if any exist
2019-04-10 13:21:34  [273 entrypoint-artifactory.sh] Setting up Replicator data directories if missing
2019-04-10 13:21:34  [163 entrypoint-artifactory.sh] Adding extra configuration files to /var/opt/jfrog/artifactory/replicator/etc if any exist
2019-04-10 13:21:34  [721 entrypoint-artifactory.sh] Adding plugins if exist '/tmp/plugins/internalUser.groovy' -> '/opt/jfrog/artifactory/etc/plugins/internalUser.groovy'

But the path is wrong in the log; I think that's a different log entry.

$ cd artifactory_extra_conf
$ ls -alh
total 8.0K
drwxrwxrwx.  3 artifactory artifactory   41 Apr 10 13:21 .
drwxr-xr-x. 25 root        root        4.0K Apr 10 13:21 ..
-rw-r--r--.  1 root        artifactory  790 Apr 10 13:21 artifactory.lic
drwxr-xr-x.  2 root        root          33 Apr 10 13:21 info

If I cat artifactory.lic, I see my license file correctly.

danielezer commented 5 years ago

@scphantm that's a different log entry. I'm looking for "Adding files from"

scphantm commented 5 years ago

Doesn't exist. It's not getting inside the if statement.

danielezer commented 5 years ago

OK, so please try to evaluate this expression in your container:

[ -d "${ARTIFACTORY_EXTRA_CONF}" ] && [ "$(ls -A ${ARTIFACTORY_EXTRA_CONF})" ]

The value of ARTIFACTORY_EXTRA_CONF is /artifactory_extra_conf.

scphantm commented 5 years ago

$ echo $ARTIFACTORY_EXTRA_CONF
/artifactory_extra_conf
$ ls -A ${ARTIFACTORY_EXTRA_CONF}
artifactory.lic  info
$ echo $ARTIFACTORY_HOME
/opt/jfrog/artifactory
$ [ -d "${ARTIFACTORY_EXTRA_CONF}" ] && [ "$(ls -A ${ARTIFACTORY_EXTRA_CONF})" ]
$ [ -d "${ARTIFACTORY_EXTRA_CONF}" ] && echo "Directory ${ARTIFACTORY_EXTRA_CONF} exists."
Directory /artifactory_extra_conf exists.
$

scphantm commented 5 years ago

I don't think that method is being called. My clue is this:

addExtraConfFiles () {
    logger "Adding extra configuration files to ${ARTIFACTORY_HOME}/etc if any exist"

but the log file has:

Adding extra configuration files to /var/opt/jfrog/artifactory/access/etc if any exist

If the method were being called, the log should read:

Adding extra configuration files to /opt/jfrog/artifactory/etc if any exist

given that:

$ echo $ARTIFACTORY_HOME
/opt/jfrog/artifactory

danielezer commented 5 years ago

That's weird. Please add the following to your values.yaml file:

artifactory:
  preStartCommand: "sleep 200"

Exec into the container while the sleep is running and check if the directory is still there with the license file. This will require you to delete the release and the PVC so that we have a "fresh start".
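
For reference, a sketch of how to get into the pod while the preStartCommand sleep is running and compare the two locations (standard oc exec; the pod, container, and paths are as discussed above):

# open a shell in the artifactory container of the running pod
oc exec -it artifactory-artifactory-0 -c artifactory -- sh
# inside the container, compare the mounted extra conf with the target etc directory
ls -la /artifactory_extra_conf/
ls -la /opt/jfrog/artifactory/etc/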

scphantm commented 5 years ago

artifactory-artifactory-0.log: no change.

danielezer commented 5 years ago

> I don't think that method is being called. My clue is this:
>
> addExtraConfFiles () {
>     logger "Adding extra configuration files to ${ARTIFACTORY_HOME}/etc if any exist"
>
> but the log file has:
>
> Adding extra configuration files to /var/opt/jfrog/artifactory/access/etc if any exist
>
> If the method were being called, the log should read:
>
> Adding extra configuration files to /opt/jfrog/artifactory/etc if any exist
>
> given that:
>
> $ echo $ARTIFACTORY_HOME
> /opt/jfrog/artifactory

@scphantm This is not the same method being called, this is a different method:

# Add additional conf files that were mounted to ACCESS_EXTRA_CONF
addExtraAccessConfFiles () {
    logger "Adding extra configuration files to ${ACCESS_ETC_FOLDER} if any exist"

    # If directory not empty
    if [ -d "${ACCESS_EXTRA_CONF}" ] && [ "$(ls -A ${ACCESS_EXTRA_CONF})" ]; then
        logger "Adding files from ${ACCESS_EXTRA_CONF} to ${ACCESS_ETC_FOLDER}"
        cp -rfv ${ACCESS_EXTRA_CONF}/* ${ACCESS_ETC_FOLDER} || errorExit "Copy files from ${ACCESS_EXTRA_CONF} to ${ACCESS_ETC_FOLDER} failed"
    fi
}

You can read this script from inside your container. It's in /entrypoint-artifactory.sh.

danielezer commented 5 years ago

If you're going to try the sleep approach I mentioned, you can also run the entrypoint yourself in debug mode, e.g.:

bash -x /entrypoint-artifactory.sh

scphantm commented 5 years ago

Now, this is very interesting. I did this:

artifactory:
  preStartCommand: 'cp -rfv /artifactory_extra_conf/* /opt/jfrog/artifactory/etc || errorExit "Copy files from /artifactory_extra_conf to /opt/jfrog/artifactory/etc failed"'

and got this at the beginning of my log:

Running custom preStartCommand command
cp: cannot overwrite directory '/opt/jfrog/artifactory/etc/artifactory.lic' with non-directory
'/artifactory_extra_conf/info/installer-info.json' -> '/opt/jfrog/artifactory/etc/info/installer-info.json'
/bin/sh: 1: errorExit: not found
2019-04-10 15:48:06  [733 entrypoint-artifactory.sh] Preparing to run Artifactory in Docker

danielezer commented 5 years ago

Is this using the same PVC? It's important to note that when you delete a statefulset, the dynamically provisioned PVC will not be deleted. You have to explicitly delete the PVC using kubectl delete pvc <pvc-name>.

So this can still be the same directory on the old PVC, which makes sense.
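
A sketch of the cleanup, assuming the Helm 2 client shown earlier in this thread and the PVC name from the pod description above (other pre-created claims such as the data, backup, and nginx PVCs may also need deleting, depending on what you want to reset):

helm delete --purge artifactory                                            # remove the release (Helm 2 syntax)
oc get pvc -n artifactory                                                  # see what claims were left behind
oc delete pvc artifactory-volume-artifactory-artifactory-0 -n artifactory  # delete the statefulset's dynamically provisioned claim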

scphantm commented 5 years ago

Ha, deleted the PVC and ran it again:


Running custom preStartCommand command
cp: target '/opt/jfrog/artifactory/etc' is not a directory
/bin/sh: 1: errorExit: not found
2019-04-10 15:55:51  [733 entrypoint-artifactory.sh] Preparing to run Artifactory in Docker
2019-04-10 15:55:51  [734 entrypoint-artifactory.sh] Running as uid=1030(artifactory) gid=1030(artifactory) groups=1030(artifactory)
2019-04-10 15:55:51   [59 entrypoint-artifactory.sh] Dockerfile for this image can found inside the container.
2019-04-10 15:55:51   [60 entrypoint-artifactory.sh] To view the Dockerfile: 'cat /docker/artifactory-pro/Dockerfile.artifactory'.
2019-04-10 15:55:51   [65 entrypoint-artifactory.sh] Checking open files and processes limits
2019-04-10 15:55:51   [68 entrypoint-artifactory.sh] Current max open files is 1048576
2019-04-10 15:55:51   [80 entrypoint-artifactory.sh] Current max open processes is 1048576
2019-04-10 15:55:51  [212 entrypoint-artifactory.sh] Testing directory /var/opt/jfrog/artifactory has read/write permissions for user 'artifactory' (id 1030)
2019-04-10 15:55:52  [237 entrypoint-artifactory.sh] Permissions for /var/opt/jfrog/artifactory are good
2019-04-10 15:55:52  [242 entrypoint-artifactory.sh] Setting up Artifactory data directories if missing
mkdir: created directory '/var/opt/jfrog/artifactory/etc'
2019-04-10 15:55:52  [141 entrypoint-artifactory.sh] Adding extra configuration files to /opt/jfrog/artifactory/etc if any exist
2019-04-10 15:55:52  [145 entrypoint-artifactory.sh] Adding files from /artifactory_extra_conf to /opt/jfrog/artifactory/etc
'/artifactory_extra_conf/artifactory.lic' -> '/opt/jfrog/artifactory/etc/artifactory.lic'
'/artifactory_extra_conf/info' -> '/opt/jfrog/artifactory/etc/info'
'/artifactory_extra_conf/info/installer-info.json' -> '/opt/jfrog/artifactory/etc/info/installer-info.json'

All morning I forgot to delete the PVC. Ugh. Maybe me putting the stupid things in quotes did it, I dunno.

danielezer commented 5 years ago

Cool @scphantm. So is it working now? I do see the logger line in the log you posted.

scphantm commented 5 years ago

Yeah, it seems to be working now. Thanks.

eldada commented 5 years ago

@scphantm - Thanks for confirming. I'll close this now. We have also merged a change (https://github.com/jfrog/charts/pull/294) that adds more options for passing an Artifactory license.
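
For anyone landing here later: newer chart versions also allow passing the license text directly in values instead of a pre-created secret. This is a hedged sketch only; the exact key name may differ by chart version, so check the chart's README for the option the linked PR introduced:

artifactory:
  license:
    licenseKey: |
      <paste license text here>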