Hi @govindkailas. I tried your exact setup and it worked as expected, although your parameters were not passed in properly because you have another level of nesting (the OSS chart uses the main artifactory chart as a dependency). The issue in your case is that you need to prefix all the parameters with an additional "artifactory.". For example:
helm install artifactory-oss \
--set artifactory.nginx.enabled=false \
--set artifactory.postgresql.enabled=false \
--set postgresql.enabled=false \
--set artifactory.artifactory.service.type=NodePort \
--set artifactory.artifactory.resources.requests.cpu="500m" \
--set artifactory.artifactory.resources.limits.cpu="2" \
--set artifactory.artifactory.resources.requests.memory="1Gi" \
--set artifactory.artifactory.resources.limits.memory="4Gi" \
--set artifactory.artifactory.javaOpts.xms="1g" \
--set artifactory.artifactory.javaOpts.xmx="3g" \
jfrog/artifactory-oss -n tkgdev-artifactory-dev
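The same overrides can also be kept in a values file instead of a long chain of --set flags. A minimal sketch (the key structure is inferred from the command above; the filename values-oss.yaml is just an example):

```yaml
# values-oss.yaml - hypothetical values file mirroring the --set flags above.
# Note the extra "artifactory." nesting, because artifactory-oss wraps the
# main artifactory chart as a dependency.
postgresql:
  enabled: false
artifactory:
  nginx:
    enabled: false
  postgresql:
    enabled: false
  artifactory:
    service:
      type: NodePort
    resources:
      requests:
        cpu: "500m"
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 4Gi
    javaOpts:
      xms: "1g"
      xmx: "3g"
```

It would then be installed with: helm install artifactory-oss -f values-oss.yaml jfrog/artifactory-oss -n tkgdev-artifactory-dev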
(I see you did it right for some of the parameters.)
Looking at the output of your kubectl describe, I see Artifactory does not get the resources you assigned on the command line, but much lower values, which are probably defaults set in your cluster or namespace:
Limits:
cpu: 200m
memory: 512Mi
Requests:
cpu: 25m
memory: 256Mi
The limits here are far too low for Artifactory to run properly.
Fix the parameters and test again. Let us know the results.
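If it is unclear where those low defaults come from, the namespace-level defaults and quotas can usually be inspected directly (namespace and pod names taken from this thread; these commands assume access to the cluster):

```shell
# Inspect namespace-level defaults/quotas that can override pod resources
kubectl describe limitrange -n tkgdev-artifactory-dev
kubectl describe resourcequota -n tkgdev-artifactory-dev

# Confirm what the Artifactory container actually ended up with
kubectl get pod artifactory-oss-artifactory-0 -n tkgdev-artifactory-dev \
  -o jsonpath='{.spec.containers[0].resources}'
```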
Yes, that's true. There is a default resource limit set on the namespace. Any idea why the passed resource request is not honoured? Also, what is the "master key missing" error that appears in the log? Nevertheless, I will remove the resource limits on the namespace and try again.
I have removed the default resource limits set on the namespace, and now I see that artifactory-oss is not picking up the limits we pass on the helm command line. I used the same helm install command that you provided.
3m9s Warning FailedCreate statefulset/artifactory-oss-artifactory create Pod artifactory-oss-artifactory-0 in StatefulSet artifactory-oss-artifactory failed error: pods "artifactory-oss-artifactory-0" is forbidden: failed quota: tkgdev-artifactory-dev-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
> Any idea why the passed resource request is not honoured?
This was not honoured because of the missing extra "artifactory." prefix.
As for
failed error: pods "artifactory-oss-artifactory-0" is forbidden: failed quota: tkgdev-artifactory-dev-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
I assume there's some policy on your cluster. I tested again with the command I provided and it started up properly with the correct resources set.
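The "must specify limits.cpu,limits.memory,requests.cpu,requests.memory" failure is standard ResourceQuota behavior: once a quota exists on a namespace, every container must declare requests and limits explicitly, unless a LimitRange supplies defaults. One option, instead of removing the quota, is a LimitRange with defaults large enough for Artifactory; a minimal sketch (values illustrative only, sized to match the install command in this thread):

```yaml
# limitrange-defaults.yaml - hypothetical LimitRange providing defaults
# so pods without explicit resources still satisfy the namespace quota.
apiVersion: v1
kind: LimitRange
metadata:
  name: artifactory-defaults
  namespace: tkgdev-artifactory-dev
spec:
  limits:
    - type: Container
      default:            # applied as limits when a container declares none
        cpu: "2"
        memory: 4Gi
      defaultRequest:     # applied as requests when a container declares none
        cpu: "500m"
        memory: 1Gi
```

It would be applied with: kubectl apply -f limitrange-defaults.yaml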
Thanks @eldada, I had to adjust the resource quota to get this working. Now I notice that there are two ports open. Is there a way to keep it as one single port (say 8081)?
kubectl get all -n tkgdev-artifactory-dev
NAME READY STATUS RESTARTS AGE
pod/artifactory-oss-artifactory-0 1/1 Running 0 4m21s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/artifactory-oss-artifactory LoadBalancer 192.25.94.244 <pending> 8082:31767/TCP,8081:31276/TCP 4m23s
NAME READY AGE
statefulset.apps/artifactory-oss-artifactory 1/1 4m24s
@govindkailas The two ports are indeed needed: one for the UI and as an entry point for other products via a new service called the router, and the other for the Artifactory APIs.
It's important to note that you don't have to use both ports. Port 8082 can route to Artifactory too, but we keep port 8081 for backward compatibility and for better performance (you skip the router and go directly to Artifactory). I'm closing this issue. If needed, you can reopen to continue discussion.
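To see that both entry points reach the same instance, something like the following can be used (the NodePort numbers are taken from the service output above; replace <node-ip> with an actual node address, and note the ping path assumes a standard Artifactory REST API layout):

```shell
# Via the router (8082 -> NodePort 31767) - also serves the UI
curl -s http://<node-ip>:31767/artifactory/api/system/ping

# Directly against Artifactory (8081 -> NodePort 31276) - skips the router
curl -s http://<node-ip>:31276/artifactory/api/system/ping
```

On a healthy instance both endpoints should respond with OK.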
@eldada I am hitting this problem. My Helm is v3, artifactory-oss is 7.46.11, and I am using a Ceph bucket. Here are my S3 config and my Artifactory statefulsets.apps config:
persistence:
  mountPath: "/var/opt/jfrog/artifactory"
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:
  # accessMode: ReadWriteOnce
  ## Storage default size. Should be increased for production deployments.
  # size: 800Gi
  ## Use a custom Secret to be mounted as your binarystore.xml
  ## NOTE: This will ignore all settings below that make up binarystore.xml
  # customBinarystoreXmlSecret:
  ## Redundancy required for HA deployments, with "cluster" persistence storage type
  # redundancy: 3
  # lenientLimit: 1
  ## Cache default size. Should be increased for production deployments.
  # maxCacheSize: 5000000000
  # cacheProviderDir: cache
  ## Set the persistence storage type. This will apply the matching binarystore.xml to the Artifactory config
  ## Supported types are:
  ## file-system (default)
  ## cluster-file-system
  ## nfs
  ## google-storage
  ## google-storage-v2
  ## cluster-google-storage-v2
  ## aws-s3-v3
  ## s3-storage-v3-direct
  ## cluster-s3-storage-v3
  ## azure-blob
  ## azure-blob-storage-direct
  ## cluster-azure-blob-storage
  type: aws-s3-v3
  ## Use binarystoreXml to provide a custom binarystore.xml
  ## This is intentionally commented and below previous content of binarystoreXml is moved under files/binarystore.xml
  ## binarystoreXml:
  ## For artifactory.persistence.type nfs
  ## If using NFS as the shared storage, you must have a running NFS server that is accessible by your Kubernetes
  ## cluster nodes.
  ## Need to have the following set
  nfs:
    # Must pass the actual IP of the NFS server with '--set artifactory.persistence.nfs.ip=${NFS_IP}'
    ip:
    haDataMount: "/data"
    haBackupMount: "/backup"
    dataDir: "/var/opt/jfrog/artifactory"
    backupDir: "/var/opt/jfrog/artifactory-backup"
    capacity: 500Gi
  awsS3V3:
    testConnection: true
    identity: 2O0FS4D3XX2V3XIULxxx
    credential: rvLgggDf9QmUeORCBwnnDJ9gt3HjD9Mrxxxxx
    region: us-east-1
    bucketName: artifactory
    path: artifactory/filestore
    endpoint: rook-ceph-rgw-my-store-obj.rook-ceph
    maxConnections: 50
    kmsServerSideEncryptionKeyId:
    kmsKeyRegion:
    kmsCryptoMode:
    useInstanceCredentials: false
    usePresigning: false
    signatureExpirySeconds: 300
    signedUrlExpirySeconds: 30
    cloudFrontDomainName:
    cloudFrontKeyPairId:
    cloudFrontPrivateKey:
    enableSignedUrlRedirect: false
    enablePathStyleAccess: false
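One thing worth checking with a Ceph RGW endpoint is path-style addressing: many non-AWS S3 implementations do not serve virtual-hosted-style bucket URLs, so enablePathStyleAccess: true is often needed. This is an assumption about the RGW setup in this thread, not a confirmed fix; a sketch of the relevant fragment:

```yaml
# Fragment of artifactory.persistence - illustrative, verify against your RGW
awsS3V3:
  endpoint: rook-ceph-rgw-my-store-obj.rook-ceph
  region: us-east-1
  bucketName: artifactory
  path: artifactory/filestore
  # Ceph RGW typically expects http://endpoint/bucket/key rather than
  # http://bucket.endpoint/key, so enable path-style access:
  enablePathStyleAccess: true
```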
narwal@rd-k8s-ceph-test-master-04:~/jfrog/artifactory-oss$ kubectl get statefulsets.apps -n artifactory-oss artifactory-oss-1667962421 -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    meta.helm.sh/release-name: artifactory-oss-1667962421
    meta.helm.sh/release-namespace: artifactory-oss
  creationTimestamp: "2022-11-09T02:53:43Z"
  generation: 2
  labels:
    app: artifactory
    app.kubernetes.io/managed-by: Helm
    chart: artifactory-107.46.11
    component: artifactory
    databaseUpgradeReady: "yes"
    heritage: Helm
    release: artifactory-oss-1667962421
  name: artifactory-oss-1667962421
  namespace: artifactory-oss
  resourceVersion: "108890810"
  uid: 695fb0dc-eaa9-42dc-baa4-5a0e36cd19e0
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: artifactory
      release: artifactory-oss-1667962421
      role: artifactory
  serviceName: artifactory
  template:
    metadata:
      annotations:
        checksum/access-config: 4238695fead7796f691b71efaca2e260e8e92f1fd7b5e4755f8cb0bab57f2227
        checksum/admin-creds: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
        checksum/binarystore: b9fdbd65000d46d48fb264981adec8c1c5636288fa8b47ecbca5b19b78e8704c
        checksum/database-secrets: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
        checksum/systemyaml: 274487d37e4cad59c753db30277d5c54c16bcd0f092ce1aef59f6efeb96999dc
      creationTimestamp: null
      labels:
        app: artifactory
        chart: artifactory-107.46.11
        component: artifactory
        heritage: Helm
        release: artifactory-oss-1667962421
        role: artifactory
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
artifactory log:
2022-11-09T03:50:41.339Z [jfrou] [FATAL] [2d0e6acda77764c6] [bootstrap.go:99 ] [main ] [] - Failed resolving master key: failed resolving 'shared.security.masterKey' key; file does not exist: /opt/jfrog/artifactory/var/etc/security/master.key
2022-11-09T03:50:41.756Z [jfmd ] [ERROR] [ ] [keys.go:23 ] [main ] [] - Failed resolving master key: failed resolving 'shared.security.masterKey' key; file does not exist: /opt/jfrog/artifactory/var/etc/security/master.key
goroutine 1 [running]:
runtime/debug.Stack()
    /src/runtime/debug/stack.go:24 +0x65
jfrog.com/jfrog-go-commons/v7/pkg/log.(*standardLogger).Panicfc(0xc0003440c0, {0x1a78140, 0xc000597b30}, {0xc00001e0a0, 0x96}, {0x0, 0x0, 0x0})
    goroot/pkg/mod/jfrog.com/jfrog-go-commons/v7@v7.58.0/pkg/log/standard_logger.go:95 +0xd2
jfrog.com/metadata/v7/services/common.MustResolveSecurityKeys({0x1a78140, 0xc000597b30}, {0x1a7c0f0, 0xc0000c9f40}, {0x1a801a8, 0xc0003440c0?})
    jfrog.com/metadata/v7@v7.48.2/services/common/keys.go:23 +0x1ca
main.main()
    jfrog.com/metadata/v7@v7.48.2/metadata.go:31 +0x345
[init] panic: Failed resolving master key: failed resolving 'shared.security.masterKey' key; file does not exist: /opt/jfrog/artifactory/var/etc/security/master.key
goroutine 1 [running]:
runtime/debug.Stack()
    /src/runtime/debug/stack.go:24 +0x65
jfrog.com/jfrog-go-commons/v7/pkg/log.(*standardLogger).Panicfc(0xc0003440c0, {0x1a78140, 0xc000597b30}, {0xc00001e0a0, 0x96}, {0x0, 0x0, 0x0})
    goroot/pkg/mod/jfrog.com/jfrog-go-commons/v7@v7.58.0/pkg/log/standard_logger.go:95 +0xd2
jfrog.com/metadata/v7/services/common.MustResolveSecurityKeys({0x1a78140, 0xc000597b30}, {0x1a7c0f0, 0xc0000c9f40}, {0x1a801a8, 0xc0003440c0?})
    jfrog.com/metadata/v7@v7.48.2/services/common/keys.go:23 +0x1ca
main.main()
    jfrog.com/metadata/v7@v7.48.2/metadata.go:31 +0x345
goroutine 1 [running]:
github.com/rs/zerolog.(*Logger).Panic.func1({0xc000050900?, 0x0?})
    goroot/pkg/mod/github.com/rs/zerolog@v1.27.0/log.go:359 +0x2d
github.com/rs/zerolog.(*Event).msg(0xc0006281e0, {0xc000050900, 0x2e6})
    goroot/pkg/mod/github.com/rs/zerolog@v1.27.0/event.go:156 +0x2a5
github.com/rs/zerolog.(*Event).Msgf(0xc0006281e0, {0xc00001e140?, 0x2754cb8?}, {0xc000475d38?, 0x2756ec0?, 0xc00001e140?})
    goroot/pkg/mod/github.com/rs/zerolog@v1.27.0/event.go:129 +0x4e
jfrog.com/jfrog-go-commons/v7/pkg/log.(*standardLogger).logMessage(0xc0003440c0, {0x1a78140, 0xc000597b30}, 0x8?, {0xc00001e140, 0x99}, {0xc000475d38, 0x1, 0x1})
    goroot/pkg/mod/jfrog.com/jfrog-go-commons/v7@v7.58.0/pkg/log/standard_logger.go:118 +0x245
jfrog.com/jfrog-go-commons/v7/pkg/log.(*standardLogger).Panicfc(0xc0003440c0, {0x1a78140, 0xc000597b30}, {0xc00001e0a0, 0x96}, {0x0, 0x0, 0x0})
    goroot/pkg/mod/jfrog.com/jfrog-go-commons/v7@v7.58.0/pkg/log/standard_logger.go:96 +0x1d0
jfrog.com/metadata/v7/services/common.MustResolveSecurityKeys({0x1a78140, 0xc000597b30}, {0x1a7c0f0, 0xc0000c9f40}, {0x1a801a8, 0xc0003440c0?})
    jfrog.com/metadata/v7@v7.48.2/services/common/keys.go:23 +0x1ca
main.main()
    jfrog.com/metadata/v7@v7.48.2/metadata.go:31 +0x345
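The "Failed resolving master key" panic means the services could not find /opt/jfrog/artifactory/var/etc/security/master.key. The key can also be supplied explicitly at install time; a sketch (the artifactory.artifactory.masterKey value path is an assumption based on the nested chart structure discussed above, so verify it against your chart version's values):

```shell
# Generate a 64-hex-character master key, the format Artifactory expects
MASTER_KEY=$(openssl rand -hex 32)
echo "Key length: ${#MASTER_KEY}"   # prints "Key length: 64"

# Pass it to the chart at install time (value path assumed, see above):
# helm install artifactory-oss \
#   --set artifactory.artifactory.masterKey="${MASTER_KEY}" \
#   jfrog/artifactory-oss -n artifactory-oss
```

Supplying the key explicitly also keeps it stable across reinstalls, which matters because data encrypted with one master key cannot be read after the key changes.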
BUG REPORT: Artifactory-OSS failing with a "Master key missing" error.
Version of Helm and Kubernetes:
Helm: v3.1.2 and K8s: v1.16.4
Which chart:
2.1.2
What happened: I did a helm install with the command below; I just wanted a simple Artifactory instance without Postgres and ingress settings.
helm install artifactory-oss --set artifactory.nginx.enabled=false --set artifactory.postgresql.enabled=false --set postgresql.enabled=false --set artifactory.artifactory.service.type=NodePort --set artifactory.resources.requests.cpu="500m" --set artifactory.resources.limits.cpu="2" --set artifactory.resources.requests.memory="1Gi" --set artifactory.resources.limits.memory="4Gi" --set artifactory.javaOpts.xms="1g" --set artifactory.javaOpts.xmx="3g" jfrog/artifactory-oss -n tkgdev-artifactory-dev
What you expected to happen: I should see the Artifactory pod in a Running state, and it should be accessible via NodePort.
How to reproduce it (as minimally and precisely as possible): Run the above-mentioned helm install command
Anything else we need to know: Logs from the pod
Output of kubectl get all
Output of describe pod
Output of pvc: I see that the PVC got created and it's attached to the pod as well.
Out of curiosity, I went inside the pod and inspected the folder; there is no key. Why would the OSS version need a key?