Closed: santh529 closed this issue 3 years ago
@santh529 Can you share reproducible steps for this? Please also provide the chart version and your values.yaml file.
@chukka Hi,

$ helm upgrade --install artifactory-ha --set artifactory.masterKey=${MASTER_KEY} --set artifactory.joinKey=${JOIN_KEY} --namespace artifactory-ha jfrog/artifactory-ha
Release "artifactory-ha" has been upgraded. Happy Helming! NAME: artifactory-ha LAST DEPLOYED: Tue May 11 09:19:16 2021 NAMESPACE: artifactory-ha STATUS: deployed REVISION: 2 TEST SUITE: None NOTES: Congratulations. You have just deployed JFrog Artifactory HA!
DATABASE: To extract the database password, run the following:
export DB_PASSWORD=$(kubectl get --namespace artifactory-ha $(kubectl get secret --namespace artifactory-ha -o name | grep postgresql) -o jsonpath="{.data.postgresql-password}" | base64 --decode)
echo ${DB_PASSWORD}
SETUP:
1. Get the Artifactory IP and URL
   NOTE: It may take a few minutes for the LoadBalancer public IP to be available!
   You can watch the status of the service by running 'kubectl get svc -w artifactory-ha-nginx'
   export SERVICE_IP=$(kubectl get svc --namespace artifactory-ha artifactory-ha-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
   echo http://$SERVICE_IP/
2. Open Artifactory in your browser
   Default credential for Artifactory:
   user:     admin
   password: password
3. Add HA licenses to activate Artifactory HA through the Artifactory UI
   NOTE: Each Artifactory node requires a valid license. See https://www.jfrog.com/confluence/display/RTF/HA+Installation+and+Setup for more details.
$ oc get deployments -n artifactory-ha
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/artifactory-ha-nginx   0/1     0            0           43h

$ oc get pods -n artifactory-ha
NAME                                          READY   STATUS             RESTARTS   AGE
pod/artifactory-ha-artifactory-ha-member-0    0/1     Init:1/5           0          16h
pod/artifactory-ha-artifactory-ha-primary-0   0/1     Init:3/5           0          17h
pod/artifactory-ha-postgresql-0               0/1     CrashLoopBackOff   3          16h

$ oc get statefulsets -n artifactory-ha
NAME                                                     READY   AGE
statefulset.apps/artifactory-ha-artifactory-ha-member    0/2     16h
statefulset.apps/artifactory-ha-artifactory-ha-primary   0/1     17h
statefulset.apps/artifactory-ha-postgresql               0/1     16h
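To gather more detail on why the pods are stuck, commands along these lines can help (a sketch assuming the artifactory-ha namespace above; <init-container-name> is a placeholder for whichever init container `describe` reports as running):

$ oc describe pod artifactory-ha-postgresql-0 -n artifactory-ha
$ oc logs artifactory-ha-postgresql-0 -n artifactory-ha --previous
$ oc logs artifactory-ha-artifactory-ha-primary-0 -c <init-container-name> -n artifactory-ha
$ oc get events -n artifactory-ha --sort-by=.metadata.creationTimestamp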
artifactory-ha chart version: 7.18.6

values.yaml file:
global:
  versions: {}
customCertificates:
  enabled: false
initContainerImage: releases-docker.jfrog.io/alpine:3.13.5
installer:
  type:
  platform:
installerInfo: '{"productId": "Helm_artifactory-ha/{{ .Chart.Version }}", "features": [ { "featureId": "Platform/{{ default "kubernetes" .Values.installer.platform }}"}]}'
systemYamlOverride:
  existingSecret:
  dataKey:
rbac:
  create: true
  role:
    rules:
      - apiGroups:
          - ''
        resources:
          - services
          - endpoints
          - pods
        verbs:
          - get
          - watch
          - list
serviceAccount:
  create: true
  name:
  annotations: {}
  automountServiceAccountToken: true
ingress:
  enabled: false
  defaultBackend:
    enabled: true
  hosts: []
  routerPath: /
  artifactoryPath: /artifactory/
  annotations: {}
  labels: {}
  tls: []
  additionalRules: []
  customIngress: |
networkpolicy: []
waitForDatabase: true
postgresql:
  enabled: true
  image:
    registry: releases-docker.jfrog.io
    repository: bitnami/postgresql
    tag: 13.2.0-debian-10-r55
  postgresqlUsername: artifactory
  postgresqlPassword: ""
  postgresqlDatabase: artifactory
  postgresqlExtendedConf:
    listenAddresses: "*"
    maxConnections: "1500"
  persistence:
    enabled: true
    size: 200Gi
  service:
    port: 5432
  primary:
    nodeSelector: {}
    affinity: {}
    tolerations: []
  readReplicas:
    nodeSelector: {}
    affinity: {}
    tolerations: []
  resources: {}
database:
  type:
  driver:
  url:
  user:
  password:
  secrets: {}
logger:
  image:
    registry: releases-docker.jfrog.io
    repository: busybox
    tag: 1.32.1
artifactory:
  name: artifactory-ha
  image:
    registry: releases-docker.jfrog.io
    repository: jfrog/artifactory-pro
    pullPolicy: IfNotPresent
  priorityClass:
    create: false
    value: 1000000000
# name:
## Use an existing priority class
# existingPriorityClass:
deleteDBPropertiesOnStartup: true
database:
  maxOpenConnections: 80
tomcat:
  connector:
    maxThreads: 200
    extraConfig: 'acceptCount="100"'
customCertificates:
  enabled: false
## To enable, set .Values.artifactory.openMetrics.enabled to true
openMetrics:
  enabled: false
haDataDir:
  enabled: false
  path:
haBackupDir:
  enabled: false
  path:
copyOnEveryStartup:
loggers: []
loggersResources: {}
catalinaLoggers: []
catalinaLoggersResources: {}
migration:
  enabled: true
  timeoutSeconds: 3600
# preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && wget -O /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://jcenter.bintray.com/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar"
customInitContainersBegin: |
## Add custom init containers
customInitContainers: |
customSidecarContainers: |
customVolumes: |
customVolumeMounts: |
customPersistentPodVolumeClaim: {}
customPersistentVolumeClaim: {}
customSecrets:
consoleLog: false
binarystore:
  enabled: true
admin:
  ip: "127.0.0.1"
  username: "admin"
  password:
  secret:
  dataKey:
license:
  licenseKey:
  ## If artifactory.license.secret is passed, it will be mounted as
  ## ARTIFACTORY_HOME/etc/artifactory.cluster.license and loaded at run time.
  secret:
  ## The dataKey should be the name of the secret data key created.
  dataKey:
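  ## A minimal sketch of creating such a secret up front (the secret and file names
  ## here are examples, not chart defaults):
  ##   kubectl create secret generic artifactory-cluster-license \
  ##     --from-file=artifactory.cluster.license -n artifactory-ha
  ## then set secret: artifactory-cluster-license and dataKey: artifactory.cluster.license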
configMapName:
configMaps: |
userPluginSecrets:
extraEnvironmentVariables:
systemYaml: |
  shared:
    logging:
      consoleLog:
        enabled: {{ .Values.artifactory.consoleLog }}
    extraJavaOpts: >
      -Dartifactory.access.client.max.connections={{ .Values.access.tomcat.connector.maxThreads }}
      {{- with .Values.artifactory.primary.javaOpts }}
      -Dartifactory.async.corePoolSize={{ .corePoolSize }}
      {{- if .xms }}
      -Xms{{ .xms }}
      {{- end }}
      {{- if .xmx }}
      -Xmx{{ .xmx }}
      {{- end }}
      {{- if .jmx.enabled }}
      -Dcom.sun.management.jmxremote
      -Dcom.sun.management.jmxremote.port={{ .jmx.port }}
      -Dcom.sun.management.jmxremote.rmi.port={{ .jmx.port }}
      -Dcom.sun.management.jmxremote.ssl={{ .jmx.ssl }}
      {{- if .jmx.host }}
      -Djava.rmi.server.hostname={{ tpl .jmx.host $ }}
      {{- else }}
      -Djava.rmi.server.hostname={{ template "artifactory-ha.fullname" $ }}
      {{- end }}
      {{- if .jmx.authenticate }}
      -Dcom.sun.management.jmxremote.authenticate=true
      -Dcom.sun.management.jmxremote.access.file={{ .jmx.accessFile }}
      -Dcom.sun.management.jmxremote.password.file={{ .jmx.passwordFile }}
      {{- else }}
      -Dcom.sun.management.jmxremote.authenticate=false
      {{- end }}
      {{- end }}
      {{- if .other }}
      {{ .other }}
      {{- end }}
      {{- end }}
    database:
      {{- if .Values.postgresql.enabled }}
      type: postgresql
      url: "jdbc:postgresql://{{ .Release.Name }}-postgresql:{{ .Values.postgresql.service.port }}/{{ .Values.postgresql.postgresqlDatabase }}"
      host: ""
      driver: org.postgresql.Driver
      username: "{{ .Values.postgresql.postgresqlUsername }}"
      {{ else }}
      type: "{{ .Values.database.type }}"
      driver: "{{ .Values.database.driver }}"
      {{- end }}
  artifactory:
    {{- if .Values.artifactory.openMetrics }}
    metrics:
      enabled: {{ .Values.artifactory.openMetrics.enabled }}
    {{- end }}
    {{- if or .Values.artifactory.haDataDir.enabled .Values.artifactory.haBackupDir.enabled }}
    node:
      {{- if .Values.artifactory.haDataDir.path }}
      haDataDir: {{ .Values.artifactory.haDataDir.path }}
      {{- end }}
      {{- if .Values.artifactory.haBackupDir.path }}
      haBackupDir: {{ .Values.artifactory.haBackupDir.path }}
      {{- end }}
    {{- end }}
    database:
      maxOpenConnections: {{ .Values.artifactory.database.maxOpenConnections }}
    tomcat:
      connector:
        maxThreads: {{ .Values.artifactory.tomcat.connector.maxThreads }}
        extraConfig: {{ .Values.artifactory.tomcat.connector.extraConfig }}
  frontend:
    session:
      timeMinutes: {{ .Values.frontend.session.timeoutMinutes | quote }}
  access:
    database:
      maxOpenConnections: {{ .Values.access.database.maxOpenConnections }}
    tomcat:
      connector:
        maxThreads: {{ .Values.access.tomcat.connector.maxThreads }}
        extraConfig: {{ .Values.access.tomcat.connector.extraConfig }}
    {{- if .Values.access.database.enabled }}
    type: "{{ .Values.access.database.type }}"
    url: "{{ .Values.access.database.url }}"
    driver: "{{ .Values.access.database.driver }}"
    username: "{{ .Values.access.database.user }}"
    password: "{{ .Values.access.database.password }}"
    {{- end }}
  metadata:
    database:
      maxOpenConnections: {{ .Values.metadata.database.maxOpenConnections }}
  {{- if .Values.artifactory.replicator.enabled }}
  replicator:
    enabled: true
  {{- end }}
externalPort: 8082
internalPort: 8082
externalArtifactoryPort: 8081
internalArtifactoryPort: 8081
uid: 1030
gid: 1030
terminationGracePeriodSeconds: 30
## By default, the security context sets the runAsUser and the fsGroup to the
## artifactory.uid value.
setSecurityContext: true
livenessProbe:
  enabled: true
  path: /router/api/v1/system/health
  initialDelaySeconds: 0
  failureThreshold: 10
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
readinessProbe:
  enabled: true
  path: /router/api/v1/system/health
  initialDelaySeconds: 0
  failureThreshold: 10
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
startupProbe:
  enabled: true
  path: /router/api/v1/system/health
  initialDelaySeconds: 30
  failureThreshold: 60
  periodSeconds: 5
  timeoutSeconds: 5
persistence:
  enabled: true
  local: false
  redundancy: 3
  mountPath: "/var/opt/jfrog/artifactory"
  accessMode: ReadWriteOnce
  size: 200Gi
## Use a custom Secret to be mounted as your binarystore.xml
## NOTE: This will ignore all settings below that make up binarystore.xml
customBinarystoreXmlSecret:
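## A minimal sketch of preparing such a secret (the secret name is an example, not a
## chart default; the data key must be a file named binarystore.xml):
##   kubectl create secret generic custom-binarystore \
##     --from-file=binarystore.xml -n artifactory-ha
# customBinarystoreXmlSecret: custom-binarystore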
maxCacheSize: 50000000000
cacheProviderDir: cache
eventual:
  numberOfThreads: 10
## artifactory data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClassName: "-"
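## A minimal sketch of pinning a class (assumes a storage class named "standard"
## exists in the cluster; it can also be passed on the helm command line with
## --set artifactory.persistence.storageClassName=standard):
# storageClassName: "standard"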
## Set the persistence storage type. This will apply the matching binarystore.xml to Artifactory config
## Supported types are:
## file-system (default)
## nfs
## google-storage
## aws-s3
## aws-s3-v3
## azure-blob
type: file-system
## Use binarystoreXml to provide a custom binarystore.xml
## This can be a template or hardcoded.
binarystoreXml: |
{{- if eq .Values.artifactory.persistence.type "file-system" }}
<!-- File system replication -->
{{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }}
<!-- File Storage - Dynamic for Artifactory files, pre-created for DATA and BACKUP -->
<config version="4">
<chain>
<provider id="cache-fs" type="cache-fs"> <!-- This is a cached filestore -->
<provider id="sharding" type="sharding"> <!-- This is a sharding provider -->
{{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) -}}
<sub-provider id="shard{{ $sharedClaimNumber }}" type="state-aware"/>
{{- end }}
</provider>
</provider>
</chain>
<provider id="cache-fs" type="cache-fs">
<maxCacheSize>{{ .Values.artifactory.persistence.maxCacheSize }}</maxCacheSize>
<cacheProviderDir>{{ .Values.artifactory.persistence.cacheProviderDir }}</cacheProviderDir>
</provider>
<!-- Specify the read and write strategy and redundancy for the sharding binary provider -->
<provider id="sharding" type="sharding">
<readBehavior>roundRobin</readBehavior>
<writeBehavior>percentageFreeSpace</writeBehavior>
<redundancy>2</redundancy>
</provider>
{{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) -}}
<!-- For each sub-provider (mount), specify the filestore location -->
<provider id="shard{{ $sharedClaimNumber }}" type="state-aware">
<fileStoreDir>filestore{{ $sharedClaimNumber }}</fileStoreDir>
</provider>
{{- end }}
</config>
{{- else }}
<config version="2">
<chain>
<provider id="cache-fs" type="cache-fs">
<provider id="sharding-cluster" type="sharding-cluster">
<readBehavior>crossNetworkStrategy</readBehavior>
<writeBehavior>crossNetworkStrategy</writeBehavior>
<redundancy>{{ .Values.artifactory.persistence.redundancy }}</redundancy>
<lenientLimit>2</lenientLimit>
<minSpareUploaderExecutor>2</minSpareUploaderExecutor>
<sub-provider id="state-aware" type="state-aware"/>
<dynamic-provider id="remote" type="remote"/>
<property name="zones" value="local,remote"/>
</provider>
</provider>
</chain>
<provider id="cache-fs" type="cache-fs">
<maxCacheSize>{{ .Values.artifactory.persistence.maxCacheSize }}</maxCacheSize>
<cacheProviderDir>{{ .Values.artifactory.persistence.cacheProviderDir }}</cacheProviderDir>
</provider>
<!-- Shards add local file-system provider configuration -->
<provider id="state-aware" type="state-aware">
<fileStoreDir>shard-fs-1</fileStoreDir>
<zone>local</zone>
</provider>
<!-- Shards dynamic remote provider configuration -->
<provider id="remote" type="remote">
<checkPeriod>30</checkPeriod>
<serviceId>tester-remote1</serviceId>
<timeout>10000</timeout>
<zone>remote</zone>
<property name="header.remote.block" value="true"/>
</provider>
</config>
{{- end }}
{{- end }}
{{- if eq .Values.artifactory.persistence.type "google-storage" }}
<!-- Google storage -->
<config version="2">
<chain>
<provider id="cache-fs" type="cache-fs">
<provider id="sharding-cluster" type="sharding-cluster">
<readBehavior>crossNetworkStrategy</readBehavior>
<writeBehavior>crossNetworkStrategy</writeBehavior>
<redundancy>{{ .Values.artifactory.persistence.redundancy }}</redundancy>
<minSpareUploaderExecutor>2</minSpareUploaderExecutor>
<sub-provider id="eventual-cluster" type="eventual-cluster">
<provider id="retry" type="retry">
{{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }}
<provider id="google-storage-v2" type="google-storage-v2"/>
{{- else }}
<provider id="google-storage" type="google-storage"/>
{{- end }}
</provider>
</sub-provider>
<dynamic-provider id="remote" type="remote"/>
<property name="zones" value="local,remote"/>
</provider>
</provider>
</chain>
<!-- Set max cache-fs size -->
<provider id="cache-fs" type="cache-fs">
<maxCacheSize>{{ .Values.artifactory.persistence.maxCacheSize }}</maxCacheSize>
<cacheProviderDir>{{ .Values.artifactory.persistence.cacheProviderDir }}</cacheProviderDir>
</provider>
<provider id="eventual-cluster" type="eventual-cluster">
<zone>local</zone>
</provider>
<provider id="remote" type="remote">
<checkPeriod>30</checkPeriod>
<timeout>10000</timeout>
<zone>remote</zone>
</provider>
<provider id="file-system" type="file-system">
<fileStoreDir>{{ .Values.artifactory.persistence.mountPath }}/data/filestore</fileStoreDir>
<tempDir>/tmp</tempDir>
</provider>
{{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }}
<provider id="google-storage-v2" type="google-storage-v2">
<useInstanceCredentials>false</useInstanceCredentials>
{{- else }}
<provider id="google-storage" type="google-storage">
<identity>{{ .Values.artifactory.persistence.googleStorage.identity }}</identity>
<credential>{{ .Values.artifactory.persistence.googleStorage.credential }}</credential>
{{- end }}
<providerId>google-cloud-storage</providerId>
<endpoint>{{ .Values.artifactory.persistence.googleStorage.endpoint }}</endpoint>
<httpsOnly>{{ .Values.artifactory.persistence.googleStorage.httpsOnly }}</httpsOnly>
<bucketName>{{ .Values.artifactory.persistence.googleStorage.bucketName }}</bucketName>
<path>{{ .Values.artifactory.persistence.googleStorage.path }}</path>
<bucketExists>{{ .Values.artifactory.persistence.googleStorage.bucketExists }}</bucketExists>
</provider>
</config>
{{- end }}
{{- if eq .Values.artifactory.persistence.type "aws-s3-v3" }}
<!-- AWS S3 V3 -->
<config version="2">
<chain> <!--template="cluster-s3-storage-v3"-->
<provider id="cache-fs-eventual-s3" type="cache-fs">
<provider id="sharding-cluster-eventual-s3" type="sharding-cluster">
<sub-provider id="eventual-cluster-s3" type="eventual-cluster">
<provider id="retry-s3" type="retry">
<provider id="s3-storage-v3" type="s3-storage-v3"/>
</provider>
</sub-provider>
<dynamic-provider id="remote-s3" type="remote"/>
</provider>
</provider>
</chain>
<provider id="sharding-cluster-eventual-s3" type="sharding-cluster">
<readBehavior>crossNetworkStrategy</readBehavior>
<writeBehavior>crossNetworkStrategy</writeBehavior>
<redundancy>{{ .Values.artifactory.persistence.redundancy }}</redundancy>
<property name="zones" value="local,remote"/>
</provider>
<provider id="remote-s3" type="remote">
<zone>remote</zone>
</provider>
<provider id="eventual-cluster-s3" type="eventual-cluster">
<zone>local</zone>
</provider>
<!-- Set max cache-fs size -->
<provider id="cache-fs-eventual-s3" type="cache-fs">
<maxCacheSize>{{ .Values.artifactory.persistence.maxCacheSize }}</maxCacheSize>
<cacheProviderDir>{{ .Values.artifactory.persistence.cacheProviderDir }}</cacheProviderDir>
</provider>
{{- with .Values.artifactory.persistence.awsS3V3 }}
<provider id="s3-storage-v3" type="s3-storage-v3">
<testConnection>{{ .testConnection }}</testConnection>
{{- if .identity }}
<identity>{{ .identity }}</identity>
{{- end }}
{{- if .credential }}
<credential>{{ .credential }}</credential>
{{- end }}
<region>{{ .region }}</region>
<bucketName>{{ .bucketName }}</bucketName>
<path>{{ .path }}</path>
<endpoint>{{ .endpoint }}</endpoint>
{{- with .maxConnections }}
<maxConnections>{{ . }}</maxConnections>
{{- end }}
{{- with .kmsServerSideEncryptionKeyId }}
<kmsServerSideEncryptionKeyId>{{ . }}</kmsServerSideEncryptionKeyId>
{{- end }}
{{- with .kmsKeyRegion }}
<kmsKeyRegion>{{ . }}</kmsKeyRegion>
{{- end }}
{{- with .kmsCryptoMode }}
<kmsCryptoMode>{{ . }}</kmsCryptoMode>
{{- end }}
{{- if .useInstanceCredentials }}
<useInstanceCredentials>true</useInstanceCredentials>
{{- else }}
<useInstanceCredentials>false</useInstanceCredentials>
{{- end }}
<usePresigning>{{ .usePresigning }}</usePresigning>
<signatureExpirySeconds>{{ .signatureExpirySeconds }}</signatureExpirySeconds>
{{- with .cloudFrontDomainName }}
<cloudFrontDomainName>{{ . }}</cloudFrontDomainName>
{{- end }}
{{- with .cloudFrontKeyPairId }}
<cloudFrontKeyPairId>{{ .cloudFrontKeyPairId }}</cloudFrontKeyPairId>
{{- end }}
{{- with .cloudFrontPrivateKey }}
<cloudFrontPrivateKey>{{ . }}</cloudFrontPrivateKey>
{{- end }}
{{- with .enableSignedUrlRedirect }}
<enableSignedUrlRedirect>{{ . }}</enableSignedUrlRedirect>
{{- end }}
{{- with .enablePathStyleAccess }}
<enablePathStyleAccess>{{ . }}</enablePathStyleAccess>
{{- end }}
</provider>
{{- end }}
</config>
{{- end }}
{{- if eq .Values.artifactory.persistence.type "aws-s3" }}
<!-- AWS S3 -->
<config version="2">
<chain> <!--template="cluster-s3"-->
<provider id="cache-fs" type="cache-fs">
<provider id="sharding-cluster" type="sharding-cluster">
<sub-provider id="eventual-cluster" type="eventual-cluster">
<provider id="retry-s3" type="retry">
<provider id="s3" type="s3"/>
</provider>
</sub-provider>
<dynamic-provider id="remote" type="remote"/>
</provider>
</provider>
</chain>
<!-- Set max cache-fs size -->
<provider id="cache-fs" type="cache-fs">
<maxCacheSize>{{ .Values.artifactory.persistence.maxCacheSize }}</maxCacheSize>
<cacheProviderDir>{{ .Values.artifactory.persistence.cacheProviderDir }}</cacheProviderDir>
</provider>
<provider id="eventual-cluster" type="eventual-cluster">
<zone>local</zone>
</provider>
<provider id="remote" type="remote">
<checkPeriod>30</checkPeriod>
<timeout>10000</timeout>
<zone>remote</zone>
</provider>
<provider id="sharding-cluster" type="sharding-cluster">
<readBehavior>crossNetworkStrategy</readBehavior>
<writeBehavior>crossNetworkStrategy</writeBehavior>
<redundancy>{{ .Values.artifactory.persistence.redundancy }}</redundancy>
<property name="zones" value="local,remote"/>
</provider>
<provider id="s3" type="s3">
<endpoint>{{ .Values.artifactory.persistence.awsS3.endpoint }}</endpoint>
{{- if .Values.artifactory.persistence.awsS3.roleName }}
<roleName>{{ .Values.artifactory.persistence.awsS3.roleName }}</roleName>
<refreshCredentials>true</refreshCredentials>
{{- else }}
<refreshCredentials>{{ .Values.artifactory.persistence.awsS3.refreshCredentials }}</refreshCredentials>
{{- end }}
<s3AwsVersion>{{ .Values.artifactory.persistence.awsS3.s3AwsVersion }}</s3AwsVersion>
<testConnection>{{ .Values.artifactory.persistence.awsS3.testConnection }}</testConnection>
<httpsOnly>{{ .Values.artifactory.persistence.awsS3.httpsOnly }}</httpsOnly>
<region>{{ .Values.artifactory.persistence.awsS3.region }}</region>
<bucketName>{{ .Values.artifactory.persistence.awsS3.bucketName }}</bucketName>
{{- if .Values.artifactory.persistence.awsS3.identity }}
<identity>{{ .Values.artifactory.persistence.awsS3.identity }}</identity>
{{- end }}
{{- if .Values.artifactory.persistence.awsS3.credential }}
<credential>{{ .Values.artifactory.persistence.awsS3.credential }}</credential>
{{- end }}
<path>{{ .Values.artifactory.persistence.awsS3.path }}</path>
{{- range $key, $value := .Values.artifactory.persistence.awsS3.properties }}
<property name="{{ $key }}" value="{{ $value }}"/>
{{- end }}
</provider>
</config>
{{- end }}
{{- if eq .Values.artifactory.persistence.type "azure-blob" }}
<!-- Azure Blob Storage -->
<config version="2">
<chain> <!--template="cluster-azure-blob-storage"-->
<provider id="cache-fs" type="cache-fs">
<provider id="sharding-cluster" type="sharding-cluster">
<sub-provider id="eventual-cluster" type="eventual-cluster">
<provider id="retry-azure-blob-storage" type="retry">
<provider id="azure-blob-storage" type="azure-blob-storage"/>
</provider>
</sub-provider>
<dynamic-provider id="remote" type="remote"/>
</provider>
</provider>
</chain>
<!-- Set max cache-fs size -->
<provider id="cache-fs" type="cache-fs">
<maxCacheSize>{{ .Values.artifactory.persistence.maxCacheSize }}</maxCacheSize>
<cacheProviderDir>{{ .Values.artifactory.persistence.cacheProviderDir }}</cacheProviderDir>
</provider>
<!-- cluster eventual Azure Blob Storage Service default chain -->
<provider id="sharding-cluster" type="sharding-cluster">
<readBehavior>crossNetworkStrategy</readBehavior>
<writeBehavior>crossNetworkStrategy</writeBehavior>
<redundancy>2</redundancy>
<lenientLimit>1</lenientLimit>
<property name="zones" value="local,remote"/>
</provider>
<provider id="remote" type="remote">
<zone>remote</zone>
</provider>
<provider id="eventual-cluster" type="eventual-cluster">
<zone>local</zone>
</provider>
<!--cluster eventual template-->
<provider id="azure-blob-storage" type="azure-blob-storage">
<accountName>{{ .Values.artifactory.persistence.azureBlob.accountName }}</accountName>
<accountKey>{{ .Values.artifactory.persistence.azureBlob.accountKey }}</accountKey>
<endpoint>{{ .Values.artifactory.persistence.azureBlob.endpoint }}</endpoint>
<containerName>{{ .Values.artifactory.persistence.azureBlob.containerName }}</containerName>
<multiPartLimit>{{ .Values.artifactory.persistence.azureBlob.multiPartLimit }}</multiPartLimit>
<multipartElementSize>{{ .Values.artifactory.persistence.azureBlob.multipartElementSize }}</multipartElementSize>
<testConnection>{{ .Values.artifactory.persistence.azureBlob.testConnection }}</testConnection>
</provider>
</config>
{{- end }}
## For artifactory.persistence.type file-system
fileSystem:
## You may also use existing shared claims for the data and backup storage. This allows storage (NAS for example) to be used for Data and Backup dirs which are safe to share across multiple artifactory nodes.
## You may specify numberOfExistingClaims to indicate how many of these existing shared claims to mount. (Default = 1)
## Create PVCs with ReadWriteMany that match the naming conventions:
## {{ template "artifactory-ha.fullname" . }}-data-pvc-<claim-ordinal>
## {{ template "artifactory-ha.fullname" . }}-backup-pvc
## Example (using numberOfExistingClaims: 2)
## myexample-data-pvc-0
## myexample-data-pvc-1
## myexample-backup-pvc
## Note: While you need two PVCs fronting two PVs, multiple PVs can be attached to the same storage in many cases, allowing you to share an underlying drive.
## Need to have the following set
existingSharedClaim:
enabled: false
numberOfExistingClaims: 1
## Should be a child directory of {{ .Values.artifactory.persistence.mountPath }}
dataDir: "{{ .Values.artifactory.persistence.mountPath }}/artifactory-data"
backupDir: "/var/opt/jfrog/artifactory-backup"
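## A minimal sketch of one matching pre-created claim (names are hypothetical and
## assume a release called "myexample" with numberOfExistingClaims: 1):
# apiVersion: v1
# kind: PersistentVolumeClaim
# metadata:
#   name: myexample-data-pvc-0
# spec:
#   accessModes:
#     - ReadWriteMany
#   resources:
#     requests:
#       storage: 200Gi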
## For artifactory.persistence.type nfs
## If using NFS as the shared storage, you must have a running NFS server that is accessible by your Kubernetes
## cluster nodes.
## Need to have the following set
nfs:
# Must pass the actual IP of the NFS server with '--set artifactory.persistence.nfs.ip=${NFS_IP}'
ip:
haDataMount: "/data"
haBackupMount: "/backup"
dataDir: "/var/opt/jfrog/artifactory-ha"
backupDir: "/var/opt/jfrog/artifactory-backup"
capacity: 200Gi
mountOptions: []
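## A minimal sketch of enabling NFS from the helm command line (NFS_IP is a
## placeholder for an NFS server reachable from the cluster nodes):
##   helm upgrade --install artifactory-ha jfrog/artifactory-ha \
##     --namespace artifactory-ha \
##     --set artifactory.persistence.type=nfs \
##     --set artifactory.persistence.nfs.ip=${NFS_IP}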
## For artifactory.persistence.type google-storage
googleStorage:
## When using GCP buckets as your binary store (Available with enterprise license only)
gcpServiceAccount:
enabled: false
## Use either an existing secret prepared in advance or put the config (replace the content) in the values
## ref: https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/README.md#google-storage
# customSecretName:
# config: |
# {
# "type": "service_account",
# "project_id": "<project_id>",
# "private_key_id": "?????",
# "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
# "client_email": "???@j<project_id>.iam.gserviceaccount.com",
# "client_id": "???????",
# "auth_uri": "https://accounts.google.com/o/oauth2/auth",
# "token_uri": "https://oauth2.googleapis.com/token",
# "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
# "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
# }
endpoint: commondatastorage.googleapis.com
httpsOnly: false
# Set a unique bucket name
bucketName: "artifactory-ha-gcp"
identity:
credential:
path: "artifactory-ha/filestore"
bucketExists: false
## For artifactory.persistence.type aws-s3-v3
awsS3V3:
testConnection: false
identity:
credential:
region:
bucketName: artifactory-aws
path: artifactory/filestore
endpoint:
maxConnections: 50
kmsServerSideEncryptionKeyId:
kmsKeyRegion:
kmsCryptoMode:
useInstanceCredentials: true
usePresigning: false
signatureExpirySeconds: 300
cloudFrontDomainName:
cloudFrontKeyPairId:
cloudFrontPrivateKey:
enableSignedUrlRedirect: false
enablePathStyleAccess: false
## For artifactory.persistence.type aws-s3
## IMPORTANT: Make sure S3 `endpoint` and `region` match! See https://docs.aws.amazon.com/general/latest/gr/rande.html
awsS3:
# Set a unique bucket name
bucketName: "artifactory-ha-aws"
endpoint:
region:
roleName:
identity:
credential:
path: "artifactory-ha/filestore"
refreshCredentials: true
httpsOnly: true
testConnection: false
s3AwsVersion: "AWS4-HMAC-SHA256"
## Additional properties to set on the s3 provider
properties: {}
# httpclient.max-connections: 100
## For artifactory.persistence.type azure-blob
azureBlob:
accountName:
accountKey:
endpoint:
containerName:
multiPartLimit: 100000000
multipartElementSize: 50000000
testConnection: false
service:
  name: artifactory
  type: ClusterIP
## Set this to a list of IP CIDR ranges
## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32']
## or pass from helm command line
## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}'
loadBalancerSourceRanges: []
annotations: {}
## Which nodes in the cluster should be in the external load balancer pool (have external traffic routed to them)
## Supported pool values
## members
## all
pool: members
javaOpts: {}
replicator:
  enabled: false
  ingress:
    name:
    hosts: []
    annotations: {}
# nginx.ingress.kubernetes.io/proxy-buffering: "off"
# nginx.ingress.kubernetes.io/configuration-snippet: |
# chunked_transfer_encoding on;
tls: []
# Secrets must be manually created in the namespace.
# - hosts:
# - artifactory.domain.example
# secretName: chart-example-tls-secret
## When replicator is enabled and want to use tracker feature, trackerIngress.enabled flag should be set to true
## Please refer - https://www.jfrog.com/confluence/display/JFROG/JFrog+Peer-to-Peer+%28P2P%29+Downloads
trackerIngress:
enabled: false
name:
hosts: []
annotations: {}
# kubernetes.io/ingress.class: nginx
# nginx.ingress.kubernetes.io/proxy-buffering: "off"
# nginx.ingress.kubernetes.io/configuration-snippet: |
# chunked_transfer_encoding on;
tls: []
# Secrets must be manually created in the namespace.
# - hosts:
# - artifactory.domain.example
# secretName: chart-example-tls-secret
ssh:
  enabled: false
  internalPort: 1339
  externalPort: 1339
annotations: {}
primary:
  name: artifactory-ha-primary
# preStartCommand:
labels: {}
persistence:
## Set existingClaim to true or false
## If true, you must prepare a PVC with the name e.g `volume-myrelease-artifactory-ha-primary-0`
existingClaim: false
replicaCount: 1
# minAvailable: 1
updateStrategy:
type: RollingUpdate
## Resources for the primary node
resources: {}
# requests:
# memory: "1Gi"
# cpu: "500m"
# limits:
# memory: "2Gi"
# cpu: "1"
## The following Java options are passed to the java process running Artifactory primary node.
## You should set them according to the resources set above
javaOpts:
# xms: "1g"
# xmx: "2g"
corePoolSize: 16
jmx:
enabled: false
port: 9010
host:
ssl: false
# When authenticate is true, accessFile and passwordFile are required
authenticate: false
accessFile:
passwordFile:
# other: ""
nodeSelector: {}
tolerations: []
affinity: {}
## Only used if "affinity" is empty
podAntiAffinity:
## Valid values are "soft" or "hard"; any other value indicates no anti-affinity
type: ""
topologyKey: "kubernetes.io/hostname"
node:
  name: artifactory-ha-member
# preStartCommand:
labels: {}
persistence:
## Set existingClaim to true or false
## If true, you must prepare a PVC with the name e.g `volume-myrelease-artifactory-ha-member-0`
existingClaim: false
replicaCount: 2
updateStrategy:
type: RollingUpdate
minAvailable: 1
## Resources for the member nodes
resources: {}
# requests:
# memory: "1Gi"
# cpu: "500m"
# limits:
# memory: "2Gi"
# cpu: "1"
## The following Java options are passed to the java process running Artifactory member nodes.
## You should set them according to the resources set above
javaOpts:
# xms: "1g"
# xmx: "2g"
corePoolSize: 16
jmx:
enabled: false
port: 9010
host:
ssl: false
# When authenticate is true, accessFile and passwordFile are required
authenticate: false
accessFile:
passwordFile:
# other: ""
# xms: "1g"
# xmx: "2g"
# other: ""
nodeSelector: {}
## Wait for Artifactory primary
waitForPrimaryStartup:
enabled: true
## Setting time will override the built in test and will just wait the set time
time:
tolerations: []
## Complete specification of the "affinity" of the member nodes; if this is non-empty,
## "podAntiAffinity" values are not used.
affinity: {}
## Only used if "affinity" is empty
podAntiAffinity:
## Valid values are "soft" or "hard"; any other value indicates no anti-affinity
type: ""
topologyKey: "kubernetes.io/hostname"
frontend:
  session:
    timeoutMinutes: '30'
access:
  accessConfig:
    security:
      tls: false
  ## For TLS, create a TLS secret first:
  ##   kubectl create secret tls <secret-name> --cert=ca.crt --key=ca.private.key
  database:
    maxOpenConnections: 80
  tomcat:
    connector:
      maxThreads: 50
      extraConfig: 'acceptCount="100"'
metadata:
  database:
    maxOpenConnections: 80
initContainers:
  resources: {}
nginx:
  enabled: true
  kind: Deployment
  name: nginx
  labels: {}
  replicaCount: 1
  minAvailable: 0
  uid: 104
  gid: 107
  securityContext: {}
  image:
    registry: releases-docker.jfrog.io
    repository: jfrog/nginx-artifactory-pro
    pullPolicy: IfNotPresent
priorityClassName:
loggers: []
loggersResources: {}
logs:
  stderr: false
  level: warn
mainConf: |
worker_processes 4;
{{ if .Values.nginx.logs.stderr }}
error_log stderr {{ .Values.nginx.logs.level }};
{{- else -}}
error_log {{ .Values.nginx.persistence.mountPath }}/logs/error.log {{ .Values.nginx.logs.level }};
{{- end }}
pid /tmp/nginx.pid;
{{- if .Values.artifactory.ssh.enabled }}
## SSH Server Configuration
stream {
server {
listen {{ .Values.nginx.ssh.internalPort }};
proxy_pass {{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.ssh.externalPort }};
}
}
{{- end }}
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
variables_hash_max_size 1024;
variables_hash_bucket_size 64;
server_names_hash_max_size 4096;
server_names_hash_bucket_size 128;
types_hash_max_size 2048;
types_hash_bucket_size 64;
proxy_read_timeout 2400s;
client_header_timeout 2400s;
client_body_timeout 2400s;
proxy_connect_timeout 75s;
proxy_send_timeout 2400s;
proxy_buffer_size 128k;
proxy_buffers 40 128k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 250m;
proxy_http_version 1.1;
client_body_buffer_size 128k;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format timing 'ip = $remote_addr '
'user = \"$remote_user\" '
'local_time = \"$time_local\" '
'host = $host '
'request = \"$request\" '
'status = $status '
'bytes = $body_bytes_sent '
'upstream = \"$upstream_addr\" '
'upstream_time = $upstream_response_time '
'request_time = $request_time '
'referer = \"$http_referer\" '
'UA = \"$http_user_agent\"';
access_log {{ .Values.nginx.persistence.mountPath }}/logs/access.log timing;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
artifactoryConf: |
  {{- if .Values.nginx.https.enabled }}
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
  ssl_certificate {{ .Values.nginx.persistence.mountPath }}/ssl/tls.crt;
  ssl_certificate_key {{ .Values.nginx.persistence.mountPath }}/ssl/tls.key;
  ssl_session_cache shared:SSL:1m;
  ssl_prefer_server_ciphers on;
  {{- end }}
server {
{{- if .Values.nginx.internalPortHttps }}
listen {{ .Values.nginx.internalPortHttps }} ssl;
{{- else -}}
{{- if .Values.nginx.https.enabled }}
listen {{ .Values.nginx.https.internalPort }} ssl;
{{- end }}
{{- end }}
{{- if .Values.nginx.internalPortHttp }}
listen {{ .Values.nginx.internalPortHttp }};
{{- else -}}
{{- if .Values.nginx.http.enabled }}
listen {{ .Values.nginx.http.internalPort }};
{{- end }}
{{- end }}
server_name ~(?<repo>.+)\.{{ include "artifactory-ha.fullname" . }} {{ include "artifactory-ha.fullname" . }}
{{- range .Values.ingress.hosts -}}
{{- if contains "." . -}}
{{ "" | indent 0 }} ~(?<repo>.+)\.{{ . }}
{{- end -}}
{{- end -}};
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
## access_log /var/log/nginx/artifactory-access.log timing;
## error_log /var/log/nginx/artifactory-error.log;
rewrite ^/artifactory/?$ / redirect;
if ( $repo != "" ) {
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break;
}
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalPort }}/;
{{- if .Values.nginx.service.ssloffload}}
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host;
{{- else }}
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
{{- end }}
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header Strict-Transport-Security always;
location /artifactory/ {
if ( $request_uri ~ ^/artifactory/(.*)$ ) {
proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1;
}
proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/;
}
}
}
service:
type: LoadBalancer
ssloffload: false
## For supporting whitelist on the Nginx LoadBalancer service
## Set this to a list of IP CIDR ranges
## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32']
## or pass from helm command line
## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}'
loadBalancerSourceRanges: []
## Provide static ip address
loadBalancerIP:
## There are two available options: “Cluster” (default) and “Local”.
externalTrafficPolicy: Cluster
labels: {}
# label-key: label-value
http:
  enabled: true
  externalPort: 80
  internalPort: 80
https:
  enabled: true
  externalPort: 443
  internalPort: 443
ssh:
  internalPort: 1339
  externalPort: 1339
livenessProbe:
  enabled: true
  path: /router/api/v1/system/health
  initialDelaySeconds: 0
  failureThreshold: 10
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
readinessProbe:
  enabled: true
  path: /router/api/v1/system/health
  initialDelaySeconds: 0
  failureThreshold: 10
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
startupProbe:
  enabled: true
  path: /router/api/v1/system/health
  initialDelaySeconds: 30
  failureThreshold: 60
  periodSeconds: 5
  timeoutSeconds: 5
customConfigMap:
customArtifactoryConfigMap:
persistence:
  mountPath: "/var/opt/jfrog/nginx"
  enabled: false
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
# existingClaim:
accessMode: ReadWriteOnce
size: 5Gi
## nginx data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClassName: "-"
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
filebeat:
  enabled: false
  name: artifactory-filebeat
  image:
    repository: "docker.elastic.co/beats/filebeat"
    version: 7.9.2
  logstashUrl: "logstash:5044"
terminationGracePeriod: 10
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        curl --fail 127.0.0.1:5066
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
readinessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        filebeat test output
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
resources: {}
filebeatYml: |
  logging.level: info
  path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat
  name: artifactory-filebeat
  queue.spool: ~
  filebeat.inputs:
additionalResources: |
hostAliases: []
Hi @santh529, could you please edit the comment and format your values.yaml as code so it is more readable? Another option would be to remove all unused sections and comment lines to make it shorter.
Is this a request for help?: Yes, a request for help

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug

Version of Helm and Kubernetes:
Helm: version.BuildInfo{Version:"v3.5.3", GitCommit:"041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622", GitTreeState:"dirty", GoVersion:"go1.15.8"}
Kubernetes:
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-10-10T16:38:01Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0+a5a0987", GitCommit:"a5a0987268903082e32a9217c88d60bf59c0ccfe", GitTreeState:"clean", BuildDate:"2021-03-25T22:15:23Z", GoVersion:"go1.15.7", Compiler:"gc", Platform:"linux/amd64"}

Which chart: artifactory-ha
What happened: The chart deployed, but the deployments and statefulsets are not running.
What you expected to happen: All resources should come up and run.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know: