@frankfong0208 can you please share the following to investigate further:
- the kubectl command used to create the secret artifactory-master-key
Hi @chukka, please find the requested information:

--Share kubectl command to create this secret artifactory-master-key
It is a helm chart that creates the secret for the master key.

values.yaml:
artifactory:
  joinKey: DUMMY
  masterKey: DUMMY
--templates/artifactory-master-key-secret.yaml
{{- if .Values.artifactory.masterKey }}
apiVersion: v1
data:
  master-key: {{ tpl .Values.artifactory.masterKey . | b64enc | quote }}
kind: Secret
metadata:
  name: artifactory-master-key
  namespace: jfrog-platform
  labels:
    app: {{ include "jfrog-secrets.name" . }}
    chart: {{ include "jfrog-secrets.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
type: Opaque
{{- end }}
command:
helm upgrade --install jfrog-secrets --namespace jfrog-platform ./jfrog-secrets --set artifactory.joinKey=744189d0dd46295f8ca810ff90cf560a,artifactory.masterKey=d0d7dc09a7091f68b59f51fa2e7b0c336ab7f38d8d23fd18aed38bb6568164a7
(The key values might be different, as I kept changing them.)
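A quick local sanity check of how that `--set` value ends up in the Secret (a minimal sketch using the dummy key above; `key`, `encoded`, and `decoded` are illustrative names, not chart values). The Secret stores exactly the bytes `b64enc` is given, so any stray whitespace around the `--set` value would become part of the stored key:

```shell
# Round-trip the master key the same way the template does:
# base64-encode on the way into data.master-key, decode on the way out.
key='d0d7dc09a7091f68b59f51fa2e7b0c336ab7f38d8d23fd18aed38bb6568164a7'
encoded=$(printf '%s' "$key" | base64 | tr -d '\n')   # what lands in the Secret
decoded=$(printf '%s' "$encoded" | base64 -d)         # what the pod reads back
[ "$decoded" = "$key" ] && echo "round-trip ok"
```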
--customvalues.yaml
global:
  imageRegistry: releases-docker.jfrog.io
  ## Set this flag to false, if you are using artifactory chart.
  artifactoryHaEnabled: true
  jfrogUrl: '{{ include "jfrog-platform.jfrogUrl" . }}'
  jfrogUrlUI: '{{ include "jfrog-platform.jfrogUrl" . }}'
  database:
    host: "jfrog-rds.dddd.ap-southeast-2.rds.amazonaws.com" # RDS endpoint
    port: 5432
    sslMode: disable
    secrets:
      adminUsername:
        name: "jp-database-creds"
        key: "db-admin-user"
      adminPassword:
        name: "jp-database-creds"
        key: "db-admin-password"
    initContainerSetupDBImage: releases-docker.jfrog.io/postgres:13.2-alpine
    initContainerImagePullPolicy: Always
    initDBCreation: true
  customCertificates:
    enabled: false
  customInitContainersBegin: |
    {{ template "initdb" . }}
  customVolumes: |
    {{ template "initdb-volume" . }}
# This Postgresql is used by all products; set postgresql.enabled: false when you want to use external postgresql for all products
postgresql:
  enabled: false
  image:
    repository: bitnami/postgresql
    tag: 13.2.0-debian-10-r55
  postgresqlUsername: postgres
  postgresqlPassword: postgres
  postgresqlExtendedConf:
    max_connections: 1000
    max_wal_size: 1000MB
  persistence:
    size: 500Gi
## This Rabbitmq is used by Xray and Pipelines only; set rabbitmq.enabled: false when Xray or Pipelines is not enabled
rabbitmq:
  enabled: true
  image:
    repository: bitnami/rabbitmq
    tag: 3.8.14-debian-10-r32
  auth:
    username: admin
    password: password
    erlangCookie: secretcookie
  maxAvailableSchedulers: null
  onlineSchedulers: null
  persistence:
    size: 50Gi
  extraEnvVars:
    - name: RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS
      value: "+S 2:2 +sbwt none +sbwtdcpu none +sbwtdio none"
  extraSecrets:
    load-definition:
      load_definition.json: |
        {
          "vhosts": [
            {
              "name": "xray"
            }
          ],
          "permissions": [
            {
              "user": "admin",
              "vhost": "xray",
              "configure": ".*",
              "write": ".*",
              "read": ".*"
            }
          ],
          "policies": [
            {
              "name": "ha-all",
              "apply-to": "all",
              "pattern": ".*",
              "vhost": "xray",
              "definition": {
                "ha-mode": "all",
                "ha-sync-mode": "automatic"
              }
            }
          ]
        }
  loadDefinition:
    enabled: true
    existingSecret: load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json
## This Redis is used by pipelines only; set redis.enabled: false when pipelines is not enabled
redis:
  enabled: false
  image:
    repository: bitnami/redis
    tag: 6.2.1-debian-10-r9
  cluster:
    enabled: false
  usePassword: false
artifactory:
  ## Note: Set artifactoryHaEnabled flag (global section) to false, if you are using artifactory chart.
  enabled: false
  postgresql:
    enabled: false
  waitForDatabase: false
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: '{{ include "database.url" . }}'
    user: artifactory
    password: artifactory
  artifactory:
    migration:
      enabled: false
    persistence:
      size: 200Gi
    # license:
    #   secret:
    #   dataKey:
artifactory-ha:
  ## Note: Set artifactoryHaEnabled flag (global section) to true, if you are using artifactory-ha chart.
  enabled: true
  postgresql:
    enabled: false
  waitForDatabase: false
  database:
    type: postgresql
    driver: org.postgresql.Driver
    secrets:
      user:
        name: "jp-database-creds"
        key: "db-admin-user"
      password:
        name: "jp-database-creds"
        key: "db-admin-password"
      url:
        name: "jp-database-creds"
        key: "db-url"
  artifactory:
    ## For an artifactory pro license (supports single node only), set node.replicaCount to 0
    # node:
    #   replicaCount: 0
    joinKeySecretName: artifactory-join-key
    masterKeySecretName: artifactory-master-key
    migration:
      enabled: false
    persistence:
      annotations: eks\.amazonaws\.com/role-arn=arn:aws:iam::111111111:role/shared-artifactory-s3-role
      # size: 200Gi
      type: aws-s3-v3
      awsS3V3:
        region: ap-southeast-2
        bucketName: shared-artifactory-bucket
xray:
  enabled: true
  unifiedUpgradeAllowed: true
  postgresql:
    enabled: false
  database:
    url: '{{ include "database.url" . }}'
    user: xray
    password: xray
  common:
    persistence:
      size: 200Gi
    rabbitmq:
      connectionConfigFromEnvironment: false
  rabbitmq:
    enabled: false
    external:
      username: admin
      password: password
      url: "amqp://{{ .Release.Name }}-rabbitmq:5672/xray"
      erlangCookie: secretcookie
distribution:
  enabled: false
  unifiedUpgradeAllowed: true
  postgresql:
    enabled: false
  database:
    url: '{{ include "database.url" . }}'
    user: distribution
    password: distribution
mission-control:
  enabled: true
  replicaCount: 3
  unifiedUpgradeAllowed: true
  postgresql:
    enabled: false
  database:
    url: '{{ include "database.url" . }}'
    user: mc
    password: mc
    name: '{{ include "database.name" . }}'
pipelines:
  enabled: false
  unifiedUpgradeAllowed: true
  postgresql:
    enabled: false
  global:
    postgresql:
      host: "{{ .Release.Name }}-postgresql"
      port: 5432
      database: "pipelinesdb"
      user: "apiuser"
      password: "pipeline"
  pipelines:
    api:
      externalUrl: http://pipelines.test.com
    www:
      externalUrl: http://pipelines.test.com
    msg:
      uiUserPassword: password
  redis:
    enabled: false
  rabbitmq:
    enabled: false
    internal_ip: "{{ .Release.Name }}-rabbitmq"
    msg_hostname: "{{ .Release.Name }}-rabbitmq"
    port: 5672
    manager_port: 15672
    ms_username: admin
    ms_password: password
    cp_username: admin
    cp_password: password
    root_vhost_exchange_name: rootvhost
    erlang_cookie: secretcookie
    build_vhost_name: pipelines
    root_vhost_name: pipelinesRoot
    protocol: amqp
@frankfong0208 this doesn't seem like a master key issue, as you mentioned:
'kubectl exec jfrog-platform-artifactory-ha-primary-0 -- cat /var/opt/jfrog/artifactory/etc/security/master.key', I could get the correct custom master key
Can you send the full logs of the artifactory pod?
Hi @rahulsadanandan, I could get the correct custom master key in the primary artifactory pod as well, using exactly the same command as yours; that was never the issue. The issue is that the pod is stuck with a failing startup probe even though a custom master key was passed in. I googled some of the errors, and in a related issue someone (probably from your end) suspected the master key was causing the trouble... I can't find that thread now, unfortunately.
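For reference, this is how I confirmed the two keys match byte for byte (a sketch assuming kubectl access; the secret name, namespace, and pod name come from this thread, and the /tmp paths are arbitrary):

```shell
# Compare the key stored in the Secret with the key the pod actually loaded.
kubectl get secret artifactory-master-key -n jfrog-platform \
  -o jsonpath='{.data.master-key}' | base64 -d > /tmp/secret-master.key
kubectl exec -n jfrog-platform jfrog-platform-artifactory-ha-primary-0 -- \
  cat /var/opt/jfrog/artifactory/etc/security/master.key > /tmp/pod-master.key
diff -q /tmp/secret-master.key /tmp/pod-master.key && echo "keys match"
```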
Please find the full log of the primary artifactory ha pod below:
Preparing to run Artifactory in Docker
Running as uid=1030(artifactory) gid=1030(artifactory) groups=1030(artifactory)
Dockerfile for this image can found inside the container. To view the Dockerfile: 'cat /docker/artifactory-pro/Dockerfile.artifactory'.
SKIP_WAIT_FOR_EXTERNAL_DB is set to true. Skipping wait for external database to come up
Copying Artifactory bootstrap files
2021-07-14T22:43:32.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .artifactory.tomcat.connector.maxThreads (200) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:32.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .access.tomcat.connector.maxThreads (50) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:32.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .artifactory.tomcat.connector.extraConfig (acceptCount="100") from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:32.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .access.tomcat.connector.extraConfig (acceptCount="100") from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:32.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .shared.extraJavaOpts (__sensitive_key_hidden___) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:32.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .shared.extraJavaOpts (__sensitive_key_hidden___) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .shared.database.type (postgresql) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved JF_SHARED_DATABASE_URL (jdbc:postgresql://jfrog-rds.ckpg8sk01oug.ap-southeast-2.rds.amazonaws.com:5432/artifactory_ha) from environment variable
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved JF_SHARED_DATABASE_PASSWORD (__sensitive_key_hidden___) from environment variable
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .artifactory.database.maxOpenConnections (80) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .access.database.maxOpenConnections (80) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:1512 ] [main] - Checking open files and processes limits
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:1515 ] [main] - Current max open files is 1048576
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:1526 ] [main] - Current max open processes is unlimited
yaml validation succeeded
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:2874 ] [main] - System.yaml validation succeeded
Database connection check failed Could not determine database type
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:1593 ] [main] - Testing directory /opt/jfrog/artifactory/var has read/write permissions for user id 1030
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:1608 ] [main] - Permissions for /opt/jfrog/artifactory/var are good
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:3374 ] [main] - Setting JF_SHARED_NODE_ID to jfrog-platform-artifactory-ha-primary-0
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:3374 ] [main] - Setting JF_SHARED_NODE_IP to 10.100.85.219
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [installerCommon.sh:3374 ] [main] - Setting JF_SHARED_NODE_NAME to jfrog-platform-artifactory-ha-primary-0
2021-07-14T22:43:33.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved JF_SHARED_NODE_HAENABLED (true) from environment variable
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .shared.extraJavaOpts (__sensitive_key_hidden___) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:148 ] [main] - Saving /opt/jfrog/artifactory/app/artifactory/tomcat/conf/server.xml as /opt/jfrog/artifactory/app/artifactory/tomcat/conf/server.xml.orig
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:156 ] [main] - Using Tomcat template to generate : /opt/jfrog/artifactory/app/artifactory/tomcat/conf/server.xml
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${artifactory.port||8081} to default value : 8081
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .artifactory.tomcat.connector.sendReasonPhrase (false) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .artifactory.tomcat.connector.maxThreads (200) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${artifactory.tomcat.connector.maxThreads||200} to default value : 200
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .artifactory.tomcat.connector.extraConfig (acceptCount="100") from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.port||8091} to default value : 8091
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.maxThreads||5} to default value : 5
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.acceptCount||5} to default value : 5
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${access.http.port||8040} to default value : 8040
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .access.tomcat.connector.sendReasonPhrase (false) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${access.tomcat.connector.sendReasonPhrase||false} to default value : false
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .access.tomcat.connector.maxThreads (50) from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${access.tomcat.connector.maxThreads||50} to default value : 50
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved .access.tomcat.connector.extraConfig (acceptCount="100") from /opt/jfrog/artifactory/var/etc/system.yaml
2021-07-14T22:43:34.000Z [shell] [INFO ] [] [systemYamlHelper.sh:520 ] [main] - Resolved JF_PRODUCT_HOME (/opt/jfrog/artifactory) from environment variable
2021-07-14T22:43:35.000Z [shell] [INFO ] [] [artifactoryCommon.sh:871 ] [main] - Resolved ${shared.tomcat.workDir||/opt/jfrog/artifactory/var/work/artifactory/tomcat} to default value : /opt/jfrog/artifactory/var/work/artifactory/tomcat
JF_ARTIFACTORY_USER : artifactory
JF_SHARED_DATABASE_URL : jdbc:postgresql://jfrog-rds.dfdsf.ap-southeast-2.rds.amazonaws.com:5432/artifactory_ha
JF_SHARED_NODE_ID : jfrog-platform-artifactory-ha-primary-0
JF_SHARED_NODE_IP : 10.100.85.219
JF_ARTIFACTORY_PID : /opt/jfrog/artifactory/app/run/artifactory.pid
JF_SHARED_EXTRAJAVAOPTS : **
-Dartifactory.async.corePoolSize : 16
JF_SHARED_DATABASE_USERNAME : postgres
JF_PRODUCT_DATA_INTERNAL : /var/opt/jfrog/artifactory
JF_SYSTEM_YAML : /opt/jfrog/artifactory/var/etc/system.yaml
JF_PRODUCT_HOME : /opt/jfrog/artifactory
JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES : jfrt,jfac,jfmd,jffe,jfevt
JF_SHARED_NODE_HAENABLED : true
JF_SHARED_DATABASE_PASSWORD : **
JF_SHARED_NODE_NAME : jfrog-platform-artifactory-ha-primary-0
Using default router's certificate and private key
Starting router...
router not running. Proceed to start it up.
router started. PID: 3565
2021-07-14T22:43:35.503Z [jfrou] [INFO ] [769b1613c1b81a57] [bootstrap.go:76 ] [main ] - Router (jfrou) service initialization started. Version: 7.21.4-1 Revision: 7bb45d0a6a1ca7ecd9c323d91634db9d92018662 PID: 3565 Home: /opt/jfrog/artifactory
2021-07-14T22:43:35.503Z [jfrou] [INFO ] [769b1613c1b81a57] [bootstrap.go:79 ] [main ] - JFrog Router IP: 10.100.85.219
Starting metadata...
2021-07-14T22:43:35.506Z [jfrou] [INFO ] [769b1613c1b81a57] [bootstrap.go:192 ] [main ] - System configuration encryption report:
shared.database.password: does not exist in the config file
shared.newrelic.licenseKey: does not exist in the config file
shared.security.joinKeyFile: file '/opt/jfrog/artifactory/var/etc/security/join.key' - already encrypted
2021-07-14T22:43:35.506Z [jfrou] [INFO ] [769b1613c1b81a57] [bootstrap.go:84 ] [main ] - JFrog Router Service ID: jfrou@01fafqvp1zrqhckj4bvfr3g0wj
2021-07-14T22:43:35.506Z [jfrou] [INFO ] [769b1613c1b81a57] [bootstrap.go:85 ] [main ] - JFrog Router Node ID: jfrog-platform-artifactory-ha-primary-0
2021-07-14T22:43:35.519Z [jfrou] [INFO ] [769b1613c1b81a57] [http_client_holder.go:155 ] [main ] - System cert pool contents were loaded as trusted CAs for TLS communication
2021-07-14T22:43:35.519Z [jfrou] [INFO ] [769b1613c1b81a57] [http_client_holder.go:175 ] [main ] - Following certificates were successfully loaded as trusted CAs for TLS communication:
[/opt/jfrog/artifactory/var/data/router/keys/trusted/access-root-ca.crt]
JF_METADATA_ACCESSCLIENT_URL: http://localhost:8081/access
metadata started. PID: 3760
[DEBUG] Resolved system configuration file path: /opt/jfrog/artifactory/var/etc/system.yaml
[TRACE] Config key not set for aws secret (metadata.database.secretsManagerAlias)
2021-07-14T22:43:35.697Z [jfmd ] [INFO ] [711db245f76778f4] [postgres_jdbc_url_converter.go] [main ] - No ssl parameter found, falling back to sslmode=disable [database]
2021-07-14T22:43:35.697Z [jfmd ] [INFO ] [711db245f76778f4] [database_bearer.go:101 ] [main ] - Connecting to (db config: {postgresql jdbc:postgresql://jfrog-rds.ckpg8sk01oug.ap-southeast-2.rds.amazonaws.com:5432/artifactory_ha}) [database]
2021-07-14T22:43:35.707Z [jfmd ] [INFO ] [711db245f76778f4] [migrator.go:60 ] [main ] - Applying 47 migrations files [database]
2021-07-14T22:43:35.709Z [jfmd ] [INFO ] [711db245f76778f4] [application.go:68 ] [main ] - Metadata (jfmd) service initialization started. Version: 7.21.4 Revision: 7bb45d0a6a1 PID: 3760 Home: /opt/jfrog/artifactory/var [app_initializer]
2021-07-14T22:43:35.711Z [jfmd ] [INFO ] [711db245f76778f4] [server_bearer.go:169 ] [main ] - Got service_id from datastore: jfmd@01fakhyvag55n029kzsyn7gvh7 [ServerInit]
Starting event...
event not running. Proceed to start it up.
event started. PID: 3918
[INFO ] JFrog Event (jfevt) service initialization started. Version: 7.21.4 (revision: 7bb45d0a6a, build date: 2021-07-08T20:53:07Z) PID: 3918 Home: /opt/jfrog/artifactory
[DEBUG] Resolved system configuration file path: /opt/jfrog/artifactory/var/etc/system.yaml
Starting frontend...
frontend not running. Proceed to start it up.
frontend started. PID: 4079
2021-07-14T22:43:36.000Z [shell] [INFO ] [] [installerCommon.sh:1188 ] [main] - Redirection is set to false. Skipping catalina log redirection
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
2021-07-14T22:43:37.087L [tomct] [INFO ] [ ] [org.apache.coyote.http11.Http11NioProtocol] [org.apache.coyote.AbstractProtocol init] - Initializing ProtocolHandler ["http-nio-8081"]
2021-07-14T22:43:37.113L [tomct] [INFO ] [ ] [org.apache.tomcat.util.net.NioSelectorPool] [org.apache.tomcat.util.net.NioSelectorPool getSharedSelector] - Using a shared selector for servlet write/read
2021-07-14T22:43:37.147L [tomct] [INFO ] [ ] [org.apache.coyote.http11.Http11NioProtocol] [org.apache.coyote.AbstractProtocol init] - Initializing ProtocolHandler ["http-nio-127.0.0.1-8091"]
2021-07-14T22:43:37.148L [tomct] [INFO ] [ ] [org.apache.tomcat.util.net.NioSelectorPool] [org.apache.tomcat.util.net.NioSelectorPool getSharedSelector] - Using a shared selector for servlet write/read
2021-07-14T22:43:37.159L [tomct] [INFO ] [ ] [org.apache.coyote.http11.Http11NioProtocol] [org.apache.coyote.AbstractProtocol init] - Initializing ProtocolHandler ["http-nio-127.0.0.1-8040"]
2021-07-14T22:43:37.160L [tomct] [INFO ] [ ] [org.apache.tomcat.util.net.NioSelectorPool] [org.apache.tomcat.util.net.NioSelectorPool getSharedSelector] - Using a shared selector for servlet write/read
2021-07-14T22:43:37.169L [tomct] [INFO ] [ ] [org.apache.catalina.core.StandardService] [org.apache.catalina.core.StandardService startInternal] - Starting service [Catalina]
2021-07-14T22:43:37.170L [tomct] [INFO ] [ ] [org.apache.catalina.core.StandardEngine] [org.apache.catalina.core.StandardEngine startInternal] - Starting Servlet engine: [Apache Tomcat/8.5.66]
2021-07-14T22:43:37.190L [tomct] [INFO ] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - Deploying deployment descriptor [/opt/jfrog/artifactory/app/artifactory/tomcat/conf/Catalina/localhost/access.xml]
2021-07-14T22:43:37.190L [tomct] [INFO ] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - Deploying deployment descriptor [/opt/jfrog/artifactory/app/artifactory/tomcat/conf/Catalina/localhost/artifactory.xml]
2021-07-14T22:43:37.241L [tomct] [WARNING] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - A docBase [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/artifactory.war] inside the host appBase has been specified, and will be ignored
2021-07-14T22:43:37.242L [tomct] [WARNING] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - A docBase [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/access.war] inside the host appBase has been specified, and will be ignored
2021-07-14T22:43:37.386Z [jffe ] [INFO ] [ ] [ ] [main ] - frontend (jffe) service initialization started. Version: 1.21.3 Revision: 10000003 PID: 4275 Home: /opt/jfrog/artifactory
2021-07-14T22:43:37.389Z [jffe ] [INFO ] [ ] [ ] [main ] - attempting pinging artifactory for 180 retires and 1.0s interval for total of 3 minutes
2021-07-14T22:43:37.504Z [jfrou] [INFO ] [769b1613c1b81a57] [config_holder.go:107 ] [main ] - Configuration update detected
2021-07-14T22:43:39.716Z [jfmd ] [INFO ] [711db245f76778f4] [accessclient.go:57 ] [main ] - Cluster join: Retry 5: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [access_client]
2021-07-14T22:43:39.872Z [jfevt] [INFO ] [6791d047755e014d] [access_thin_client.go:103 ] [main ] - Cluster join: Retry 5: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [startup]
2021-07-14T22:43:40.024Z [jfrt ] [INFO ] [7f135706ceef9e54] [o.a.c.h.HaNodeProperties:63 ] [ocalhost-startStop-2] - Artifactory is running in clustered mode.
2021-07-14T22:43:40.042Z [jfrt ] [INFO ] [7f135706ceef9e54] [tifactoryHomeConfigListener:85] [ocalhost-startStop-2] - Resolved Home: '/opt/jfrog/artifactory'
2021-07-14T22:43:40.214Z [jfac ] [INFO ] [30c3ea429da159f9] [licationContextInitializer:162] [ocalhost-startStop-1] - Access (jfac) service initialization started. Version: 7.21.4 Revision: 72104900 PID: 4255 Home: /opt/jfrog/artifactory
2021-07-14T22:43:40.252Z [jfac ] [INFO ] [30c3ea429da159f9] [o.j.a.AccessApplication:55 ] [ocalhost-startStop-1] - Starting AccessApplication v7.21.4 using Java 11.0.10 on jfrog-platform-artifactory-ha-primary-0 with PID 4255 (/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/access/WEB-INF/lib/access-application-7.21.4.jar started by artifactory in /var/opt/jfrog/artifactory)
2021-07-14T22:43:40.252Z [jfac ] [INFO ] [30c3ea429da159f9] [o.j.a.AccessApplication:679 ] [ocalhost-startStop-1] - The following profiles are active: production
2021-07-14T22:43:40.730Z [jfrt ] [INFO ] [ ] [o.j.c.w.FileWatcher:147 ] [Thread-9 ] - Starting watch of folder configurations
2021-07-14T22:43:40.928Z [jfrt ] [INFO ] [7f135706ceef9e54] [.BasicConfigurationManager:186] [ocalhost-startStop-2] - Artifactory (jfrt) service initialization started. Version: 7.21.5 Revision: 72105900 PID: 4255 Home: /opt/jfrog/artifactory
2021-07-14T22:43:41.314Z [jfrt ] [INFO ] [7f135706ceef9e54] [d.c.m.ConverterManagerImpl:212] [ocalhost-startStop-2] - Triggering PRE_INIT conversion, from 7.21.3 to 7.21.5
2021-07-14T22:43:41.315Z [jfrt ] [INFO ] [7f135706ceef9e54] [d.c.m.ConverterManagerImpl:215] [ocalhost-startStop-2] - Finished PRE_INIT conversion, current version is: 7.21.5
2021-07-14T22:43:41.315Z [jfrt ] [INFO ] [7f135706ceef9e54] [d.i.DbInitializationManager:49] [ocalhost-startStop-2] - Initializing DB Schema initialization manager
2021-07-14T22:43:41.316Z [jfrt ] [INFO ] [7f135706ceef9e54] [.i.DbInitializationManager:177] [ocalhost-startStop-2] - Database: PostgreSQL 12.5. Driver: PostgreSQL JDBC Driver 42.2.19 Pool: postgresql
2021-07-14T22:43:41.857Z [jfrt ] [INFO ] [7f135706ceef9e54] [d.i.DbInitializationManager:53] [ocalhost-startStop-2] - DB Schema initialization manager initialized
2021-07-14T22:43:41.876Z [jfrt ] [INFO ] [7f135706ceef9e54] [SchemaInitializationManager:48] [ocalhost-startStop-2] - Initializing Post-DB initialization manager
2021-07-14T22:43:41.895Z [jfrt ] [INFO ] [7f135706ceef9e54] [o.a.l.s.d.i.HaInitLock:62 ] [ocalhost-startStop-2] - Found an init lock by current node: jfrog-platform-artifactory-ha-primary-0. Removing the lock.
2021-07-14T22:43:41.903Z [jfrt ] [INFO ] [7f135706ceef9e54] [o.a.l.s.d.i.HaInitLock:128 ] [ocalhost-startStop-2] - Attempting to acquire a high-availability init lock on key artifactory-init, with node-id jfrog-platform-artifactory-ha-primary-0
2021-07-14T22:43:41.906Z [jfrt ] [INFO ] [7f135706ceef9e54] [o.a.l.s.d.i.HaInitLock:135 ] [ocalhost-startStop-2] - Acquired high-availability init lock on key artifactory-init
2021-07-14T22:43:41.921Z [jfrt ] [WARN ] [7f135706ceef9e54] [o.a.a.ConverterBlockerImpl:69 ] [ocalhost-startStop-2] - No valid installed license found. Blocking conversion
2021-07-14T22:43:41.923Z [jfrt ] [ERROR] [7f135706ceef9e54] [d.c.m.ConverterManagerImpl:277] [ocalhost-startStop-2] - Conversion failed. You should analyze the error and retry launching Artifactory. Error is: Converter can't run since no matching license found, please add new license
2021-07-14T22:43:41.927Z [jfrt ] [ERROR] [7f135706ceef9e54] [tifactoryHomeConfigListener:55] [ocalhost-startStop-2] - Failed initializing Home. Caught exception:
java.lang.IllegalStateException: Converter can't run since no matching license found, please add new license
at org.artifactory.storage.db.converter.markers.ConverterManagerImpl.handleException(ConverterManagerImpl.java:280)
at org.artifactory.storage.db.converter.markers.ConverterManagerImpl.serviceConvert(ConverterManagerImpl.java:238)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
at org.artifactory.storage.db.converter.markers.ConverterManagerImpl.convertDatabase(ConverterManagerImpl.java:156)
at org.artifactory.lifecycle.storage.db.init.PostDbSchemaInitializationManager.convertAndInit(PostDbSchemaInitializationManager.java:61)
at org.artifactory.lifecycle.storage.db.init.HaInitLock.runInsideLock(HaInitLock.java:144)
at org.artifactory.lifecycle.storage.db.init.HaInitLock.runInsideInitLockIfNeeded(HaInitLock.java:108)
at org.artifactory.lifecycle.storage.db.init.PostDbSchemaInitializationManager.init(PostDbSchemaInitializationManager.java:50)
at org.artifactory.lifecycle.webapp.servlet.BasicConfigurationManager.initArtifactoryInstallation(BasicConfigurationManager.java:154)
at org.artifactory.lifecycle.webapp.servlet.BasicConfigurationManager.initialize(BasicConfigurationManager.java:126)
at org.artifactory.lifecycle.webapp.servlet.ArtifactoryHomeConfigListener.initBasicConfigManager(ArtifactoryHomeConfigListener.java:61)
at org.artifactory.lifecycle.webapp.servlet.ArtifactoryHomeConfigListener.contextInitialized(ArtifactoryHomeConfigListener.java:53)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4705)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5168)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:691)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:672)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1873)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalStateException: Converter can't run since no matching license found, please add new license
at org.artifactory.storage.db.converter.markers.ConverterManagerImpl.shouldBlockConvert(ConverterManagerImpl.java:179)
at org.artifactory.storage.db.converter.markers.ConverterManagerImpl.serviceConvert(ConverterManagerImpl.java:228)
... 24 common frames omitted
2021-07-14T22:43:41.930Z [jfrt ] [ERROR] [7f135706ceef9e54] [actoryContextConfigListener:92] [ocalhost-startStop-2] - Failed initializing Artifactory context: Artifactory home not initialized.
2021-07-14T22:43:41.931L [tomct] [SEVERE] [ ] [org.apache.catalina.core.StandardContext] [org.apache.catalina.core.StandardContext startInternal] - One or more listeners failed to start. Full details will be found in the appropriate container log file
2021-07-14T22:43:41.932L [tomct] [SEVERE] [ ] [org.apache.catalina.core.StandardContext] [org.apache.catalina.core.StandardContext startInternal] - Context [/artifactory] startup failed due to previous errors
2021-07-14T22:43:41.949L [tomct] [INFO ] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - Deployment of deployment descriptor [/opt/jfrog/artifactory/app/artifactory/tomcat/conf/Catalina/localhost/artifactory.xml] has finished in [4,758] ms
2021-07-14T22:43:43.907Z [jfac ] [INFO ] [30c3ea429da159f9] [s.d.u.AccessJdbcHelperImpl:140] [ocalhost-startStop-1] - Database: PostgreSQL 12.5. Driver: PostgreSQL JDBC Driver 42.2.19
2021-07-14T22:43:44.720Z [jfmd ] [INFO ] [711db245f76778f4] [accessclient.go:57 ] [main ] - Cluster join: Retry 10: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [access_client]
2021-07-14T22:43:44.877Z [jfevt] [INFO ] [6791d047755e014d] [access_thin_client.go:103 ] [main ] - Cluster join: Retry 10: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [startup]
2021-07-14T22:43:45.010Z [jfac ] [INFO ] [30c3ea429da159f9] [alConfigurationServiceBase:182] [ocalhost-startStop-1] - Loading configuration from db finished successfully
2021-07-14T22:43:45.152Z [jfac ] [INFO ] [30c3ea429da159f9] [alConfigurationServiceBase:114] [ocalhost-startStop-1] - Current configurations are the same as the new configurations, no need for an update. No action was taken.
2021-07-14T22:43:45.557Z [jfac ] [INFO ] [30c3ea429da159f9] [.h.AccessNodeIdProviderImpl:73] [ocalhost-startStop-1] - Service id initialized: jfac@01fakhypawqdmn1p9yz7em19nn
2021-07-14T22:43:45.993Z [jfac ] [INFO ] [30c3ea429da159f9] [j.a.s.s.t.TokenServiceImpl:127] [ocalhost-startStop-1] - Scheduling task for revoking expired tokens using cron expression: 0 0 0/1 ?
2021-07-14T22:43:46.414Z [jfac ] [INFO ] [30c3ea429da159f9] [b.AccessServerBootstrapImpl:43] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] Starting JFrog Access bootstrap...
2021-07-14T22:43:46.415Z [jfac ] [INFO ] [30c3ea429da159f9] [CertificateFileHandlerBase:115] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] Initializing root certificate.
2021-07-14T22:43:46.485Z [jffe ] [INFO ] [ ] [ ] [main ] - pinging artifactory, attempt number 10
2021-07-14T22:43:46.488Z [jffe ] [INFO ] [ ] [ ] [main ] - pinging artifactory attempt number 10 failed with code : ECONNREFUSED
2021-07-14T22:43:46.850Z [jfac ] [INFO ] [30c3ea429da159f9] [CertificateFileHandlerBase:328] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] Saved new root certificate at: /opt/jfrog/artifactory/var/etc/access/keys/root.crt
2021-07-14T22:43:46.853Z [jfac ] [INFO ] [30c3ea429da159f9] [CertificateFileHandlerBase:132] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] Finished initializing root certificate. Certificate source: DATABASE
2021-07-14T22:43:46.854Z [jfac ] [INFO ] [30c3ea429da159f9] [CertificateFileHandlerBase:115] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] Initializing ca certificate.
2021-07-14T22:43:46.943Z [jfac ] [INFO ] [30c3ea429da159f9] [CertificateFileHandlerBase:328] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] Saved new ca certificate at: /opt/jfrog/artifactory/var/etc/access/keys/ca.crt
2021-07-14T22:43:46.944Z [jfac ] [INFO ] [30c3ea429da159f9] [CertificateFileHandlerBase:132] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] Finished initializing ca certificate. Certificate source: DATABASE
2021-07-14T22:43:47.055Z [jfac ] [INFO ] [30c3ea429da159f9] [.s.b.AccessProjectBootstrap:76] [ocalhost-startStop-1] - Finished initializing Projects Entities in 22.28 millis
2021-07-14T22:43:47.059Z [jfac ] [INFO ] [30c3ea429da159f9] [.s.b.AccessProjectBootstrap:69] [ocalhost-startStop-1] - Finished initializing Projects Shared Entities in 2.95 millis
2021-07-14T22:43:47.098Z [jfac ] [INFO ] [30c3ea429da159f9] [.s.b.AccessProjectBootstrap:89] [ocalhost-startStop-1] - Finished initializing Projects permissions in 38.45 millis
2021-07-14T22:43:47.098Z [jfac ] [INFO ] [30c3ea429da159f9] [.s.b.AccessProjectBootstrap:61] [ocalhost-startStop-1] - Finished initializing Projects in 69.22 millis
2021-07-14T22:43:47.099Z [jfac ] [INFO ] [30c3ea429da159f9] [.a.s.s.m.MetricsServiceImpl:47] [ocalhost-startStop-1] - Metrics Framework Service is disabled
2021-07-14T22:43:47.099Z [jfac ] [INFO ] [30c3ea429da159f9] [b.AccessServerBootstrapImpl:52] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] JFrog Access bootstrap finished.
2021-07-14T22:43:47.785Z [jfac ] [INFO ] [30c3ea429da159f9] [o.j.a.s.r.s.GrpcServerImpl:65 ] [ocalhost-startStop-1] - Starting gRPC Server on port 8045
2021-07-14T22:43:48.044Z [jfac ] [INFO ] [30c3ea429da159f9] [o.j.a.s.r.s.GrpcServerImpl:84 ] [ocalhost-startStop-1] - gRPC Server started, listening on 8045
2021-07-14T22:43:48.054Z [jfac ] [INFO ] [30c3ea429da159f9] [o.j.a.s.s.JoinKeyAccess:172 ] [ocalhost-startStop-1] - Cluster join: Join key loaded successfully (from: /opt/jfrog/artifactory/var/etc/security/join.key)
2021-07-14T22:43:48.109Z [jfac ] [INFO ] [30c3ea429da159f9] [a.s.b.AccessServerRegistrar:66] [ocalhost-startStop-1] - [ACCESS BOOTSTRAP] Starting JFrog Access registrar...
2021-07-14T22:43:48.153Z [jfac ] [INFO ] [252d959ac5c4fdb1] [a.c.RefreshableScheduledJob:53] [ocalhost-startStop-1] - Scheduling loadCertificates task to run every 30 seconds
2021-07-14T22:43:48.165Z [jfac ] [INFO ] [bc56b00dfd3525f3] [a.c.RefreshableScheduledJob:53] [ocalhost-startStop-1] - Scheduling heartbeat task to run every 5 seconds
2021-07-14T22:43:48.167Z [jfac ] [INFO ] [bc56b00dfd3525f3] [a.c.RefreshableScheduledJob:53] [ocalhost-startStop-1] - Scheduling federationCleanupService task to run every 1209600 seconds
2021-07-14T22:43:48.182Z [jfac ] [INFO ] [bc56b00dfd3525f3] [a.c.RefreshableScheduledJob:53] [ocalhost-startStop-1] - Scheduling staleTokenCleanup task to run every 3600 seconds
2021-07-14T22:43:49.718Z [jfac ] [INFO ] [bc56b00dfd3525f3] [o.j.a.AccessApplication:61 ] [ocalhost-startStop-1] - Started AccessApplication in 10.7 seconds (JVM running for 13.485)
2021-07-14T22:43:49.727Z [jfmd ] [INFO ] [711db245f76778f4] [accessclient.go:57 ] [main ] - Cluster join: Retry 15: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [access_client]
2021-07-14T22:43:49.756L [tomct] [INFO ] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDescriptor] - Deployment of deployment descriptor [/opt/jfrog/artifactory/app/artifactory/tomcat/conf/Catalina/localhost/access.xml] has finished in [12,566] ms
2021-07-14T22:43:49.757L [tomct] [INFO ] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDirectory] - Deploying web application directory [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/ROOT]
2021-07-14T22:43:49.771L [tomct] [INFO ] [ ] [org.apache.catalina.startup.HostConfig] [org.apache.catalina.startup.HostConfig deployDirectory] - Deployment of web application directory [/opt/jfrog/artifactory/app/artifactory/tomcat/webapps/ROOT] has finished in [14] ms
2021-07-14T22:43:49.774L [tomct] [INFO ] [ ] [org.apache.coyote.http11.Http11NioProtocol] [org.apache.coyote.AbstractProtocol start] - Starting ProtocolHandler ["http-nio-8081"]
2021-07-14T22:43:49.782L [tomct] [INFO ] [ ] [org.apache.coyote.http11.Http11NioProtocol] [org.apache.coyote.AbstractProtocol start] - Starting ProtocolHandler ["http-nio-127.0.0.1-8091"]
2021-07-14T22:43:49.784L [tomct] [INFO ] [ ] [org.apache.coyote.http11.Http11NioProtocol] [org.apache.coyote.AbstractProtocol start] - Starting ProtocolHandler ["http-nio-127.0.0.1-8040"]
2021-07-14T22:43:49.882Z [jfevt] [INFO ] [6791d047755e014d] [access_thin_client.go:103 ] [main ] - Cluster join: Retry 15: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [startup]
2021-07-14T22:43:50.848Z [jfrou] [INFO ] [769b1613c1b81a57] [join_executor.go:116 ] [main ] - Cluster join: Trying to rejoin the cluster
2021-07-14T22:43:51.032Z [jfac ] [WARN ] [1d8420efd3f48d7c] [.j.a.s.s.r.JoinServiceImpl:137] [27.0.0.1-8040-exec-1] - Requested Router Join for nodeId: jfrog-platform-artifactory-ha-primary-0 at: 10.100.85.219 conflicts (IP) with existing node: ServiceNodeImpl(serviceId=jfmd@01fakhyvag55n029kzsyn7gvh7, nodeId=jfrog-platform-artifactory-ha-primary-0, routerId=jfrog-platform-artifactory-ha-primary-0, state=UNHEALTHY_PEER, created=1626302604499, lastUpdated=1626302604499, version=null, revision=null, ip=10.100.85.219, endpoints=[EndpointImpl(port=8082, secure=false, paths=[/metadata/(.)])])
2021-07-14T22:43:51.077Z [jfrou] [WARN ] [769b1613c1b81a57] [join_executor.go:192 ] [main ] - Cluster join: Error during cluster join: Error response from service registry, status code: 409; message: Requested Router Join for nodeId: jfrog-platform-artifactory-ha-primary-0 at: 10.100.85.219 conflicts (node id) with existing node: ServiceNodeImpl(serviceId=jfmd@01fakhyvag55n029kzsyn7gvh7, nodeId=jfrog-platform-artifactory-ha-primary-0, routerId=jfrog-platform-artifactory-ha-primary-0, state=UNHEALTHY_PEER, created=1626302604499, lastUpdated=1626302604499, version=null, revision=null, ip=10.100.85.219, endpoints=[EndpointImpl(port=8082, secure=false, paths=[/metadata/(.)])]); Retrying...
2021-07-14T22:43:54.097Z [jfac ] [WARN ] [6bb0e92ac1064524] [.j.a.s.s.r.JoinServiceImpl:137] [27.0.0.1-8040-exec-2] - Requested Router Join for nodeId: jfrog-platform-artifactory-ha-primary-0 at: 10.100.85.219 conflicts (IP) with existing node: ServiceNodeImpl(serviceId=jfmd@01fakhyvag55n029kzsyn7gvh7, nodeId=jfrog-platform-artifactory-ha-primary-0, routerId=jfrog-platform-artifactory-ha-primary-0, state=UNHEALTHY_PEER, created=1626302604499, lastUpdated=1626302604499, version=null, revision=null, ip=10.100.85.219, endpoints=[EndpointImpl(port=8082, secure=false, paths=[/metadata/(.)])])
2021-07-14T22:43:54.100Z [jfrou] [WARN ] [769b1613c1b81a57] [join_executor.go:192 ] [main ] - Cluster join: Error during cluster join: Error response from service registry, status code: 409; message: Requested Router Join for nodeId: jfrog-platform-artifactory-ha-primary-0 at: 10.100.85.219 conflicts (node id) with existing node: ServiceNodeImpl(serviceId=jfmd@01fakhyvag55n029kzsyn7gvh7, nodeId=jfrog-platform-artifactory-ha-primary-0, routerId=jfrog-platform-artifactory-ha-primary-0, state=UNHEALTHY_PEER, created=1626302604499, lastUpdated=1626302604499, version=null, revision=null, ip=10.100.85.219, endpoints=[EndpointImpl(port=8082, secure=false, paths=[/metadata/(.)])]); Retrying...
2021-07-14T22:43:54.731Z [jfmd ] [INFO ] [711db245f76778f4] [accessclient.go:57 ] [main ] - Cluster join: Retry 20: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [access_client]
2021-07-14T22:43:54.885Z [jfevt] [INFO ] [6791d047755e014d] [access_thin_client.go:103 ] [main ] - Cluster join: Retry 20: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [startup]
2021-07-14T22:43:56.515Z [jffe ] [INFO ] [ ] [ ] [main ] - pinging artifactory, attempt number 20
2021-07-14T22:43:56.517Z [jffe ] [INFO ] [ ] [ ] [main ] - pinging artifactory attempt number 20 failed with code : ECONNREFUSED
2021-07-14T22:43:57.192Z [jfac ] [WARN ] [30c3ea429da159f9] [o.j.c.ExecutionUtils:165 ] [pool-10-thread-2 ] - Retry 20 Elapsed 9.08 secs failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
2021-07-14T22:43:57.278Z [jfac ] [INFO ] [3c6f463a1f02e814] [s.r.NodeRegistryServiceImpl:68] [27.0.0.1-8040-exec-3] - Cluster join: Successfully joined jfrou@01fafqvp1zrqhckj4bvfr3g0wj with node id jfrog-platform-artifactory-ha-primary-0
2021-07-14T22:43:57.606Z [jfrou] [INFO ] [769b1613c1b81a57] [join_executor.go:205 ] [main ] - Cluster join: Successfully joined the cluster
2021-07-14T22:43:57.608Z [jfrou] [INFO ] [769b1613c1b81a57] [http_client_holder.go:155 ] [main ] - System cert pool contents were loaded as trusted CAs for TLS communication
2021-07-14T22:43:57.608Z [jfrou] [INFO ] [769b1613c1b81a57] [http_client_holder.go:175 ] [main ] - Following certificates were successfully loaded as trusted CAs for TLS communication:
[/opt/jfrog/artifactory/var/data/router/keys/trusted/access-root-ca.crt]
2021-07-14T22:43:57.621Z [jfrou] [INFO ] [769b1613c1b81a57] [registry_handler.go:89 ] [main ] - The following services were restored automatically based on persisted data: jfevt@01fakhypawqdmn1p9yz7em19nn, jffe@000, jfmd@01fakhyvag55n029kzsyn7gvh7
2021-07-14T22:43:57.622Z [jfrou] [INFO ] [ ] [server.go:577 ] [main ] - Preparing server api &{Address:localhost:8049 TLS:
Hi @rahulsadanandan @chukka, just wondering if there is any update on this issue?
@frankfong0208 from the above logs it looks like there is a license issue:
No valid installed license found. Blocking conversion...
Failed initializing Home. Caught exception:
java.lang.IllegalStateException: Converter can't run since no matching license found, please add new license
Can you add a valid license and try again with a fresh installation?
Hi @rahulsadanandan, yes, it turned out to be the licensing issue. It now works with the custom master key. Thanks for your assistance.
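For anyone who lands here with the same `Converter can't run since no matching license found` error: Artifactory HA requires an Enterprise license for each node. As a rough sketch of wiring a license secret into the chart (the value paths below are assumptions based on typical artifactory-ha chart versions, so verify them against the chart's README before using):

```yaml
# Assumed values fragment -- check the chart README for the exact paths.
# The secret itself can be created with, for example:
#   kubectl create secret generic artifactory-license --from-file=artifactory.lic -n jfrog-platform
artifactory-ha:
  artifactory:
    license:
      secret: artifactory-license
      dataKey: artifactory.lic
```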
Is this a request for help?: yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes: Helm v3.5.3; Kubernetes client v1.19.4, server v1.18.9-eks-d1db3c
Which chart: https://artifacthub.io/packages/helm/jfrog/jfrog-platform/0.7.0 (I tried 0.6.0 first and then 0.7.0, with the same result)
What happened: As described above, the jfrog-platform-artifactory-ha-primary-0 pod would not start the main artifactory-ha container because its startup probe kept failing. I googled the errors, and someone suggested this could be due to a corrupted master key, which I don't quite follow since the key was generated and configured per the official documentation.
What you expected to happen: The jfrog-platform-artifactory-ha-primary-0 pod should start healthy and the jfrog-platform-artifactory-ha-member-0 and jfrog-platform-artifactory-ha-member-1 pods should start as well afterwards rather than get stuck waiting for the primary pod.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know: I also tried setting artifactory-ha.global.masterKeySecretName=artifactory-master-key, and even skipping the secret entirely and hard-coding the master key value under artifactory-ha.global.masterKey= or artifactory-ha.artifactory.masterKey= in the customvalues.yaml file, to no avail.
I also tried the global.masterKey and global.masterKeySecretName paths, with the same result.
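For reference, a minimal sketch of the secret-based variant of this configuration in customvalues.yaml (the nesting is an assumption based on the chart's documented conventions; verify it against the chart's own values.yaml):

```yaml
# Assumed layout -- verify against the jfrog-platform chart's values.yaml.
artifactory-ha:
  global:
    # The referenced Secret must store its value under the data key "master-key",
    # as the artifactory-master-key-secret.yaml template above does.
    masterKeySecretName: artifactory-master-key
```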
The moment I commented out these values, the chart would use the default dummy master key which is bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb and everything worked just fine.
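As a side note, the master key and join key are expected to be 64-character hex strings (32 random bytes); a sketch of generating one, assuming openssl is available:

```shell
# Generate a 32-byte random key, printed as 64 hex characters --
# the format Artifactory expects for both masterKey and joinKey.
openssl rand -hex 32
```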
Every time I redid the chart installation, I deleted the artifactory_ha database from inside the RDS Postgres instance, and even tried recreating the RDS instance itself to ensure it was not contaminated. However, the result was still the same.
Part of customvalues.yaml file: