Closed: EsDmitrii closed this issue 3 months ago
What kind of storageclass are you using for the jobservice PVC?
@MinerYang
I use 1Gi of LINSTOR storage; we use our own disk pool for the PVC
THX @EsDmitrii. Could you also describe the harbor-core, harbor-jobservice, harbor-registry, and nginx pods, and share the related logs if any?
Hi @MinerYang! I deployed Harbor in HA mode: three replicas of harbor-core, harbor-jobservice, harbor-registry, and harbor-portal. There is no harbor-nginx pod because I expose it via Ingress. There are no related logs before all the components restart; if I had some logs it would be clearer what is going on. Here are the described services: Core deployment, Jobservice deployment, Portal deployment, Registry deployment
Does your ArgoCD use helm template or helm upgrade/install?
@Kajot-dev hi. Yes, I use a custom templating plugin for ArgoCD to template multi-cluster application manifests, to fetch secrets from Vault and insert them into the manifests, etc. In general, in this case I use templating to prepare the Harbor application for deployment in a specific cluster, to use a specific registry to pull images from depending on the environment (prod, dev, etc.), and to fetch some secrets for Helm from Vault.
I checked the Argo logs and events earlier, and there are no anomalies like “app unhealthy”, “app not synced”, etc. So there are no actions that could auto-sync the application and trigger recreating the pods.
@EsDmitrii I'm asking because there are certain secret values that depend on the lookup function. If it's not available and the values are not provided directly, it will generate new ones.
From your logs, it appears that something is changing the resources, so k8s is creating new replicas. I would say it's not strictly a Harbor issue, but rather a deployment one. Can you confirm that when you run the deployment twice in a row without changing the config, the pods do not restart?
@Kajot-dev sounds interesting. I’ll turn off auto sync to check this theory out. I’ll be back with feedback and answers to the other questions tomorrow, as I have days off.
@Kajot-dev hi. You're right! Somehow Harbor re-generates the internal TLS and patches all related secrets and deployments (a couple of screenshots attached).
It is a huge access vulnerability, because I can't see any ServiceAccount created for Harbor, nor any role or role binding associated with a Harbor SA that grants access to modify the resources I attached above. There are no related resources to manage access rights for Harbor in the Helm chart. How does Harbor manage k8s secrets without an SA and without access to k8s resources granted via roles and role bindings?
It's not managed by Harbor but by Helm, and in your case by ArgoCD and its ServiceAccount. Since you don't use helm install but a custom templating, not all Helm features are available and it cannot retrieve existing resources/secrets to reuse them. For instance, look at the line regarding the secret field in the core's secret:
{{- if not .Values.core.existingSecret }}
secret: {{ .Values.core.secret | default (include "harbor.secretKeyHelper" (dict "key" "secret" "data" $existingSecret.data)) | default (randAlphaNum 16) | b64enc | quote }}
{{- end }}
First, it'll use existingSecret, and this feature does not require the lookup function, so you can use it. Then the usual procedure begins: if set, use core.secret; if not, try to use the value in the existing secret (this does not work for you); and as a last resort, generate a new one.
As a solution, you should either use existingSecret or set the values directly:
- core.secret or core.existingSecret
- jobservice.secret or jobservice.existingSecret
- registry.credentials.username and registry.credentials.password
- registry.credentials.htpasswdString
For internal TLS you can generate the certs and configure the source as secret instead of auto.
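For illustration, here is a minimal sketch of such a values override, assuming the value names of harbor-helm 1.14.x; every value below is a placeholder (e.g. to be filled from Vault), not something taken from this thread:

core:
  secret: "CHANGE-ME-RANDOM"                    # or set core.existingSecret to a pre-created Secret
  xsrfKey: "CHANGEME-TO-A-32-CHAR-RANDOM-STR"   # the chart expects exactly 32 characters here
jobservice:
  secret: "CHANGE-ME-RANDOM"                    # or jobservice.existingSecret
registry:
  secret: "CHANGE-ME-RANDOM"
  credentials:
    username: "myuser"
    password: "mystrongpass"
internalTLS:
  enabled: true
  certSource: "secret"                          # pre-generated certs; also set the per-component secret names the chart expects

With all of these pinned, rendering the chart twice produces identical manifests, so ArgoCD sees no diff and does not roll the deployments.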
I understand that ArgoCD tries to sync Harbor with the actual state in Git, but how does Harbor modify secrets and deployments in k8s without permissions granted to it? Argo only triggers a sync with the Git state when the app is OutOfSync and there are diffs in the manifests.
It does not depend on my Argo templating plugins, because Argo operates with the rendered manifests, i.e. all secrets from Vault are retrieved and put in their places at app bootstrap.
Example:
I prepare an Argo Application manifest with links to Vault:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
finalizers:
- resources-finalizer.argocd.argoproj.io
labels:
argocd.devops.rcod: devops.dev-bootstrap-apps
name: devops.dev-harbor
namespace: argocd
spec:
destination:
namespace: harbor
server: 'https://kubernetes.default.svc'
project: devops.dev
source:
chart: harbor
plugin:
env:
- name: HELM_OVERWRITE
value: >
externalURL: https://harbor.my.awesome.domain
caBundleSecretName: "rtl-ca"
harborAdminPassword:
"<path:path/to/harbor#adminpass>"
expose:
type: ingress
tls:
enabled: true
certSource: secret
secret:
secretName: "harbor-ingress"
ingress:
hosts:
core: harbor.my.awesome.domain
controller: default
className: "nginx"
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: vault-issuer-int-cluster
cert-manager.io/common-name: harbor.my.awesome.domain
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
nginx.ingress.kubernetes.io/proxy-body-size: 4096m
nginx.ingress.kubernetes.io/proxy-buffer-size: 10m
nginx.ingress.kubernetes.io/proxy-buffering: 'on'
nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: 2048m
nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
persistence:
enabled: true
resourcePolicy: ""
persistentVolumeClaim:
jobservice:
jobLog:
storageClass: "piraeus-local"
accessMode: ReadWriteOnce
size: 1Gi
redis: {}
registry: {}
database: {}
trivy: {}
imageChartStorage:
disableredirect: true
type: s3
s3:
region: us-east-1
bucket: "<path:path/to/harbor#s3bucketname>"
accesskey: "<path:path/to/harbor#s3accesskey>"
secretkey: "<path:path/to/harbor#s3secretkey>"
regionendpoint: iminio.dev.k8s.cd.my.awesome.domain
encrypt: false
secure: true
skipverify: true
v4auth: true
chunksize: "5242880"
rootdirectory: /
multipartcopychunksize: "33554432"
multipartcopymaxconcurrency: 100
multipartcopythresholdsize: "33554432"
database:
type: external
external:
host: "<path:path/to/harbor#pg-harbor-dbhost>"
port: "<path:path/to/harbor#pg-harbor-dbport>"
username: "<path:path/to/harbor#pg-harbor-username>"
password: "<path:path/to/harbor#pg-harbor-password>"
coreDatabase: "<path:path/to/harbor#pg-harbor-dbname>"
sslmode: "disable"
maxIdleConns: 100
maxOpenConns: 900
redis:
type: external
external:
addr: "redisaddr:26379"
sentinelMasterSet: "mymaster"
coreDatabaseIndex: "0"
jobserviceDatabaseIndex: "1"
registryDatabaseIndex: "2"
trivyAdapterIndex: "5"
username: "default"
password: "mystrongpass"
nginx:
replicas: 1
image:
repository: goharbor/nginx-photon
tag: "v2.10.0"
portal:
replicas: 3
image:
repository: goharbor/harbor-portal
tag: "v2.10.0"
core:
replicas: 3
image:
repository: goharbor/harbor-core
tag: "v2.10.0"
extraEnvVars:
- name: SYNC_REGISTRY
value: "true"
jobservice:
replicas: 3
image:
repository: goharbor/harbor-jobservice
tag: "v2.10.0"
registry:
replicas: 3
registry:
image:
repository: goharbor/registry-photon
tag: "v2.10.0"
controller:
image:
repository: goharbor/harbor-registryctl
tag: "v2.10.0"
credentials:
username: "myuser"
password: "mystrongpass"
relativeurls: true
exporter:
replicas: 1
image:
repository: goharbor/harbor-exporter
tag: "v2.10.0"
metrics:
enabled: true
core:
path: /metrics
port: 8001
registry:
path: /metrics
port: 8001
jobservice:
path: /metrics
port: 8001
exporter:
path: /metrics
port: 8001
serviceMonitor:
enabled: true
cache:
enabled: true
expireHours: 24
internalTLS:
enabled: true
strong_ssl_ciphers: false
certSource: "auto"
trivy:
enabled: false
trace:
enabled: false
notary:
enabled: false
- name: HELM_SHARED
value: ''
- name: CHART_NAME
value: harbor
- name: NAMESPACE
value: harbor
name: argocd-vault-plugin-helm
repoURL: 'https://helm.goharbor.io'
targetRevision: 1.14.0
syncPolicy:
retry:
backoff:
duration: 5s
factor: 2
maxDuration: 3m
limit: 5
syncOptions:
- Validate=false
- CreateNamespace=true
- PrunePropagationPolicy=foreground
- PruneLast=true
The manifest above renders to normal Helm values, as if you deployed it manually without any Vault, etc.
So there is no magic or anything else that could break the deployment or its logic.
Harbor does not modify the secrets, but the template is configured to generate new secret values if the current ones are not available to it (it cannot access them in your case). This rendered template with randomly generated secret values is then passed to your ArgoCD, which applies it to the k8s cluster. At least that is my understanding. The new secret values are generated while rendering the template.
Well, there is something that changes the deployment: each time the manifests are rendered they are different, because new secret values are generated on every render.
@Kajot-dev so if I understand right I need to move to existingSecret or use manual helm deploy instead of Argo?
@EsDmitrii Just provide the secret values in the chart config so they are not generated each time, or use existingSecret (but the chart does not provide an existingSecret option for all these values, so you'll need to mix both approaches). Using helm install will also solve this.
Generally, after configuring it, try running your chart with helm template. If you get the exact same file every time, your config is OK.
@Kajot-dev thank you so much! Appreciate your support :) I’m modifying the configs now, will be back with results in 1-2 days
@Kajot-dev Hi!
So all the stuff works well, I defined my own secrets, certs, etc.
BUT
I started to face a problem with registry.credentials.htpasswdString.
When I create the htpasswd entry like this: htpasswd -b -c .htpasswd USERNAME PASSWORD, I see this in the logs:
time="2024-04-01T11:36:47.85058898Z" level=warning msg="error authorizing context: basic authentication challenge for realm "harbor-registry-basic-realm": invalid authorization credential" go.version=go1.21.4 http.request.host="infra-harbor-registry:5000" http.request.id=ce444d90-44cd-4f65-86ab-fa9169d044b2 http.request.method=GET http.request.remoteaddr="10.0.1.164:56006" http.request.uri="/v2/" http.request.useragent="Go-http-client/1.1"
time="2024-04-01T11:36:47.861681129Z" level=error msg="error authenticating user "USERNAME": authentication failure" go.version=go1.21.4 http.request.host="infra-harbor-registry:5000" http.request.id=c41c5d03-acfc-4007-a453-d557e0c75cee http.request.method=GET http.request.remoteaddr="10.0.1.164:45906" http.request.uri="/v2/MY-IMAGE-I-WANT-TO-PULL/manifests/sha256:c51afd0a9f5c3776c58220af25b68d018de66db8c4d97d1f329542df9c66c640" http.request.useragent=harbor-registry-client vars.name="MY-IMAGE-I-WANT-TO-PULL" vars.reference="sha256:c51afd0a9f5c3776c58220af25b68d018de66db8c4d97d1f329542df9c66c640"
time="2024-04-01T11:36:47.861770391Z" level=warning msg="error authorizing context: basic authentication challenge for realm "harbor-registry-basic-realm": authentication failure" go.version=go1.21.4 http.request.host="infra-harbor-registry:5000" http.request.id=c41c5d03-acfc-4007-a453-d557e0c75cee http.request.method=GET http.request.remoteaddr="10.0.1.164:45906" http.request.uri="/v2/MY-IMAGE-I-WANT-TO-PULL/manifests/sha256:c51afd0a9f5c3776c58220af25b68d018de66db8c4d97d1f329542df9c66c640" http.request.useragent=harbor-registry-client vars.name="MY-IMAGE-I-WANT-TO-PULL" vars.reference="sha256:c51afd0a9f5c3776c58220af25b68d018de66db8c4d97d1f329542df9c66c640"
When I move back to registry.credentials.username and registry.credentials.password, all becomes well.
Do you have any idea? As far as I can see, helm does the same:
As a workaround I took the htpasswd string generated by helm and added it to Vault. I think it works well now; I need another day to check :)
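For reference, and as an assumption on my side rather than something confirmed above: htpasswd -b -c produces MD5 (apr1) hashes by default, while the registry's htpasswd authentication only accepts bcrypt entries, which is also what Helm's htpasswd template function generates. That would explain why the Helm-generated string works while the hand-made one is rejected. A hand-crafted value would need to look roughly like this sketch (all values are placeholders):

# Generate a bcrypt entry yourself, e.g. with: htpasswd -nbB myuser mystrongpass
# (the -B flag forces bcrypt; the default MD5/apr1 format is rejected by the registry)
registry:
  credentials:
    username: "myuser"
    password: "mystrongpass"
    # must match the username/password above and use a bcrypt hash ($2y$...)
    htpasswdString: "myuser:$2y$10$REPLACE-WITH-A-REAL-BCRYPT-HASH"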
Hi! All works well, thank you for the assistance!
Hi all, I noticed that my Harbor restarts every 24 hours. I deployed it in HA mode via ArgoCD (helm); the chart version is 1.14.0 (latest). It uses an external HA Postgres and an external HA Redis with Sentinel, and S3 as the backend to store data. k8s events:
What can I check or try to patch? Appreciate any help!