
Harbor: harbor-trivy-0 not ready #3631

Closed osaffer closed 4 years ago

osaffer commented 4 years ago

Which chart: stable/harbor 7.1.1 (app version 2.0.2)

Describe the bug

trivy cannot connect to harbor-redis-master, connection refused

ERROR: worker.fetch - dial tcp 172.26.33.49:6379: connect: connection refused

To Reproduce

Steps to reproduce the behavior:

  1. helm install harbor stable/harbor --values harbor.values -n harbor
  2. In the Harbor values, I have disabled everything related to TLS.

Version of Helm and Kubernetes:

Redis logs:

kubectl logs -f harbor-redis-master-0 -n harbor -c redis
redis 10:47:57.77 INFO  ==> ** Starting Redis **
1:C 08 Sep 2020 10:47:57.783 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 08 Sep 2020 10:47:57.783 # Redis version=6.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 08 Sep 2020 10:47:57.783 # Configuration loaded
1:M 08 Sep 2020 10:47:57.790 * Running mode=standalone, port=6379.
1:M 08 Sep 2020 10:47:57.790 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 08 Sep 2020 10:47:57.790 # Server initialized
1:M 08 Sep 2020 10:47:57.790 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 08 Sep 2020 10:47:57.791 * Ready to accept connections

Core logs:

kubectl logs -f harbor-core-77449d8489-nkjjw -n harbor -c core
 10:48:37.87 
 10:48:37.87 Welcome to the Bitnami harbor-core container
 10:48:37.87 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-harbor-core
 10:48:37.87 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-harbor-core/issues
 10:48:37.87 
 10:48:37.87 INFO  ==> ** Starting Harbor Core setup **
 10:48:37.88 INFO  ==> Validating Core settings...
 10:48:37.89 INFO  ==> ** Harbor Core setup finished! **

2020-09-08T10:48:37Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.index.v1+json registered
2020-09-08T10:48:37Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.list.v2+json registered
2020-09-08T10:48:37Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.v1+prettyjws registered
2020-09-08T10:48:37Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.config.v1+json registered
2020-09-08T10:48:37Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.container.image.v1+json registered
2020-09-08T10:48:37Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cncf.helm.config.v1+json registered
2020-09-08T10:48:37Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cnab.manifest.v1 registered
2020-09-08T10:48:37Z [DEBUG] [/pkg/permission/evaluator/rbac/casbin_match.go:65]: Starting regexp store purge in 7m0s
2020-09-08T10:48:37Z [INFO] [/replication/adapter/native/adapter.go:36]: the factory for adapter docker-registry registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/harbor/adaper.go:31]: the factory for adapter harbor registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/dockerhub/adapter.go:25]: Factory for adapter docker-hub registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/huawei/huawei_adapter.go:27]: the factory of Huawei adapter was registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/googlegcr/adapter.go:29]: the factory for adapter google-gcr registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/awsecr/adapter.go:47]: the factory for adapter aws-ecr registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/azurecr/adapter.go:15]: Factory for adapter azure-acr registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/aliacr/adapter.go:31]: the factory for adapter ali-acr registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/jfrog/adapter.go:30]: the factory of jfrog artifactory adapter was registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/quayio/adapter.go:38]: the factory of Quay.io adapter was registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/helmhub/adapter.go:30]: the factory for adapter helm-hub registered
2020-09-08T10:48:37Z [INFO] [/replication/adapter/gitlab/adapter.go:17]: the factory for adapter gitlab registered
2020-09-08T10:48:37Z [DEBUG] [/core/auth/authenticator.go:126]: Registered authentication helper for auth mode: http_auth
2020-09-08T10:48:37Z [DEBUG] [/core/auth/authenticator.go:126]: Registered authentication helper for auth mode: db_auth
2020-09-08T10:48:37Z [DEBUG] [/core/auth/authenticator.go:126]: Registered authentication helper for auth mode: ldap_auth
2020-09-08T10:48:37Z [DEBUG] [/core/auth/authenticator.go:126]: Registered authentication helper for auth mode: oidc_auth
2020-09-08T10:48:37Z [DEBUG] [/core/auth/authenticator.go:126]: Registered authentication helper for auth mode: uaa_auth
2020-09-08T10:48:37Z [DEBUG] [/pkg/notifier/topic/topics.go:23]: topic http is subscribed
2020-09-08T10:48:37Z [DEBUG] [/pkg/notifier/topic/topics.go:23]: topic slack is subscribed
2020-09-08T10:48:37Z [INFO] [/core/controllers/base.go:299]: Config path: /etc/core/app.conf
2020-09-08T10:48:37Z [INFO] [/core/main.go:111]: initializing configurations...
2020-09-08T10:48:37Z [INFO] [/core/config/config.go:83]: key path: /etc/core/key
2020-09-08T10:48:37Z [INFO] [/core/config/config.go:60]: init secret store
2020-09-08T10:48:37Z [INFO] [/core/config/config.go:63]: init project manager
2020-09-08T10:48:37Z [INFO] [/core/config/config.go:95]: initializing the project manager based on local database...
2020-09-08T10:48:37Z [INFO] [/core/main.go:113]: configurations initialization completed
2020-09-08T10:48:37Z [INFO] [/common/dao/base.go:84]: Registering database: type-PostgreSQL host-harbor-postgresql port-5432 databse-registry sslmode-"disable"
2020-09-08T10:48:37Z [INFO] [/common/dao/base.go:89]: Register database completed
2020-09-08T10:48:37Z [DEBUG] [/migration/migration.go:41]: current database schema version: 0
2020-09-08T10:48:38Z [INFO] [/common/dao/pgsql.go:127]: Upgrading schema for pgsql ...
2020-09-08T10:48:41Z [DEBUG] [/migration/migration.go:57]: current data version: 0
2020-09-08T10:48:41Z [INFO] [/core/main.go:78]: User id: 1 updated its encrypted password successfully.
2020-09-08T10:48:41Z [INFO] [/chartserver/cache.go:184]: Enable redis cache for chart caching
2020-09-08T10:48:41Z [INFO] [/chartserver/reverse_proxy.go:60]: New chart server traffic proxy with middlewares
2020-09-08T10:48:41Z [DEBUG] [/core/api/chart_repository.go:612]: Chart storage server is set to http://harbor-chartmuseum
2020-09-08T10:48:41Z [INFO] [/core/api/chart_repository.go:613]: API controller for chart repository server is successfully initialized
2020-09-08T10:48:41Z [INFO] [/core/main.go:189]: Registering Trivy scanner
2020-09-08T10:48:41Z [INFO] [/common/dao/base.go:64]: initialized clair database
2020-09-08T10:48:41Z [INFO] [/core/main.go:211]: Registering Clair scanner
2020-09-08T10:48:41Z [INFO] [/pkg/scan/init.go:54]: Successfully registered Trivy scanner at http://harbor-trivy:8080
2020-09-08T10:48:41Z [INFO] [/pkg/scan/init.go:54]: Successfully registered Clair scanner at http://harbor-clair:8080
2020-09-08T10:48:41Z [INFO] [/core/main.go:229]: Setting Trivy as default scanner
2020-09-08T10:48:41Z [DEBUG] [/replication/replication.go:92]: the replication initialization completed
2020-09-08T10:48:41Z [INFO] [/core/main.go:156]: initializing notification...
2020-09-08T10:48:41Z [INFO] [/pkg/notification/notification.go:47]: notification initialization completed
2020-09-08T10:48:41Z [INFO] [/core/main.go:175]: Version: , Git commit: 
2020/09/08 10:48:41.278 [I] [asm_amd64.s:1357]  http server Running on http://:8080
2020-09-08T10:49:02Z [DEBUG] [/server/middleware/log/log.go:30]: attach request id 2b4af8d6-968b-46ab-a995-0131a961ef54 to the logger for the request GET /api/v2.0/ping
2020-09-08T10:49:02Z [DEBUG] [/server/middleware/security/unauthorized.go:29][requestID="2b4af8d6-968b-46ab-a995-0131a961ef54"]: an unauthorized security context generated for request GET /api/v2.0/ping
2020/09/08 10:49:02.009 [D] [transaction.go:62]  |      127.0.0.1| 200 |   1.517391ms|   match| GET      /api/v2.0/ping   r:/api/v2.0/ping
2020-09-08T10:49:02Z [DEBUG] [/server/middleware/log/log.go:30]: attach request id 11ce1675-401e-45c9-aa22-4e00463ddbc7 to the logger for the request GET /api/v2.0/ping
2020-09-08T10:49:02Z [DEBUG] [/server/middleware/security/unauthorized.go:29][requestID="11ce1675-401e-45c9-aa22-4e00463ddbc7"]: an unauthorized security context generated for request GET /api/v2.0/ping
2020/09/08 10:49:02.124 [D] [transaction.go:62]  |      127.0.0.1| 200 |   1.169587ms|   match| GET      /api/v2.0/ping   r:/api/v2.0/ping
carrodher commented 4 years ago

Hi, thanks for creating this issue. Some questions:

osaffer commented 4 years ago

Hi,

First of all, thanks for replying to me.

For the first point: it is not a typo.

helm repo list
NAME            URL                                             
stable          https://charts.bitnami.com/bitnami   

helm search repo harbor
NAME            CHART VERSION   APP VERSION DESCRIPTION                                       
harbor/harbor   1.4.2           2.0.2       An open source trusted cloud native registry th...
stable/harbor   7.1.1           2.0.2       Harbor is an an open source trusted cloud nativ...

Second point:

internalTLS:
  enabled: false
service:
  ## K8s service type
  ## Allowed values are "ClusterIP", "NodePort" or "LoadBalancer"
  ##
  type: ClusterIP
  tls:
    enabled: false

ingress:
  enabled: false

(I want to use Istio, for test purposes.) Anyway, even with enabled: true, it makes no difference.

externalURL: http://my-url.my.domain

Last point:

I have just tried with the default values.yaml obtained with helm inspect values.

Same issue, harbor-trivy cannot connect to redis

ERROR: worker.fetch - dial tcp 172.26.177.236:6379: connect: connection refused

While Redis seems to be ready to accept connections:

kubectl logs -f pod/harbor-redis-master-0 -n harbor -c redis
redis 13:15:28.22 INFO  ==> ** Starting Redis **
1:C 08 Sep 2020 13:15:28.318 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 08 Sep 2020 13:15:28.318 # Redis version=6.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 08 Sep 2020 13:15:28.318 # Configuration loaded
1:M 08 Sep 2020 13:15:28.364 * Running mode=standalone, port=6379.
1:M 08 Sep 2020 13:15:28.365 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 08 Sep 2020 13:15:28.365 # Server initialized
1:M 08 Sep 2020 13:15:28.365 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 08 Sep 2020 13:15:28.366 * Ready to accept connections
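
In case it helps, a quick in-cluster connectivity check could look something like this (just a sketch; the service name and Redis image are taken from the chart defaults shown in the logs above, and it should print PONG if the service is reachable):

# Run a throwaway pod with redis-cli and ping the Redis service
kubectl run redis-check --rm -it --restart=Never -n harbor \
  --image=docker.io/bitnami/redis:6.0.6 --command -- \
  redis-cli -h harbor-redis-master -p 6379 ping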

Thank you !

carrodher commented 4 years ago

OK, you added the Bitnami repo under the name stable, but in the end you're using the latest version of the Bitnami Harbor chart.
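
As a side note, to avoid that naming confusion, the Bitnami repository is usually added under its own name, for example:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm search repo bitnami/harbor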

Some tests on my side:

Case 1: Modifying the existing values.yaml

I just modified the existing values.yaml, setting the same parameters you are adding in a separate values file (the other ones are already set to the default value):

@@ bitnami/harbor/values.yaml:368 @@ service:
  ## K8s service type
  ## Allowed values are "ClusterIP", "NodePort" or "LoadBalancer"
  ##
-  type: LoadBalancer
+  type: ClusterIP
  ## TLS parameters
  ##
  tls:
     ## Enable TLS for external access
     ## Note: When type is "Ingress" and TLS is disabled, the port must be included
     ## in the command when pulling/pushing images.
     ## ref: https://github.com/goharbor/harbor/issues/5291
     ##
-    enabled: true
+    enabled: false
    ## Existing secret name containing your own TLS certificates.
    ## The secret contains keys named:
    ## "tls.crt" - the certificate (required)

@@ bitnami/harbor/values.yaml:456 @@ ingress:
## the IP address of k8s node. If Harbor is deployed behind the proxy,
## set it as the URL of proxy
##
-externalURL: https://core.harbor.domain
+externalURL: http://my-url.my.domain

## SecurityContext configuration
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
$ helm install harbor bitnami/harbor -f values.yaml
$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
harbor-chartmuseum-f66fd57-65bc7        1/1     Running   0          5m20s
harbor-clair-65d4769d5-s782v            2/2     Running   1          5m20s
harbor-core-85cc984d85-ghfs9            1/1     Running   0          5m20s
harbor-jobservice-55fb8f97dd-6bv7x      1/1     Running   0          5m20s
harbor-nginx-697cc74986-7ghgm           1/1     Running   0          5m20s
harbor-notary-server-74d77bbc44-fpk6j   1/1     Running   0          5m20s
harbor-notary-signer-8fbdbb74f-g89z8    1/1     Running   0          5m20s
harbor-portal-78c744d54b-qr9zp          1/1     Running   0          5m19s
harbor-postgresql-0                     1/1     Running   0          5m19s
harbor-redis-master-0                   1/1     Running   0          5m19s
harbor-registry-b7f687dfd-kqmzp         2/2     Running   0          5m19s
harbor-trivy-0                          1/1     Running   0          5m19s

$ kubectl logs -f harbor-trivy-0
harbor-adapter-trivy 15:53:17.79
harbor-adapter-trivy 15:53:17.79 Welcome to the Bitnami harbor-adapter-trivy container
harbor-adapter-trivy 15:53:17.79 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-harbor-adapter-trivy
harbor-adapter-trivy 15:53:17.80 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-harbor-adapter-trivy/issues
harbor-adapter-trivy 15:53:17.80
harbor-adapter-trivy 15:53:17.80 INFO  ==> ** Starting Harbor Adapter Trivy setup **
harbor-adapter-trivy 15:53:17.82 INFO  ==> ** Harbor Adapter Trivy setup finished! **
harbor-adapter-trivy 15:53:17.82 INFO  ==> ** Starting Harbor Adapter Trivy **
{"built_at":"unknown","commit":"none","level":"info","msg":"Starting harbor-scanner-trivy","time":"2020-09-08T15:53:17Z","version":"dev"}
{"level":"debug","msg":"Current process","pid":1,"time":"2020-09-08T15:53:17Z"}
{"gid":0,"home_dir":"/","level":"debug","msg":"Current user","time":"2020-09-08T15:53:17Z","uid":1001}
{"level":"debug","mode":"dgrwxrwxr-x","msg":"trivy cache dir permissions","time":"2020-09-08T15:53:17Z"}
{"level":"debug","mode":"dgrwxrwxr-x","msg":"trivy reports dir permissions","time":"2020-09-08T15:53:17Z"}
{"addr":":8080","level":"warning","msg":"Starting API server without TLS","time":"2020-09-08T15:53:17Z"}

Everything seems up and running.


Case 2: Creating a custom values.yaml

Then I created the same values.yaml:

internalTLS:
  enabled: false
service:
  type: ClusterIP
  tls:
    enabled: false

ingress:
  enabled: false

externalURL: http://my-url.my.domain

and when installing the chart, the error appears:

$ helm install harbor2 bitnami/harbor -f my-values.yaml

$ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
harbor2-chartmuseum-646d95fb9f-c9r9w     1/1     Running   0          2m2s
harbor2-clair-59d46ffc69-mb8zw           2/2     Running   1          2m2s
harbor2-core-6f7c496c47-k6pf8            1/1     Running   0          2m2s
harbor2-jobservice-7d885b699-rhgb8       1/1     Running   0          2m2s
harbor2-nginx-57896df8fb-f4b6b           1/1     Running   0          2m2s
harbor2-notary-server-5b86fcd465-px54l   1/1     Running   0          2m2s
harbor2-notary-signer-574c54599d-klw6h   1/1     Running   0          2m2s
harbor2-portal-b5d87cdb-qmqvq            1/1     Running   0          2m2s
harbor2-postgresql-0                     1/1     Running   0          2m1s
harbor2-redis-master-0                   1/1     Running   0          2m1s
harbor2-registry-f59759858-lzj46         2/2     Running   0          2m1s
harbor2-trivy-0                          1/1     Running   0          2m1s

$ kubectl logs -f harbor2-trivy-0
harbor-adapter-trivy 15:59:25.08
harbor-adapter-trivy 15:59:25.09 Welcome to the Bitnami harbor-adapter-trivy container
harbor-adapter-trivy 15:59:25.09 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-harbor-adapter-trivy
harbor-adapter-trivy 15:59:25.09 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-harbor-adapter-trivy/issues
harbor-adapter-trivy 15:59:25.10
harbor-adapter-trivy 15:59:25.10 INFO  ==> ** Starting Harbor Adapter Trivy setup **
harbor-adapter-trivy 15:59:25.13 INFO  ==> ** Harbor Adapter Trivy setup finished! **
harbor-adapter-trivy 15:59:25.15 INFO  ==> ** Starting Harbor Adapter Trivy **
{"built_at":"unknown","commit":"none","level":"info","msg":"Starting harbor-scanner-trivy","time":"2020-09-08T15:59:25Z","version":"dev"}
{"level":"debug","msg":"Current process","pid":1,"time":"2020-09-08T15:59:25Z"}
{"gid":0,"home_dir":"/","level":"debug","msg":"Current user","time":"2020-09-08T15:59:25Z","uid":1001}
{"level":"debug","mode":"dgrwxrwxr-x","msg":"trivy cache dir permissions","time":"2020-09-08T15:59:25Z"}
{"level":"debug","mode":"dgrwxrwxr-x","msg":"trivy reports dir permissions","time":"2020-09-08T15:59:25Z"}
ERROR: write_concurrency_controls_max_concurrency - dial tcp 10.23.249.46:6379: connect: connection refused
{"addr":":8080","level":"warning","msg":"Starting API server without TLS","time":"2020-09-08T15:59:26Z"}
ERROR: heartbeat - dial tcp 10.23.249.46:6379: connect: connection refused
ERROR: requeuer.process - dial tcp 10.23.249.46:6379: connect: connection refused
ERROR: write_known_jobs - dial tcp 10.23.249.46:6379: connect: connection refused
ERROR: requeuer.process - dial tcp 10.23.249.46:6379: connect: connection refused
ERROR: heartbeat - dial tcp 10.23.249.46:6379: connect: connection refused
ERROR: periodic_enqueuer.should_enqueue - dial tcp 10.23.249.46:6379: connect: connection refused

Case 3: Setting the parameters with --set

In this case, I am setting the parameters as part of the install command:

$ helm install harbor3 --set service.type=ClusterIP --set service.tls.enabled=false --set externalURL=http://my-url.my.domain bitnami/harbor

$ kubectl get pods
NAME                                           READY   STATUS    RESTARTS   AGE
harbor3-chartmuseum-5c6c996476-ql9pt     1/1     Running   0          51s
harbor3-clair-8699f4b9d6-6tsvj           2/2     Running   0          51s
harbor3-core-8465fc97-rtv9x              1/1     Running   0          51s
harbor3-jobservice-6cfc65b484-5hjkf      0/1     Running   0          51s
harbor3-nginx-7686dff464-hfxb7           1/1     Running   0          51s
harbor3-notary-server-5898cc7bd-q2z9b    1/1     Running   0          51s
harbor3-notary-signer-6b8d68d97d-wldlj   1/1     Running   0          51s
harbor3-portal-566d57cdf7-t48zj          1/1     Running   0          50s
harbor3-postgresql-0                     1/1     Running   0          50s
harbor3-redis-master-0                   1/1     Running   0          50s
harbor3-registry-64dd9d7d6f-psdm5        2/2     Running   0          50s
harbor3-trivy-0                          1/1     Running   0          50s

$ kubectl logs -f harbor3-trivy-0
harbor-adapter-trivy 06:10:10.44
harbor-adapter-trivy 06:10:10.45 Welcome to the Bitnami harbor-adapter-trivy container
harbor-adapter-trivy 06:10:10.45 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-harbor-adapter-trivy
harbor-adapter-trivy 06:10:10.46 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-harbor-adapter-trivy/issues
harbor-adapter-trivy 06:10:10.46
harbor-adapter-trivy 06:10:10.46 INFO  ==> ** Starting Harbor Adapter Trivy setup **
harbor-adapter-trivy 06:10:10.48 INFO  ==> ** Harbor Adapter Trivy setup finished! **
harbor-adapter-trivy 06:10:10.49 INFO  ==> ** Starting Harbor Adapter Trivy **
{"built_at":"unknown","commit":"none","level":"info","msg":"Starting harbor-scanner-trivy","time":"2020-09-09T06:10:10Z","version":"dev"}
{"level":"debug","msg":"Current process","pid":1,"time":"2020-09-09T06:10:10Z"}
{"gid":0,"home_dir":"/","level":"debug","msg":"Current user","time":"2020-09-09T06:10:10Z","uid":1001}
{"level":"debug","mode":"dgrwxrwxr-x","msg":"trivy cache dir permissions","time":"2020-09-09T06:10:10Z"}
{"level":"debug","mode":"dgrwxrwxr-x","msg":"trivy reports dir permissions","time":"2020-09-09T06:10:10Z"}
ERROR: write_concurrency_controls_max_concurrency - dial tcp 10.23.244.127:6379: connect: connection refused
{"addr":":8080","level":"warning","msg":"Starting API server without TLS","time":"2020-09-09T06:10:11Z"}
ERROR: heartbeat - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: requeuer.process - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: requeuer.process - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: requeuer.process - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: requeuer.process - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: heartbeat - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: periodic_enqueuer.should_enqueue - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: periodic_enqueuer.loop.enqueue - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: requeuer.process - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: dead_pool_reaper.reap - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: heartbeat - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: write_known_jobs - dial tcp 10.23.244.127:6379: connect: connection refused
ERROR: worker.fetch - dial tcp 10.23.244.127:6379: connect: connection refused

In this case, the issue also appears.


What is different between installation methods?

When using a custom values file (case 2) or setting parameters with --set (case 3), the values applied to the installation are the same; but when modifying the existing values.yaml (case 1), the applied parameters are different.

$ helm get values --all harbor > modifying_values.txt
$ helm get values --all harbor2 > using_external_file.txt
$ helm get values --all harbor3 > setting_parameters.txt
$ colordiff using_external_file.txt setting_parameters.txt
# No output means the same content

$ colordiff modifying_values.txt using_external_file.txt
8c8
-   tag: 0.12.0-debian-10-r139
---
+   tag: 0.12.0-debian-10-r119
13a14,15
+   args: null
+   chartPostFormFieldName: null
14a17,18
+   command: null
+   contextPath: null
22a27,28
+   extraEnvVarsCM: null
+   extraEnvVarsSecret: null
24a31
+   indexLimit: null
25a33
+   lifecycleHooks: null
33a42,43
+   maxStorageObjects: null
+   maxUploadSize: null
36a47
+   provPostFormFieldName: null
49c60,61
-   tls: {}
---
+   tls:
+     existingSecret: null
55a68,69
+     args: null
+     command: null
58a73,74
+     extraEnvVarsCM: null
+     extraEnvVarsSecret: null
59a76
+     lifecycleHooks: null
79a97,98
+   httpProxy: null
+   httpsProxy: null
85a105,106
+     args: null
+     command: null
88a110,111
+     extraEnvVarsCM: null
+     extraEnvVarsSecret: null
89a113
+     lifecycleHooks: null
108c132,133
-   tls: {}
---
+   tls:
+     existingSecret: null
119c144
-   tag: 2.0.2-debian-10-r20
---
+   tag: 2.0.2-debian-10-r0
126c151
-   tag: 2.0.2-debian-10-r21
---
+   tag: 2.0.2-debian-10-r0
136a162,163
+   args: null
+   command: null
140a168,169
+   extraEnvVarsCM: null
+   extraEnvVarsSecret: null
143a173
+   lifecycleHooks: null
165a196
+   secretKey: null
168c199,200
-   tls: {}
---
+   tls:
+     existingSecret: null
169a202
+   uaaSecretName: null
178c211
-   tag: 2.0.2-debian-10-r19
---
+   tag: 2.0.2-debian-10-r0
179a213,216
+   clairDatabase: null
+   clairPassword: null
+   clairUsername: null
+   coreDatabase: null
180a218,223
+   notaryServerDatabase: null
+   notaryServerPassword: null
+   notaryServerUsername: null
+   notarySignerDatabase: null
+   notarySignerPassword: null
+   notarySignerUsername: null
182a226
+   sslmode: null
195a240,241
> fullnameOverride: null
> harborAdminPassword: null
210a257,258
+   args: null
+   command: null
213a262,263
+   extraEnvVarsCM: null
+   extraEnvVarsSecret: null
217a268
+   lifecycleHooks: null
242c293,294
-   tls: {}
---
+   tls:
+     existingSecret: null
252c304
-   tag: 2.0.2-debian-10-r19
---
+   tag: 2.0.2-debian-10-r0
253a306
> nameOverride: null
255a309
+   args: null
256a311
+   command: null
259a315,316
+   extraEnvVarsCM: null
+   extraEnvVarsSecret: null
262a320
+   lifecycleHooks: null
294c352
-   tag: 1.19.2-debian-10-r15
---
+   tag: 1.19.1-debian-10-r23
299a358,359
+     args: null
+     command: null
302a363,364
+     extraEnvVarsCM: null
+     extraEnvVarsSecret: null
305a368
+     lifecycleHooks: null
332a396,397
+     args: null
+     command: null
335a401,402
+     extraEnvVarsCM: null
+     extraEnvVarsSecret: null
338a406
+     lifecycleHooks: null
370c438
-   tag: 2.0.2-debian-10-r20
---
+   tag: 2.0.2-debian-10-r0
377c445
-   tag: 2.0.2-debian-10-r19
---
+   tag: 2.0.2-debian-10-r0
384a453
+       realm: null
385a455
+     caBundleSecretName: null
387a458
+       maxthreads: null
390a462
+       chunksize: null
392c464,476
-     oss: {}
---
+       rootdirectory: null
+     oss:
+       accesskeyid: null
+       accesskeysecret: null
+       bucket: null
+       chunksize: null
+       encrypt: null
+       endpoint: null
+       internal: null
+       region: null
+       rootdirectory: null
+       secretkey: null
+       secure: null
393a478
+       accesskey: null
394a480,482
+       chunksize: null
+       encrypt: null
+       keyid: null
395a484,490
+       regionendpoint: null
+       rootdirectory: null
+       secretkey: null
+       secure: null
+       sse: null
+       storageclass: null
+       v4auth: null
396a492
+       accesskey: null
397a494,510
+       authversion: null
+       chunksize: null
+       container: null
+       domain: null
+       domainid: null
+       endpointtype: null
+       insecureskipverify: null
+       password: null
+       prefix: null
+       region: null
+       secretkey: null
+       tempurlcontainerkey: null
+       tempurlmethods: null
+       tenant: null
+       tenantid: null
+       trustid: null
+       username: null
426a540,541
+   args: null
+   command: null
429a545,546
+   extraEnvVarsCM: null
+   extraEnvVarsSecret: null
432a550
+   lifecycleHooks: null
455c573,574
-   tls: {}
---
+   tls:
+     existingSecret: null
465c584
-   tag: 2.0.2-debian-10-r19
---
+   tag: 2.0.2-debian-10-r2
568a688
+   nameOverride: null
750a871
+   nameOverride: null
879a1001,1002
+     args: null
+     command: null
882a1006,1007
+     extraEnvVarsCM: null
+     extraEnvVarsSecret: null
883a1009
+     lifecycleHooks: null
922a1049,1050
+     args: null
+     command: null
925a1054,1055
+     extraEnvVarsCM: null
+     extraEnvVarsSecret: null
926a1057
+     lifecycleHooks: null
945c1076,1077
-   tls: {}
---
+   tls:
+     existingSecret: null
955c1087
-   tag: 2.0.2-debian-10-r19
---
+   tag: 2.0.2-debian-10-r0
962c1094
-   tag: 2.0.2-debian-10-r20
---
+   tag: 2.0.2-debian-10-r0
964a1097,1098
+   externalTrafficPolicy: null
+   loadBalancerIP: null
966c1100,1103
-   nodePorts: {}
---
+   nodePorts:
+     http: null
+     https: null
+     notary: null
978a1116
+   args: null
980a1119
+   command: null
985a1125,1126
+   extraEnvVarsCM: null
+   extraEnvVarsSecret: null
991a1133
+   lifecycleHooks: null
1020c1162,1163
-   tls: {}
---
+   tls:
+     existingSecret: null
1031c1174
-   tag: 2.0.2-debian-10-r22
---
+   tag: 2.0.2-debian-10-r1

It seems that when modifying the existing values.yaml (case 1), the empty objects and null parameters are not expanded/set, but when using a custom values file (case 2) or --set (case 3) they are. In the same way, the image tags are not updated in cases 2 and 3.

In the end, it seems you can work around the issue by editing the existing values.yaml, but I would like to understand what is happening.
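
One way to narrow this down could be to render the chart with each method and diff the output; a sketch using the same chart and parameters as in the cases above:

$ helm template harbor bitnami/harbor -f my-values.yaml > rendered_with_file.yaml
$ helm template harbor bitnami/harbor \
    --set service.type=ClusterIP \
    --set service.tls.enabled=false \
    --set externalURL=http://my-url.my.domain > rendered_with_set.yaml
$ colordiff rendered_with_file.yaml rendered_with_set.yaml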

osaffer commented 4 years ago

Hi,

Thank you very much for all your different tests, but I am not sure I understand what you did to make it work...

My steps:

  1. helm fetch stable/harbor --untar
  2. I just modified the existing values.yaml, setting the same parameters.
  3. helm install harbor . -n harbor

What did I miss?

carrodher commented 4 years ago

Starting from scratch (new namespace, no existing PVs/PVCs) and running the following, the error doesn't appear:

$ helm fetch stable/harbor --untar
$ cd harbor
$ vim values.yaml # Modifying the values
$ helm install harbor . -f values.yaml -n harbor

But doing the same without -f values.yaml, even just running helm install ., the issue appears. Also, running the helm get values check shows the same difference in the values reported in the previous comment.

osaffer commented 4 years ago

It seems the problem occurs when I label the harbor namespace with istio-injection=enabled:

kubectl label namespace harbor istio-injection=enabled

I did different tests:

It is quite strange ...
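
For reference, toggling the injection label on the namespace between tests looks like this (the second command removes the label using kubectl's trailing-dash syntax):

kubectl label namespace harbor istio-injection=enabled
kubectl label namespace harbor istio-injection-
kubectl get namespace harbor --show-labels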

carrodher commented 4 years ago

AFAIK there are some changes required to fully support Istio; for example, there are some PRs adding Istio compatibility to other charts, see https://github.com/bitnami/charts/pulls?q=is%3Apr+istio+is%3Aclosed

osaffer commented 4 years ago

Where in the values.yaml file should I add an app: label?

Otherwise, I managed to create the VirtualService and Gateway to reach the portal page...

carrodher commented 4 years ago

You can use commonLabels, which will be added to all deployed objects, or registry.nodeSelector/registry.podLabels to add additional labels to the pods; instead of registry you can use the other components.
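
For illustration, a rough sketch of what that could look like with a small extra values file (key names taken from the comment above; the exact structure may differ between chart versions):

$ cat > labels-values.yaml <<'EOF'
commonLabels:
  app: harbor
registry:
  podLabels:
    app: harbor-registry
EOF
$ helm upgrade harbor bitnami/harbor -f labels-values.yaml -n harbor --reuse-values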

stale[bot] commented 4 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

FraPazGal commented 4 years ago

Hi @osaffer,

Did you manage to deploy Harbor without issues? If not, please do not hesitate to tell us so we can keep helping.

osaffer commented 4 years ago

internalTLS:
  enabled: false
service:
  type: ClusterIP
  tls:
    enabled: false
externalURL: http://my-hostname

With that, I configure the external TLS URL with Istio, so I can log in to the Harbor WebUI.

I thought I could use the admin user to log in to the registry, but I noticed these credentials in the values.yaml file; I can do docker login with those credentials:

credentials:
  username: 'harbor_registry_user'
  password: 'harbor_registry_password'
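
For illustration, the login that currently works looks like this (the hostname and credentials are the placeholders from above):

docker login my-hostname -u harbor_registry_user -p harbor_registry_password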

This is not really what I want. I would like to log in with users created in the Harbor WebUI.

This is my Istio ingress configuration.

I access the WebUI at https://my-hostname

cat harborTLS_GOOD.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: harbor-gateway
  namespace: harbor
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my-hostname"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "my-hostname"
    tls:
      mode: SIMPLE
      credentialName: harbor-tls

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: harbor-vs
  namespace: harbor
spec:
  hosts:
  - "my-hostname"
  gateways:
  - harbor-gateway
  http:
  - match:
    - uri:
        prefix: /api/
    rewrite:
      uri: /api/
    route:
    - destination:
        host: harbor
        port:
          number: 80
  - match:
    - uri:
        prefix: /service/
    rewrite:
      uri: /service/
    route:
    - destination:
        host: harbor
        port:
          number: 80
  - match:
    - uri:
        prefix: /v2/
    rewrite:
      uri: /v2/
    route:
    - destination:
        host: harbor-registry
        port:
          number: 5000
  - match:
    - uri:
        prefix: /chartrepo/
    rewrite:
      uri: /chartrepo/
    route:
    - destination:
        host: harbor
        port:
          number: 80
  - match:
    - uri:
        prefix: /c/
    rewrite:
      uri: /c/
    route:
    - destination:
        host: harbor
        port:
          number: 80

---

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: portal-gateway
  namespace: harbor
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my-hostname"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "my-hostname"
    tls:
      mode: SIMPLE
      credentialName: harbor-tls
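
For reference, the manifest above can be applied and checked with the standard commands (this assumes the Istio CRDs are installed):

kubectl apply -f harborTLS_GOOD.yaml
kubectl get gateway,virtualservice -n harbor
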
FraPazGal commented 4 years ago

Hi @osaffer,

As far as I understand, you are able to connect to the Harbor WebUI, right? In that case, I'm not sure there is any problem with TLS or Istio.

Regarding the user configuration, the users that access the WebUI (admin and the ones you create in the admin panel) are different from the registry user, whose credentials are indeed the ones you posted. I might be wrong, but they should be two different types of users, which would be why you can't use them interchangeably.

osaffer commented 4 years ago

Hi,

It was my mistake; I did not use the registry account to push the image. But even though the image is pushed successfully, I cannot see it in the Harbor UI; the repository is not created.

But I can see it if I do https://my harbor/v2/_catalog

Regards

FraPazGal commented 4 years ago

Hi @osaffer,

That is indeed strange; are you pushing the image to the project's repository? As far as I know, you don't need to explicitly create the repository, only the project. Once the project is created, you only need to do:

$ docker push <harbor_address>/<project_name>/<image_name>:<tag>

osaffer commented 4 years ago

Hi, this is what I did: I just pushed it to the project. The push works, but the repository is not created. I saw some errors in the Harbor notary signer; I will share the logs with you on Monday.

osaffer commented 4 years ago

I disabled everything related to TLS in the values file.

TLS is set up in Istio.

kubectl logs -f -n harbor pod/harbor-registry-5b49db4f7f-vw29r -c registry
level=error msg="retryingsink: error writing events: httpSink{http://harbor-core/service/notifications}: response status 404 Not Found unaccepted, retrying"
time="2020-10-05T12:34:40.899536389Z" level=warning msg="httpSink{http://harbor-core/service/notifications} encountered too many errors, backing off"

kubectl logs -f -n harbor pod/harbor-notary-signer-955975f8d-csfzt
grpc: Server.Serve failed to complete security handshake from "172.20.2.1:43592": EOF

kubectl logs -f -n harbor pod/harbor-notary-server-659696fdf-f5fkd
grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 172.26.133.132:7899: connect: connection refused"; Reconnecting to {harbor-notary-signer:7899 }
{"level":"error","msg":"Trust not fully operational: rpc error: code = 14 desc = grpc: the connection is unavailable","
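
A couple of quick checks that might help pin down the notary-signer connection error (a sketch, assuming the release name harbor):

kubectl get svc,endpoints -n harbor | grep notary
kubectl logs -n harbor deploy/harbor-notary-signer --tail=50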

FraPazGal commented 4 years ago

Hi @osaffer,

From what I understand, you got Harbor working with internal TLS, but managing it with Istio is what is causing problems, is that right?

Have you tried following my colleague's advice in https://github.com/bitnami/charts/issues/3631#issuecomment-689499465?

osaffer commented 4 years ago

Hi,

I have just redeployed Harbor with Helm, without Notary. The configuration with Istio seems OK, because I can access the WebUI, and I can log in and push an image with the docker command. But in the WebUI, the repository is not visible.

Also, I saw different how-tos where people are able to docker login with, for example, the Harbor admin account, while I have to use the registry user account described in the values file.

How can I configure it so that I can docker login with a user created in the Harbor WebUI?

I think there is something wrong with Helm.

These are my values:

internalTLS:
  enabled: false
service:
  type: ClusterIP
  tls:
    enabled: false

ingress:
  enabled: true

I left ingress enabled in order not to install the nginx pod; after the installation I delete the Ingress that was created and use my Istio ingress instead.

externalURL: http://istio.do.gov.lu

I left it as http, because https will be managed by my Istio configuration.

relativeurls: false
credentials:
  username: 'register'
  password: '4Register'

## Notary parameters
notary:
  enabled: false

This register account makes no sense to me: why would I create users in Harbor if I cannot docker login and push images with them?

The documentation does not seem very clear.

Thank you

osaffer commented 4 years ago

docker tag debian:1.0 my-hostname/library/debian:1.0
docker push my-hostname/library/debian:1.0
The push refers to repository [my-hostname/library/debian]
4ef54afed780: Pushed
1.0: digest: sha256:5658c83d366defa40f1e43cd9dbee4c8af2d86c31878f41296978b4cea4344bc size: 529

The image is pushed correctly.

Logs from the registry pod:

ime="2020-10-07T11:50:59.091813417Z" level=debug msg="authorizing request" go.version=go1.14.6 http.request.host=my-hostname http.request.id=99166aa3-559f-4989-b12c-9ea0286a48cb http.request.method=GET http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" time="2020-10-07T11:50:59.091888975Z" level=warning msg="error authorizing context: basic authentication challenge for realm "harbor-registry-basic-realm": invalid authorization credential" go.version=go1.14.6 http.request.host=my-hostname http.request.id=99166aa3-559f-4989-b12c-9ea0286a48cb http.request.method=GET http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" 172.20.3.153 - - [07/Oct/2020:11:50:59 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 \(linux\))" time="2020-10-07T11:50:59.117829Z" level=debug msg="authorizing request" go.version=go1.14.6 http.request.host=my-hostname http.request.id=888d1cb3-7e2c-4959-ab65-b518f416a6d0 http.request.method=HEAD http.request.remoteaddr=172.20.3.1 http.request.uri="/v2/library/debian/blobs/sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.digest="sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" vars.name="library/debian" time="2020-10-07T11:50:59.206077801Z" level=info msg="authorized request" go.version=go1.14.6 http.request.host=my-hostname http.request.id=888d1cb3-7e2c-4959-ab65-b518f416a6d0 http.request.method=HEAD http.request.remoteaddr=172.20.3.1 http.request.uri="/v2/library/debian/blobs/sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.digest="sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" vars.name="library/debian" time="2020-10-07T11:50:59.206203119Z" level=debug msg=GetBlob auth.user.name=register go.version=go1.14.6 http.request.host=my-hostname http.request.id=888d1cb3-7e2c-4959-ab65-b518f416a6d0 http.request.method=HEAD http.request.remoteaddr=172.20.3.1 http.request.uri="/v2/library/debian/blobs/sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.digest="sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" vars.name="library/debian" time="2020-10-07T11:50:59.208394748Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=2.070877ms service=registry version=v2.7.1 time="2020-10-07T11:50:59.210375886Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.072799ms service=registry version=v2.7.1 
time="2020-10-07T11:50:59.212270762Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.552308ms service=registry version=v2.7.1 time="2020-10-07T11:50:59.212511728Z" level=debug msg="filesystem.URLFor("/docker/registry/v2/blobs/sha256/57/57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8/data")" auth.user.name=register go.version=go1.14.6 http.request.host=my-hostname http.request.id=888d1cb3-7e2c-4959-ab65-b518f416a6d0 http.request.method=HEAD http.request.remoteaddr=172.20.3.1 http.request.uri="/v2/library/debian/blobs/sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" trace.duration=16.855µs trace.file="/bitnami/blacksmith-sandox/docker-distribution-2.7.1/src/github.com/docker/distribution/registry/storage/driver/base/base.go" trace.func="github.com/docker/distribution/registry/storage/driver/base.(Base).URLFor" trace.id=f3438854-42d9-44e6-82e1-0c0d11284aa6 trace.line=217 vars.digest="sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" vars.name="library/debian" time="2020-10-07T11:50:59.214012142Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.353131ms service=registry version=v2.7.1 172.20.3.153 - - [07/Oct/2020:11:50:59 +0000] "HEAD /v2/library/debian/blobs/sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8 HTTP/1.1" 200 0 "" "docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 \(linux\))" time="2020-10-07T11:50:59.215039114Z" level=info msg="response completed" go.version=go1.14.6 http.request.host=my-hostname http.request.id=888d1cb3-7e2c-4959-ab65-b518f416a6d0 http.request.method=HEAD http.request.remoteaddr=172.20.3.1 http.request.uri="/v2/library/debian/blobs/sha256:57df1a1f1ad841deaf50c8f662d77e93b4b17af776ed66148116607f9aceffa8" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" http.response.contenttype="application/octet-stream" http.response.duration=98.758368ms http.response.status=200 http.response.written=0 time="2020-10-07T11:50:59.247497363Z" level=debug msg="authorizing request" go.version=go1.14.6 http.request.host=my-hostname http.request.id=ec765928-4775-424e-a580-93ef3b47fd92 http.request.method=HEAD http.request.remoteaddr=172.20.2.0 http.request.uri="/v2/library/debian/blobs/sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.digest="sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" vars.name="library/debian" time="2020-10-07T11:50:59.334574455Z" level=info msg="authorized request" go.version=go1.14.6 http.request.host=my-hostname http.request.id=ec765928-4775-424e-a580-93ef3b47fd92 http.request.method=HEAD http.request.remoteaddr=172.20.2.0 http.request.uri="/v2/library/debian/blobs/sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 
kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.digest="sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" vars.name="library/debian" time="2020-10-07T11:50:59.334807265Z" level=debug msg=GetBlob auth.user.name=register go.version=go1.14.6 http.request.host=my-hostname http.request.id=ec765928-4775-424e-a580-93ef3b47fd92 http.request.method=HEAD http.request.remoteaddr=172.20.2.0 http.request.uri="/v2/library/debian/blobs/sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.digest="sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" vars.name="library/debian" time="2020-10-07T11:50:59.336795739Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.914069ms service=registry version=v2.7.1 time="2020-10-07T11:50:59.338069779Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=946.567µs service=registry version=v2.7.1 time="2020-10-07T11:50:59.339974976Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.077311ms service=registry version=v2.7.1 time="2020-10-07T11:50:59.34017913Z" level=debug msg="filesystem.URLFor("/docker/registry/v2/blobs/sha256/18/1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73/data")" auth.user.name=register go.version=go1.14.6 http.request.host=my-hostname http.request.id=ec765928-4775-424e-a580-93ef3b47fd92 http.request.method=HEAD http.request.remoteaddr=172.20.2.0 http.request.uri="/v2/library/debian/blobs/sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" trace.duration=17.082µs trace.file="/bitnami/blacksmith-sandox/docker-distribution-2.7.1/src/github.com/docker/distribution/registry/storage/driver/base/base.go" trace.func="github.com/docker/distribution/registry/storage/driver/base.(Base).URLFor" trace.id=3df9cf74-b85a-45b4-9c01-eab3b626e2ff trace.line=217 vars.digest="sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" vars.name="library/debian" time="2020-10-07T11:50:59.341103218Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=867.528µs service=registry version=v2.7.1 time="2020-10-07T11:50:59.342027888Z" level=info msg="response completed" go.version=go1.14.6 http.request.host=my-hostname http.request.id=ec765928-4775-424e-a580-93ef3b47fd92 http.request.method=HEAD http.request.remoteaddr=172.20.2.0 http.request.uri="/v2/library/debian/blobs/sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" http.response.contenttype="application/octet-stream" http.response.duration=95.414459ms http.response.status=200 http.response.written=0 172.20.3.153 - - [07/Oct/2020:11:50:59 +0000] "HEAD 
/v2/library/debian/blobs/sha256:1846324f1f116e1c06f887b81f733494eeb35f9f0172d9876dbc058df0808a73 HTTP/1.1" 200 0 "" "docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 \(linux\))" time="2020-10-07T11:50:59.361442018Z" level=debug msg="authorizing request" go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.425729994Z" level=error msg="retryingsink: error writing events: httpSink{http://harbor-core/service/notifications}: response status 404 Not Found unaccepted, retrying" time="2020-10-07T11:50:59.425754102Z" level=warning msg="httpSink{http://harbor-core/service/notifications} encountered too many errors, backing off" time="2020-10-07T11:50:59.446348038Z" level=info msg="authorized request" go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.446416113Z" level=debug msg=PutImageManifest auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.446559673Z" level=debug msg="Putting a Docker Manifest!" 
auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.446581486Z" level=debug msg="(manifestStore).Put" auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.44659671Z" level=debug msg="(schema2ManifestHandler).Put" auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.448267593Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.627795ms service=registry version=v2.7.1 time="2020-10-07T11:50:59.450507179Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.94075ms service=registry version=v2.7.1 time="2020-10-07T11:50:59.452476426Z" level=info msg="redis: connect harbor-redis-master:6379" go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.613269ms service=registry version=v2.7.1 time="2020-10-07T11:50:59.452713741Z" level=debug msg="filesystem.Stat("/docker/registry/v2/blobs/sha256/56/5658c83d366defa40f1e43cd9dbee4c8af2d86c31878f41296978b4cea4344bc/data")" auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" trace.duration=48.303µs trace.file="/bitnami/blacksmith-sandox/docker-distribution-2.7.1/src/github.com/docker/distribution/registry/storage/driver/base/base.go" trace.func="github.com/docker/distribution/registry/storage/driver/base.(Base).Stat" trace.id=4f636a2a-e88c-44bc-95ad-10c0eeb1f0e1 trace.line=155 vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.454476932Z" level=info msg="redis: connect harbor-redis-master:6379" 
go.version=go1.14.6 instance.id=b04d4635-cab6-452d-b2a0-8711ffcf4e5c redis.connect.duration=1.579268ms service=registry version=v2.7.1 time="2020-10-07T11:50:59.469776283Z" level=debug msg="filesystem.PutContent("/docker/registry/v2/repositories/library/debian/_manifests/revisions/sha256/5658c83d366defa40f1e43cd9dbee4c8af2d86c31878f41296978b4cea4344bc/link")" auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" trace.duration=15.011629ms trace.file="/bitnami/blacksmith-sandox/docker-distribution-2.7.1/src/github.com/docker/distribution/registry/storage/driver/base/base.go" trace.func="github.com/docker/distribution/registry/storage/driver/base.(Base).PutContent" trace.id=9c2f2b4a-4be0-41de-8ca7-8f92eca591bb trace.line=110 vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.48221644Z" level=debug msg="filesystem.PutContent("/docker/registry/v2/repositories/library/debian/_manifests/tags/1.0/index/sha256/5658c83d366defa40f1e43cd9dbee4c8af2d86c31878f41296978b4cea4344bc/link")" auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" trace.duration=12.1735ms trace.file="/bitnami/blacksmith-sandox/docker-distribution-2.7.1/src/github.com/docker/distribution/registry/storage/driver/base/base.go" trace.func="github.com/docker/distribution/registry/storage/driver/base.(Base).PutContent" trace.id=aae56ccf-cd90-452b-91e3-612e5dd5ba23 trace.line=110 vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.494793647Z" level=debug msg="filesystem.PutContent("/docker/registry/v2/repositories/library/debian/_manifests/tags/1.0/current/link")" auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" trace.duration=12.494869ms trace.file="/bitnami/blacksmith-sandox/docker-distribution-2.7.1/src/github.com/docker/distribution/registry/storage/driver/base/base.go" trace.func="github.com/docker/distribution/registry/storage/driver/base.(Base).PutContent" trace.id=71f84f9a-e3dc-47a2-a699-28e458e802fb trace.line=110 vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.494923485Z" level=debug msg="Succeeded in putting manifest!" 
auth.user.name=register go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" vars.name="library/debian" vars.reference=1.0 time="2020-10-07T11:50:59.495091836Z" level=info msg="response completed" go.version=go1.14.6 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=my-hostname http.request.id=66468f90-b7f5-41ec-a548-d564ff83a263 http.request.method=PUT http.request.remoteaddr=172.20.1.0 http.request.uri="/v2/library/debian/manifests/1.0" http.request.useragent="docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 (linux))" http.response.duration=134.583621ms http.response.status=201 http.response.written=0 172.20.3.153 - - [07/Oct/2020:11:50:59 +0000] "PUT /v2/library/debian/manifests/1.0 HTTP/1.1" 201 0 "" "docker/19.03.12 go/go1.14.4 git-commit/48a6621 kernel/5.7.0-kali1-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 \(linux\))" 172.20.2.82 - - [07/Oct/2020:11:51:00 +0000] "GET / HTTP/1.1" 200 0 "" "Go-http-client/1.1" time="2020-10-07T11:51:00.429246382Z" level=error msg="retryingsink: error writing events: httpSink{http://harbor-core/service/notifications}: response status 404 Not Found unaccepted, retrying" time="2020-10-07T11:51:00.429265881Z" level=warning msg="httpSink{http://harbor-core/service/notifications} encountered too many errors, backing off" 172.20.2.1 - - [07/Oct/2020:11:51:00 +0000] "GET / HTTP/1.1" 200 0 "" "kube-probe/1.19" time="2020-10-07T11:51:01.432693569Z" level=error msg="retryingsink: error writing events: httpSink{http://harbor-core/service/notifications}: response status 404 Not Found unaccepted, retrying" time="2020-10-07T11:51:01.432716696Z" level=warning msg="httpSink{http://harbor-core/service/notifications} encountered too many errors, backing off" time="2020-10-07T11:51:02.436411825Z" level=error msg="retryingsink: error writing events: httpSink{http://harbor-core/service/notifications}: response status 404 Not Found unaccepted, retrying" time="2020-10-07T11:51:02.436438452Z" level=warning msg="httpSink{http://harbor-core/service/notifications} encountered too many errors, backing off"

curl https://register:register@my-hostname/v2/_catalog
{"repositories":["library/debian"]}

curl http://harbor-core/service/notifications
<!DOCTYPE HTML>
404 - Page Not Found