goharbor / harbor

An open source trusted cloud native registry project that stores, signs, and scans content.
https://goharbor.io
Apache License 2.0

no way to config multi-hostname for a single harbor instance #8243

Open rbhuang opened 5 years ago

rbhuang commented 5 years ago

Our scenario: we have deployed Harbor in our data center and have both an internal and an external hostname for the instance. Our CI pushes images to Harbor via the internal hostname, and other systems may pull images from Harbor through either the internal or the external hostname. I found that Harbor only supports one hostname, so I would like your help figuring out how to configure Harbor for a two-hostname scenario. Thanks, rb

reasonerjt commented 5 years ago

I can't think of a good resolution, because before reading from or writing to the registry you need to fetch a token via the endpoint configured here: https://github.com/goharbor/harbor/blob/master/make/photon/prepare/templates/registry/config.yml.jinja#L31
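
For context, that template ties the registry's token auth to a single realm URL derived from the configured hostname. The rendered auth section of the registry's config.yml looks roughly like the sketch below (values are illustrative, reconstructed from the token URLs quoted later in this thread rather than copied from the template):

auth:
  token:
    realm: https://<configured-hostname>/service/token   # one fixed hostname, baked in at install time
    service: harbor-registry
    issuer: harbor-token-issuer
    rootcertbundle: /etc/registry/root.crt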

A very rough thought: maybe you can modify the registry's config.yml after installing Harbor, following https://docs.docker.com/registry/configuration/, to enable htpasswd and use htpasswd auth for internal access.
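
If you tried that, the change would be an htpasswd auth section in the registry's config.yml along these lines (a minimal sketch based on the Docker registry configuration docs; the file path is a placeholder you would have to create and mount yourself):

auth:
  htpasswd:
    realm: basic-realm
    path: /etc/registry/passwd   # placeholder path to an htpasswd file mounted into the registry container

As far as I know the registry only honours one auth section at a time, so this would replace Harbor's token auth for that registry rather than run alongside it.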

reasonerjt commented 5 years ago

@xaleeks I suggest marking it as won't fix

neo502721 commented 5 years ago

We have the same scenario; we scp images to the Harbor host first, then push to Harbor.

holinnn commented 4 years ago

We also have a similar scenario: we need to expose Harbor both internally and externally. Internally, for obvious reasons, to push/pull images from clusters and over VPN. Externally because we use other cloud providers, and this external endpoint will be restricted by IPs. So we need two hostnames (one per LoadBalancer), but it seems that we hit the issue described by @reasonerjt (when doing docker login with the external domain it redirects to the internal one).

What would be the best way to achieve this? I thought about creating another core deployment and service, pointing to the existing one, and exposing this deployment with the external ingress. But I'm not sure whether it would create issues, like race conditions or other problems.

Timoses commented 4 years ago

Facing the same issue: We have an internal network (for the clusters to fetch the images) and an external network attached.

Using the Web UI from either network/domain works fine.

However, if the hostname is set to the domain pointing at the internal network, then push-based replication from another, external Harbor instance fails:

2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:125]: client for destination registry [type: harbor, URL: https://harbor.ext, insecure: true] created
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:158]: copying ourproj/templateservice:[0.0.3](source registry) to destproj/templateservice:[0.0.3](destination registry)...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:179]: copying ourproj/templateservice:0.0.3(source registry) to destproj/templateservice:0.0.3(destination registry)...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:285]: pulling the manifest of artifact ourproj/templateservice:0.0.3 ...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:291]: the manifest of artifact ourproj/templateservice:0.0.3 pulled
 2020-06-11T10:49:48Z [*ERROR*] [/replication/transfer/image/transfer.go:299]: failed to check the existence of the manifest of artifact destproj/templateservice:0.0.3 on the destination registry: Get https://harbor.int/service/token?scope=repository%3Adestproj%2Ftemplateservice%3Apull&service=harbor-registry: dial tcp: lookup harbor.int on 10.10.10.10:53: no such host

Although it initially targets the harbor.ext domain, it then switches to contacting harbor.int, which of course does not work from an external network.

Likewise, setting the hostname to the external domain name leads to a working replication. However the cluster is no longer able to pull the images.

We will currently opt to manually pull the image from the external registry and manually push them to our registry.

Hopefully, there will be an option to use the registry over various networks with differing domain names in the future (or another solution).

Timoses commented 4 years ago

Note in regard to Kubernetes: when the Harbor instance is configured with the domain name harbor.ext and Kubernetes is configured with an image from harbor.int, the Kubernetes node fails to pull the image (tested on a cluster running Docker as the container runtime).

So it looks as though pulling from an instance through a domain (or network?) different from the one configured in the Harbor instance does not work (I did not conduct a specific test, just witnessed this occurring when I configured another domain in the Harbor instance).

emeryao commented 3 years ago

+1, same scenario here. I deployed a Harbor instance on my LAN server and exposed it to the Internet via Aliyun, which bills by network traffic, so I want to push images to Harbor via an intranet/LAN domain and pull images from an Internet domain (outside our LAN).

cobolbaby commented 3 years ago

+1

fly0512 commented 3 years ago

+1

qcu266 commented 3 years ago

+1

ChristianCiach commented 3 years ago

Same use-case here: Different hostnames for internal and external access.

Why does Harbor insist on using the configured hostname? Can't Harbor just use the hostname of the current HTTP request when constructing the redirect to the token endpoint?

marvindaviddiaz commented 3 years ago

+1

shenshouer commented 3 years ago

The domain is set on the core component in app.conf. You could use one core component per domain and expose services for all of the core components.

withlin commented 2 years ago

+1

jhanos commented 2 years ago

+1

yhm-amber commented 2 years ago

same issue:

I use this to install it:

helm install -n hub --create-namespace --set 'expose.type=nodePort,expose.tls.enabled=false,expose.nodePort.ports.http.nodePort=30002,expose.tls.commonName=.*,externalURL=http://.*,harborAdminPassword=adminadmin,secretKey=not-a-secure-key' -- hub-harbor harbor/harbor

but when I run docker login harbor.hub.svc.cluster.local, I get:

Username: admin
Password:
Error response from daemon: Get http://harbor.hub.svc.cluster.local/v2/: Get http://.*/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp: lookup .*: no such host

If I change the .* to _ while installing, I get:

Username: admin
Password:
Error response from daemon: Get http://harbor.hub.svc.cluster.local/v2/: Get http://_/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp: lookup _ on 10.96.0.10:53: no such host
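
For what it's worth, both errors stem from externalURL being set to http://.* (and then http://_): the /v2/ ping succeeds, but Docker is then redirected to a token endpoint built from that unresolvable value. A sketch of the same settings as a values file, with externalURL pointing at a name the Docker client can actually resolve (the value shown is illustrative), would be:

expose:
  type: nodePort
  tls:
    enabled: false
  nodePort:
    ports:
      http:
        nodePort: 30002
externalURL: http://harbor.hub.svc.cluster.local   # illustrative; use http://<node-ip>:30002 for clients outside the cluster
harborAdminPassword: adminadmin
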
samox73 commented 2 years ago

Any update on this? A multi-domain option would also make it way easier to minimize downtime. Let's take an example: we have Harbor running at harbor.domain.org. Now we want to redeploy (for whatever reason) a new instance pointed to by harbor.domain.net. Virtually zero downtime can be achieved by having a CNAME record for a third domain, harbor.domain.com, which points to one of the other domains. Such a scenario can be useful when you have to create a new k8s cluster without the possibility of migrating your current state.

Robbie558 commented 2 years ago

We have achieved access via multiple hostnames to a K8s-hosted, Helm-deployed Harbor instance using the ingress expose option. We did so by deploying a secondary K8s Ingress based on the one generated by the Helm chart. We have a pair of NGINX load balancers in front of our K8s cluster routing traffic to the Traefik entrypoints on the cluster.

With this configuration we can reach the web UI and run docker push and pull commands using either hostname.

Our environment setup is as follows:

Harbor Helm: v1.9.1 (Harbor v2.5.1)
Kubernetes: v1.24
Ingress Provider: Traefik v2.7.1
Nginx: v1.21.5

An extract of our Harbor values.yaml file covering the pertinent detail:

externalURL: "harbor.cluster_name.service.domain"

expose: 
  type: ingress
  tls:
    certSource: secret
    secret:
      secretName: "harbor.cluster_name.service.domain"
  ingress:
    hosts:
     core: "harbor.cluster_name.service.domain"
    controller: default
    annotations:
      kubernetes.io/ingress.class: traefik
      traefik.ingress.kubernetes.io/router.entrypoints: websecure

registry:
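  # relativeurls makes the registry return relative URLs in Location headers, so redirects keep whichever hostname the client used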
  relativeurls: true

Our secondary ingress definition:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/proxy-body-size: "0"
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/ingress.class: traefik
    meta.helm.sh/release-name: harbor
    meta.helm.sh/release-namespace: harbor-namespace
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  labels:
    app: harbor
    release: harbor
  name: harbor-ingress-secondary
  namespace: harbor-namespace
spec:
  rules:
  - host: harbor.domain
    http:
      paths:
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /api/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /service/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /v2
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /chartrepo/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /c/
        pathType: Prefix
      - backend:
          service:
            name: harbor-portal
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - harbor.domain
    secretName: harbor.cluster_name.service.domain

The NGINX server block to route traffic to our Traefik entrypoint. It is worth noting that the SSL certificates configured on the NGINX servers contain both domains (harbor.cluster_name.service.domain and harbor.domain) in their Subject Alternative Name definitions:

server {
  listen <loadbalancer_ip_address>:443 ssl http2;
  listen <loadbalancer_ip_address>:443 ssl http2;
  listen 443 ssl http2;
  status_zone kubernetes_cluster_name_https;
  server_name harbor.cluster_name.service.domain harbor.domain;
  keepalive_timeout 100;
  include /etc/nginx/ssl.conf;
  ssl_certificate /usr/share/nginx/ssl/harbor/cluster_name/server.crt;
  ssl_certificate_key /usr/share/nginx/ssl/harbor/cluster_name/server.key;
  server_tokens off;
  client_max_body_size 0;

  location / {
    proxy_pass "http://kubernetes_cluster_name_https";
    include /etc/nginx/proxy.conf;
    proxy_read_timeout  90s;
    proxy_set_header X-Forwarded-Proto https;
  }
}

DougTea commented 2 years ago

Here is my workaround. First, add a map directive to the nginx-configuration ConfigMap of the nginx ingress controller.

data:
  http-snippet: |
    map $upstream_http_www_authenticate $modified{
      default '';
      "~^(Bearer realm=\"https://)({your internal host name})(.*)" "$1$host$3";
    }

The map directive can only be added to the http context, and the nginx-configuration ConfigMap is the only place I found where we can edit the http context configuration.

Then, in the Harbor ingress manifest, we add the header overwrite logic:

nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_hide_header www-authenticate;
      add_header www-authenticate $modified always;

antoffka commented 2 years ago

Here is my workaround. First, add a map directive to the nginx-configuration ConfigMap of the nginx ingress controller.

data:
  http-snippet: |
    map $upstream_http_www_authenticate $modified{
      default '';
      "~^(Bearer realm=\"https://)({your internal host name})(.*)" "$1$host$3";
    }

The map directive can only be added to the http context, and the nginx-configuration ConfigMap is the only place I found where we can edit the http context configuration.

Then, in the Harbor ingress manifest, we add the header overwrite logic:

nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_hide_header www-authenticate;
      add_header www-authenticate $modified always;

This did work for me! Thank you very much for sharing @DougTea!

AllForNothing commented 1 year ago

@qnetter Can you have a look at this?

DimArmen commented 1 year ago

+1. I have a similar scenario running Harbor multi-region with DNS fail-over. I need one global hostname for the DNS fail-over and a second, regional hostname for the AWS Route 53 health checks in each region. Unfortunately, it seems like only one hostname is supported.

nueavv commented 1 year ago

+1

hh831 commented 1 year ago

+1...

mddamato commented 1 year ago

+1

silverm0on commented 1 year ago

+1

shelmingsong commented 1 year ago

+1

lu-you commented 8 months ago

Here is my workaround. First, add a map directive to the nginx-configuration ConfigMap of the nginx ingress controller.

data:
  http-snippet: |
    map $upstream_http_www_authenticate $modified{
      default '';
      "~^(Bearer realm=\"https://)({your internal host name})(.*)" "$1$host$3";
    }

The map directive can only be added to the http context, and the nginx-configuration ConfigMap is the only place I found where we can edit the http context configuration. Then, in the Harbor ingress manifest, we add the header overwrite logic:

nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_hide_header www-authenticate;
      add_header www-authenticate $modified always;

This did work for me! Thank you very much for sharing @DougTea!

Why doesn't it work for me? After applying it I still get an invalid username/password error:

$ podman login harbor.xxx.com
Authenticating with existing credentials for harbor.xxx.com
Existing credentials are invalid, please enter valid username and password
Username (harborAdmin):
Password:
Error: logging into "harbor.xxx.com": invalid username/password

DougTea commented 8 months ago

@lu-you check your nginx.conf in your ingress controller

lu-you commented 8 months ago

@lu-you check your nginx.conf in your ingress controller

kubectl get cm -n ingress-nginx ingress-nginx-controller -o yaml

apiVersion: v1
data:
  allow-snippet-annotations: "true"
  http-snippet: |
    map $upstream_http_www_authenticate $modified{
      default '';
      "~^(Bearer realm=\"https://)({inter domain})(.*)" "$1$host$3";
    }

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
    meta.helm.sh/release-name: artifactory
    meta.helm.sh/release-namespace: skiff-artifactory
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_hide_header www-authenticate;
      add_header www-authenticate $modified always;
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-stream-timeout: "300"
    nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "300"

please help me

lu-you commented 8 months ago

@lu-you check your nginx.conf in your ingress controller

please check it out

lu-you commented 8 months ago

nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_hide_header www-authenticate;
  add_header www-authenticate $modified always;

I tried the approach you provided and got an "invalid username/password" error:

Authenticating with existing credentials for harbor.xxxx.com
Existing credentials are invalid, please enter valid username and password
Username (Admin):
Password:
Error: logging into "harbor.xxx.com": invalid username/password

wanghongzhou commented 5 months ago

Facing the same issue: We have an internal network (for the clusters to fetch the images) and an external network attached.

Using the Web UI from either network/domain works fine.

However, if the hostname is set to the domain pointing at the internal network, then push-based replication from another, external Harbor instance fails:

2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:125]: client for destination registry [type: harbor, URL: https://harbor.ext, insecure: true] created
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:158]: copying ourproj/templateservice:[0.0.3](source registry) to destproj/templateservice:[0.0.3](destination registry)...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:179]: copying ourproj/templateservice:0.0.3(source registry) to destproj/templateservice:0.0.3(destination registry)...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:285]: pulling the manifest of artifact ourproj/templateservice:0.0.3 ...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:291]: the manifest of artifact ourproj/templateservice:0.0.3 pulled
 2020-06-11T10:49:48Z [*ERROR*] [/replication/transfer/image/transfer.go:299]: failed to check the existence of the manifest of artifact destproj/templateservice:0.0.3 on the destination registry: Get https://harbor.int/service/token?scope=repository%3Adestproj%2Ftemplateservice%3Apull&service=harbor-registry: dial tcp: lookup harbor.int on 10.10.10.10:53: no such host

Although it initially targets the harbor.ext domain, it then switches to contacting harbor.int, which of course does not work from an external network.

Likewise, setting the hostname to the external domain name leads to a working replication. However the cluster is no longer able to pull the images.

We will currently opt to manually pull the image from the external registry and manually push them to our registry.

Hopefully, there will be an option to use the registry over various networks with differing domain names in the future (or another solution).

+1

DougTea commented 3 months ago

The core reason for this problem is that the www-authenticate header in the response from harbor-core is always set from the value of the environment variable $EXT_ENDPOINT in harbor-core. My workaround is simply rewriting that header in the ingress nginx reverse proxy. Try to verify that the header is actually changed in the response; tools like Charles/Fiddler may be useful. @lu-you
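
To see where that value comes from in a Helm deployment: the chart renders externalURL into the harbor-core ConfigMap, roughly as in the excerpt below (an illustrative sketch; the exact resource name and keys can differ between chart versions), and harbor-core uses it to build the token realm it advertises:

apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-core            # usually prefixed with the Helm release name
data:
  EXT_ENDPOINT: "https://harbor.ext"   # copied from .Values.externalURL; the single hostname clients get redirected to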

nueavv commented 2 months ago

I am currently using Istio's VirtualService to modify header values for handling multiple hostnames in my setup.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: multiple-harbor-host
spec:
  hosts:
  - a.harbor.example.com
  - b.harbor.example.com
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        exact: /
    headers:
      request:
        set:
          Host: "b.harbor.example.com"
    route:
    - destination:
        host: harbor-core
        port:
          number: 80
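
For completeness, the my-gateway referenced above also has to accept both hostnames. A minimal sketch of such a Gateway (the selector, port, and TLS settings are assumptions about the surrounding mesh setup, not taken from the comment above):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway          # assumes the default Istio ingress gateway labels
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: harbor-tls   # hypothetical secret holding a certificate valid for both hostnames
    hosts:
    - a.harbor.example.com
    - b.harbor.example.com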