goharbor / harbor-helm

Web interface login broken #379

Closed · jeandevops closed this 4 years ago

jeandevops commented 5 years ago

Hello! I'm trying to use the helm chart (harbor-1.2.0, app version 1.9.0) on Kubernetes 1.14.6, but I'm facing the same problem described in these issues:

https://github.com/goharbor/harbor/issues/4161
https://github.com/goharbor/harbor/issues/3418

At first I can log in and change the password. I can even log in again once or twice, but after some time the login button breaks.

I'm hitting the web interface directly through the Nginx component (no Ingress or proxy in the way). For this I've made some customizations to the Nginx deployment.yaml template to use "hostNetwork" and "ClusterFirstWithHostNet". (We use Kong as our Ingress controller, and Harbor breaks the authentication headers used by OpenID at the registry; that's a problem related to issue #3114 at goharbor/harbor and "X-Forwarded-Proto $scheme". I've tried to follow the workarounds but had no success; that's a talk for another day ^^)
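For reference, the customization described above amounts to two fields in the pod spec of the chart's Nginx deployment template (a minimal sketch; all surrounding fields are elided):

spec:
  template:
    spec:
      hostNetwork: true                   # share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution working with hostNetwork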

That's a piece of log from the core pod:

2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:227]: OIDC CLI modifier only handles request by docker CLI or helm CLI
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:442]: can not get user information from session
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:511]: user information is nil
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:525]: using local database project manager
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:527]: creating local database security context...
2019-10-01T19:10:01Z [DEBUG] [/common/dao/user.go:269]: Check if user admin is super user
2019-10-01T19:10:01Z [DEBUG] [/core/auth/authenticator.go:139]: Current AUTH_MODE is db_auth
2019/10/01 19:10:01 [D] [server.go:2774] |  172.18.12.232| 200 |  12.986366ms|   match| POST     /c/login   r:/c/login
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:227]: OIDC CLI modifier only handles request by docker CLI or helm CLI
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:442]: can not get user information from session
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:511]: user information is nil
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:525]: using local database project manager
2019-10-01T19:10:01Z [DEBUG] [/core/filter/security.go:527]: creating local database security context...
2019-10-01T19:10:01Z [ERROR] [/common/api/base.go:68]: GET /api/users/current failed with error: {"code":401,"message":"UnAuthorize"}
2019/10/01 19:10:01 [D] [server.go:2774] |  172.18.12.232| 401 |   3.158573ms|   match| GET      /api/users/current   r:/api/users/:id

And my values.yaml is like:

expose:
  type: clusterIP
  tls:
    enabled: true
    secretName: "my-harbor-secret"
    notarySecretName: ""
    commonName: ""
  ingress:
    hosts:
      core: core.harbor.domain
      notary: notary.harbor.domain
    controller: default
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    name: harbor
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
  nodePort:
    name: harbor
    ports:
      http:
        port: 80
        nodePort: 30002
      https:
        port: 443
        nodePort: 30003
      notary:
        port: 4443
        nodePort: 30004
  loadBalancer:
    name: harbor
    IP: ""
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
    annotations: {}
    sourceRanges: []
externalURL: https://harbor.my.domain:8443
persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      existingClaim: "registry-pvc"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: "chartmuseum-pvc"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: "jobservice-pvc"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    database:
      existingClaim: "database-pvc"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      existingClaim: "redis-pvc"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
  imageChartStorage:
    disableredirect: false
    type: filesystem
    filesystem:
      rootdirectory: /storage
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
    gcs:
      bucket: bucketname
      encodedkey: base64-encoded-json-key-file
    s3:
      region: us-west-1
      bucket: bucketname
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
imagePullPolicy: IfNotPresent
imagePullSecrets:
logLevel: debug
harborAdminPassword: "Harbor12345"
secretKey: "not-a-secure-key"
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
  components:
    - core
    - jobservice
    - clair
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v1.9.0
  replicas: 1
  nodeSelector: {ingress: "true"}
  tolerations: []
  affinity: {}
  podAnnotations: {}
portal:
  image:
    repository: goharbor/harbor-portal
    tag: v1.9.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
core:
  image:
    repository: goharbor/harbor-core
    tag: v1.9.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
  secret: ""
  secretName: ""
jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v1.9.0
  replicas: 1
  maxJobWorkers: 10
  jobLogger: file
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
  secret: ""
registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.7.1-patch-2819-v1.9.0
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v1.9.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
  secret: ""
  relativeurls: false
  middleware:
    enabled: false
    type: cloudFront
    cloudFront:
      baseurl: example.cloudfront.net
      keypairid: KEYPAIRID
      duration: 3000s
      ipfilteredby: none
      privateKeySecret: "my-secret"
chartmuseum:
  enabled: true
  absoluteUrl: false
  image:
    repository: goharbor/chartmuseum-photon
    tag: v0.9.0-v1.9.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
clair:
  enabled: true
  image:
    repository: goharbor/clair-photon
    tag: v2.0.9-v1.9.0
  replicas: 1
  updatersInterval: 12
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
notary:
  enabled: true
  server:
    image:
      repository: goharbor/notary-server-photon
      tag: v0.6.1-v1.9.0
    replicas: 1
  signer:
    image:
      repository: goharbor/notary-signer-photon
      tag: v0.6.1-v1.9.0
    replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
  secretName: ""
database:
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: v1.9.0
    password: "changeit"
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.1"
    port: "5432"
    username: "user"
    password: "password"
    coreDatabase: "registry"
    clairDatabase: "clair"
    notaryServerDatabase: "notary_server"
    notarySignerDatabase: "notary_signer"
    sslmode: "disable"
  maxIdleConns: 50
  maxOpenConns: 100
  podAnnotations: {}
redis:
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: v1.9.0
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.2"
    port: "6379"
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    password: ""
  podAnnotations: {}

P.S.: I've already tried cleaning/disabling cookies (in Firefox and Google Chrome), etc. I'm not using the master branch, and I've already tried running without the optional components (Notary, Clair and ChartMuseum).

Thanks in advance!

jeandevops commented 5 years ago

Hello... anyone there?

ywk253100 commented 5 years ago

Could you try clearing your browser's cache?

jeandevops commented 5 years ago

Hello @ywk253100, thanks for the reply.

I've already cleared it, and I also tried running in a private/incognito window.

tufank commented 4 years ago

I was getting "GET /api/users/current failed with error: {"code":401,"message":"UnAuthorize"}" for the admin user and a "state mismatch" error (https://github.com/goharbor/harbor/issues/9384) for OIDC users after a fresh installation (tested with both 1.9.0 and 1.9.1-dev). I haven't dived deep into it yet, but both error messages went away after a single change: switching from an external (Sentinel) Redis server to the internal one.
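For context, that change corresponds to this setting in the chart's values.yaml (only the relevant key shown):

redis:
  type: internal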

alexellis commented 4 years ago

I appear to be having the same issue. @alexnguyen91 what do you think?

sudo kubectl port-forward svc/my-release-harbor-portal 80:80

(Screenshot of the broken login page, 2019-10-16.)

ywk253100 commented 4 years ago

@tufank Harbor only works with a single entry point to Redis; for more detail, refer to the External Redis section of https://github.com/goharbor/harbor-helm/blob/master/docs/High%20Availability.md#configuration
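In practice that means pointing Harbor at one stable address (e.g. a proxy such as HAProxy in front of the Sentinel-managed Redis) rather than at Sentinel itself. A sketch of the relevant values; the host name here is an assumption, use whatever single endpoint your setup provides:

redis:
  type: external
  external:
    host: "redis-haproxy.redis.svc.cluster.local"  # assumed single entry point in front of Redis
    port: "6379"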

ywk253100 commented 4 years ago

@alexellis If you expose Harbor via NodePort or ClusterIP, you should forward the requests to nginx rather than to the portal.
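For example, adapting the port-forward above to target the nginx service instead (the service name comes from expose.clusterIP.name, "harbor" by default, so adjust it to your install):

sudo kubectl port-forward svc/harbor 8443:443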

jeandevops commented 4 years ago

I'm using the internal Redis and I'm hitting Nginx instead of the portal... Any news about this login problem on a fresh installation using the helm chart?

jeandevops commented 4 years ago

Guys, I've got a little hint. With versions 1.0.0 and 1.1.0 of the helm chart the problem does not appear (I'm attempting to use the branch of version 1.2.0; this looks like a problem specific to that version of the chart).
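For anyone who wants to compare versions, the chart can also be pinned at fetch time instead of using a git branch (a sketch; assumes the official chart repo is added under the name "harbor"):

helm repo add harbor https://helm.goharbor.io
helm fetch harbor/harbor --version 1.1.0   # swap in 1.2.0 to reproduce the issue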

secret104278 commented 4 years ago

Same issue here: 1.1.0 works well; however, 1.2.0 and 1.3.0 fail.

secret104278 commented 4 years ago

I found that the issue comes from an unhealthy jobservice and Redis: https://github.com/goharbor/harbor-helm/issues/480. When the login fails, the response headers of /c/login contain two Set-Cookie entries, so /api/users/current uses the one that is not yet logged in; this seems to be related to the unhealthy Redis. After fixing the permissions of the Redis volume, the issue was solved.
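A quick way to check for the duplicate-cookie symptom from the command line (a sketch; the URL and credentials are placeholders, and the form field names should be verified against the login request in your browser's network tab):

# POST the login form and keep only the response headers
curl -sk -D - -o /dev/null \
  -d "principal=admin&password=Harbor12345" \
  https://harbor.my.domain/c/login | grep -i '^set-cookie'
# one Set-Cookie line is expected; two reproduces the behaviour described above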

avaussant commented 4 years ago

Just one addition to this ticket: we had the same trouble with the Harbor UI using an external Redis HA setup.

In my situation Redis was deployed from the stable/redis-ha chart, and it seemed logical to me to activate the stickyBalancing option for HAProxy. But this option induces the same issue: multiple disconnections.

The fix was to set that value back to false (its initial state) and add nginx.ingress.kubernetes.io/affinity-mode: "persistent" for the ingress controller. We have 3 replicas (across 3 AZs) for all critical components.
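Put together, the two changes described above look roughly like this (a sketch; the first key follows the stable/redis-ha chart values, the second goes under expose.ingress.annotations in the Harbor chart):

# stable/redis-ha values
haproxy:
  stickyBalancing: false  # back to the chart's initial state

# Harbor chart values, under expose.ingress.annotations
nginx.ingress.kubernetes.io/affinity-mode: "persistent"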

reasonerjt commented 4 years ago

I think the title of this issue is too general, which leads to different problems being discussed in one issue.

@jeandevops please clarify whether you have found the root cause or not. Is it because of the Redis problem?

jeandevops commented 4 years ago

Tried the fix mentioned by @secret104278 and it solved the problem. Thanks everybody!

artbegolli commented 4 years ago

I don't think this should be closed. If there's an issue with Redis in the default chart, it should be fixed and the chart should be updated. It looks like @secret104278 has created a PR to resolve this issue: #593. @reasonerjt @ywk253100 would either of you be able to review it?

fredleger commented 3 years ago

/reopen

magicoder10 commented 3 years ago

In case you are not able to log in to Harbor when exposing the service through ClusterIP, please make sure you are exposing both the portal and the API correctly:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: harbor
  namespace: harbor
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`harbor.example.com`)
      services:
        - kind: Service
          name: harbor-harbor-portal
          namespace: harbor
          port: 80
          scheme: http
    - kind: Rule
      match: Host(`harbor.example.com`) && (PathPrefix(`/api`) || PathPrefix(`/service`) || PathPrefix(`/v2`) || PathPrefix(`/chartrepo`) || PathPrefix(`/c`))
      services:
        - kind: Service
          name: harbor-harbor-core
          namespace: harbor
          port: 80
          scheme: http
  tls:
    secretName: example-cert

This should solve all your problems!