jeandevops closed this issue 4 years ago
Hello... Is someone there?
Could you try clearing your browser's cache?
Hello @ywk253100, thanks for the reply.
I've already cleared it and even tried running in a private/incognito window as well.
I was getting `GET /api/users/current failed with error: {"code":401,"message":"UnAuthorize"}` for the admin user, and the "state mismatch" error (https://github.com/goharbor/harbor/issues/9384) for the OIDC users, after a fresh installation (tested with both 1.9.0 and 1.9.1-dev). I haven't dug deep into it yet, but both error messages are gone after a single change: switching from the external (Sentinel) Redis server to the internal one.
I appear to be having the same issue. @alexnguyen91, what do you think?
```shell
sudo kubectl port-forward svc/my-release-harbor-portal 80:80
```
@tufank Harbor only works with a single-entry-point Redis; for more detail, refer to the External Redis section of https://github.com/goharbor/harbor-helm/blob/master/docs/High%20Availability.md#configuration
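As a sketch, pointing Harbor at one Redis entry point rather than at Sentinel-managed replicas directly could look like the following values.yaml fragment (key names are assumptions based on the chart's redis section; the host is a placeholder):

```yaml
# Hypothetical values.yaml fragment: a single external Redis endpoint.
redis:
  type: external
  external:
    host: redis.example.com   # placeholder; a single entry point, not Sentinel replicas
    port: 6379
```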
@alexellis If you expose Harbor via NodePort or ClusterIP, you should forward the requests to the nginx service rather than the portal.
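For reference, a minimal sketch of exposing through the nginx entry point with NodePort, so that the portal, `/c/login` and `/api/*` all share one host (field names assumed from the chart's expose section; the port is a placeholder):

```yaml
# Hypothetical values.yaml fragment: expose Harbor via the nginx
# entry service instead of hitting the portal service directly.
expose:
  type: nodePort
  tls:
    enabled: false
  nodePort:
    ports:
      http:
        nodePort: 30002   # placeholder node port
```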
I'm using the internal Redis and I'm hitting nginx instead of the portal... Any news about this login problem on a fresh installation using the Helm chart?
Guys, I have a small hint. At versions 1.0.0 and 1.1.0 of the Helm chart the problem does not appear (I'm using the 1.2.0 branch, so this seems to be a problem specific to that version of the chart).
Same issue here: 1.1.0 works well; however, 1.2.0 and 1.3.0 fail.
I found that the issue comes from an unhealthy jobservice and redis: https://github.com/goharbor/harbor-helm/issues/480. When the login fails, the response header of /c/login contains two set-cookie entries, so /api/users/current uses the one that is not yet logged in; this seems to be related to the unhealthy redis. After fixing the permissions of the redis volume, the issue was solved.
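A quick way to check for this duplicate-cookie symptom is to count the Set-Cookie headers in the /c/login response. A sketch (the hostname is a placeholder, and a saved sample response stands in for the live server here):

```shell
# In practice you would capture the headers from your instance, e.g.:
#   curl -sk -D headers.txt -o /dev/null https://harbor.example.com/c/login
# For illustration, fake a response that exhibits the bug:
cat > headers.txt <<'EOF'
HTTP/1.1 200 OK
Set-Cookie: sid=first-session; Path=/; HttpOnly
Set-Cookie: sid=second-session; Path=/; HttpOnly
EOF
# More than one session cookie means /api/users/current may use the wrong one:
grep -ci '^set-cookie:' headers.txt   # prints 2 for this sample
```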
One addition to this ticket: we had the same trouble with the Harbor UI using an external Redis in HA.
In my situation Redis was deployed from the stable/redis-ha chart, and it seemed logical to enable the stickyBalancing option for HAProxy. But that option induced the same issue: multiple disconnections.
The fix was to set the value back to false (its initial state) and to add nginx.ingress.kubernetes.io/affinity-mode: "persistent" on the ingress controller. We run 3 replicas (across 3 AZs) for all critical components.
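The ingress side of that fix can be sketched as annotations on the Harbor ingress (these are ingress-nginx annotation names; note that affinity-mode only applies together with cookie affinity, which is an assumption about the rest of this setup):

```yaml
# Sketch: sticky sessions handled by the ingress controller, while
# stickyBalancing stays false in the stable/redis-ha chart values.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
```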
I think the title of this issue is too general, which leads to different problems being discussed in one issue.
@jeandevops please clarify whether you have found the root cause or not. Is it because of the Redis problem?
Tried the fix mentioned by @secret104278 and it solved the problem. Thanks everybody!
I don't think this should be closed. If there's an issue with redis in the default chart - it should be fixed and the chart should be updated. It looks like @secret104278 has created a PR to resolve this issue #593. @reasonerjt @ywk253100 Would either of you be able to review this?
/reopen
In case you are not able to log in to Harbor when exposing the service through ClusterIP, please make sure you are exposing both the portal and the API correctly:
```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: harbor
  namespace: harbor
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`harbor.example.com`)
      services:
        - kind: Service
          name: harbor-harbor-portal
          namespace: harbor
          port: 80
          scheme: http
    - kind: Rule
      match: Host(`harbor.example.com`) && (PathPrefix(`/api`) || PathPrefix(`/service`) || PathPrefix(`/v2`) || PathPrefix(`/chartrepo`) || PathPrefix(`/c`))
      services:
        - kind: Service
          name: harbor-harbor-core
          namespace: harbor
          port: 80
          scheme: http
  tls:
    secretName: example-cert
```
This should solve all your problems!
Hello! I'm trying to use the Helm chart (harbor-1.2.0, app version 1.9.0) on Kubernetes 1.14.6, but I'm facing the same problem described in these issues:
https://github.com/goharbor/harbor/issues/4161 https://github.com/goharbor/harbor/issues/3418
The first time, I can log in and change the password. I can even log in again once or twice, but after some time the login button breaks.
I'm hitting the web interface directly through the Nginx component (no Ingress or proxy in the way). For this I've made some customizations to the Nginx template deployment.yaml to use "hostNetwork" and "ClusterFirstWithHostNet" (because we use Kong as our Ingress controller, and Harbor breaks the authentication headers used by OpenID at the registry; that's the "X-Forwarded-Proto $scheme" problem reported in issue #3114 at goharbor/harbor. I've tried to follow the workarounds but had no success, but that's a conversation for another day ^^)
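As a sketch, that customization amounts to adding the standard pod-spec fields to the nginx deployment template (exact placement inside the chart template is an assumption):

```yaml
# Hypothetical fragment of the nginx deployment's pod spec:
spec:
  template:
    spec:
      hostNetwork: true                    # bind directly to the node's network
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution while on hostNetwork
```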
That's a piece of log from the core pod:
And my values.yaml is like:
PS: I've already tried clearing/disabling cookies (in Firefox and Google Chrome), etc. I'm not using the master branch, and I've already tried running without all the optional components (Notary, Clair and Chartmuseum).
Thanks in advance!