Closed — aupadh12 closed this 1 week ago
But I am getting a lot of HTTP 502 (Bad Gateway) errors, and I believe this is due to strict-origin-when-cross-origin.
What is leading you to believe that the 502 is related to CORS?
`strict-origin-when-cross-origin` isn't an error. It is your browser's default referrer policy. I think if you look at other requests that succeeded, you will see that it is the referrer policy on most of them.
Also, a 502 isn't a common response when a cross-origin request is rejected.
Is there something apart from what's in the screenshot that's leading you to suspect CORS as the issue? Ordinarily, the UI wouldn't even be issuing any cross-origin requests.
Something else is going on here.
Do you know how I can troubleshoot this further? I am running this on a 1.30 EKS cluster and using certificates issued by Sectigo.
Check the response body in case there are more details there. If not, you're going to have to look at logs for load balancers, proxies, ingress controllers, etc. -- anything between your browser and the Kargo server that might have responded with a 502.
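To make that concrete, here is a minimal troubleshooting sketch. The hostname is hypothetical, and the ingress-nginx namespace and deployment names assume a standard install; adjust both to your environment.

```shell
# Reproduce the request outside the browser. CORS is enforced by the
# browser, not the server, so a 502 from curl rules CORS out entirely:
curl -vk https://kargo.example.com/

# Look for 502s in the ingress controller's access logs to see whether
# it (rather than something in front of it) generated the response:
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller | grep ' 502 '

# Confirm the kargo-api Service actually has ready endpoints behind it:
kubectl get endpoints kargo-api -n kargo
```

If the Endpoints list is empty, the 502 is coming from nginx failing to reach any backend pod; if curl succeeds where the browser fails, the problem is in front of the ingress.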
This doesn't appear to be a Kargo problem.
I tried to find the cause but I am still not able to figure it out. I am also running Argo CD on the same cluster, and it is running fine without any errors.
I am attaching the Ingress and Service below. Do you see any issues here?
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kargo-ingress
  namespace: kargo
  labels:
    helm.sh/chart: kargo-1.0.3
    app.kubernetes.io/name: kargo
    app.kubernetes.io/instance: kargo
    app.kubernetes.io/version: "v1.0.3"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: api
  annotations:
    cert-manager.io/cluster-issuer: sectigo
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/proxy-body-size: 5000m
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kargo-api
  namespace: kargo
  labels:
    helm.sh/chart: kargo-1.0.3
    app.kubernetes.io/name: kargo
    app.kubernetes.io/instance: kargo
    app.kubernetes.io/version: "v1.0.3"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: api
spec:
  type: ClusterIP
  ports:
```
Hi @krancour ,
I was able to resolve this issue after setting `PERMISSIVE_CORS_POLICY_ENABLED: "true"` and adding this annotation in the ingress.yaml file:
`nginx.ingress.kubernetes.io/service-upstream: "true"`
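For anyone else hitting this, here is a sketch of what the first change amounts to on the kargo-api Deployment, assuming the variable is injected as a plain environment entry; the exact Helm values key that produces it may differ in your chart version.

```yaml
# Assumed shape of the resulting env entry on the kargo-api container;
# how your chart values map to this may differ.
spec:
  template:
    spec:
      containers:
        - name: api
          env:
            - name: PERMISSIVE_CORS_POLICY_ENABLED
              value: "true"
```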
There are restrictions when we try to implement things at big enterprise pharma companies, and simply closing the issue is not good. In the end, I had to change the Kargo configuration to get this working.
@aupadh12 I am glad you got it working somehow. Do you have any explanation for why these changes worked?
As I mentioned before, I see no evidence of a CORS issue (`strict-origin-when-cross-origin` is not an error). This being the case, relaxing the CORS policy amounts to a random configuration change.
`nginx.ingress.kubernetes.io/service-upstream: "true"` has to do with how traffic is routed from your ingress controller to your pods.
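For reference, that annotation switches ingress-nginx's upstream from the pod endpoint IPs it discovers to the Service's ClusterIP, leaving load-balancing to kube-proxy:

```yaml
# With this set, nginx proxies to the Service's ClusterIP instead of
# directly to pod IPs taken from the Endpoints object.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
```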
If these changes made any meaningful difference, it reinforces the notion that your issue was somewhere within your infrastructure.
Description
I am trying to implement Kargo in our environment, but I am getting a lot of HTTP 502 (Bad Gateway) errors, and I believe this is due to strict-origin-when-cross-origin. Is there something extra I need to do here?
I have installed Kargo 1.0.3 in an AWS EKS cluster.
Screenshots