emissary-ingress / emissary

open source Kubernetes-native API gateway for microservices built on the Envoy Proxy
https://www.getambassador.io
Apache License 2.0

503 and 403 errors when using more than 1 ambassador pod #1461

Closed: robertrbruno closed this issue 4 years ago

robertrbruno commented 5 years ago

Describe the bug

When talking to a service through Ambassador that has an auth service configured, I was getting what appeared to be random responses among 200, 503, and 403. The Ambassador deployment had replicas set to 3. Looking at the logs, one pod was always returning 200 responses, another 503, and another 403. Restarting the trouble pods did not help.

As a workaround I scaled my deployment down to 1 replica and now only seem to be getting 200 responses. I only saw this bug after upgrading to 0.53.1; I was previously on version 0.50.3.

Versions (please complete the following information):

vaibhavrtk commented 5 years ago

I am still getting intermittent 503s even with only one replica.

ACCESS [2019-04-30T06:23:58.601Z] "GET /api/v1/query?query=kube_pod_labels%7Blabel_app%3D%22connect-sidekiq%22%7D&time=1556605419.186&_=1556604765861 HTTP/2" 503 UC 0 57 0 - "100.96.66.0" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" "de590163-2ac6-4508-89c7-03937c6f73f8" "prometheus.granite.rock.swiggy" "100.67.36.178:80"

vaibhavrtk commented 5 years ago

This is happening for 10-20% of the requests

ankurpshah commented 5 years ago

Facing a similar issue on the latest version of Ambassador (0.60.2).

erulabs commented 5 years ago

Also seeing this issue with 0.71.0: services intermittently return 503. I believe this occurs when the metrics-server pod is having issues, though I am still investigating that.

Whamied commented 5 years ago

We were encountering this issue on version 0.51.1; upgrading to version 0.61.1 has the same issue.

We are seeing intermittent 503 and 403 response codes.

Adding a retry_policy has taken the number of 503 issues down to almost 0, but we are still seeing intermittent 403 responses. The requests that give a 403 do not reach our Auth service. We are getting UAEX response flags on all of those.

Is the retry configuration applied to the external auth call? If not, is there a way to configure that?
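
(For reference, a Mapping-level retry_policy as described in the Ambassador retry documentation looks roughly like the sketch below. The service name and values are placeholders, not taken from this thread, and the CRD form assumes a getambassador.io/v2 install; older releases configured Mappings via annotations.)

```yaml
# Hypothetical Mapping with a retry policy; names and values are examples only.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: example-backend
spec:
  prefix: /backend/
  service: example-backend:8080
  retry_policy:
    retry_on: "5xx"          # retry on upstream 5xx responses
    num_retries: 3           # maximum number of retries
    per_try_timeout: "1s"    # timeout for each individual attempt
```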

dioniseo commented 5 years ago

Can anybody confirm that retry_policy works for AuthService?

The changes were merged, but I actually looked at an issue in the Envoy repo (https://github.com/envoyproxy/envoy/issues/5974) that was closed without adding retry support for the envoy.ext_authz filter.

richarddli commented 5 years ago

I believe we have already taken this patch in our version of Envoy.

dioniseo commented 5 years ago

@richarddli If I am reading the configuration definition for the ext_authz filter correctly (https://github.com/datawire/ambassador/blob/master/go/apis/envoy/config/filter/http/ext_authz/v2/ext_authz.pb.go#L72), there is no field for a retry policy, but I'm not experienced with Go, so I may be wrong...

dioniseo commented 5 years ago

Hi @richarddli, did you have a chance to recheck whether the retry policy works for the AuthService configuration in the latest releases of Ambassador? If we define retries per Mapping or globally it works fine, but when they are defined only for AuthService they don't seem to work.
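
(For context, the global form the comment above refers to would be a retry_policy on the ambassador Module, as in the sketch below; this assumes the Module-level retry_policy described in the Ambassador retry docs, and the values are placeholders.)

```yaml
# Sketch of a global retry policy set on the ambassador Module
# (applies to all Mappings); values are examples only.
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
spec:
  config:
    retry_policy:
      retry_on: "5xx"
      num_retries: 3
```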

sekaninat commented 5 years ago

Hi, any closure on this? I'd say we're experiencing the same behavior with just a single replica of plain Envoy with an authorization service and some endpoint backend. Around 20% of requests are 403s or 503s when we generate higher load. Note that the failed requests are not received by the target component at all (the authorization service for 403s, the backend for 503s).

yamaszone commented 5 years ago

We are seeing this issue with Ambassador v0.70.0 configured with 3 replicas. ~15% of requests encounter 403s under high load (~1000 req/sec), but at lower rates everything works as expected. We are planning to upgrade Ambassador to the latest version, but wanted to know if the issue is expected to be fixed in a version later than v0.70.0. Here's an example error log:

ACCESS [2019-09-12T20:35:25.044Z] "GET /route/endpoint1 HTTP/1.1" 403 UAEX 0 0 5001 - "10.1.0.128,10.1.0.44" "hey/0.0.1" "678fbfc4-2718-41a2-a16d-a217ddd39ca6" "xyz.westus2.cloudapp.azure.com" "-"

@richarddli can you please comment on this?

sekaninat commented 5 years ago

For us it was an incorrect Kubernetes configuration: our pods didn't have enough connections allowed in sysctl.
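
(The comment does not say which sysctls were involved. Purely as an illustration, connection-related sysctls can be raised at the pod level as sketched below; note that net.core.somaxconn is an "unsafe" sysctl and must first be allowed on the kubelet via --allowed-unsafe-sysctls.)

```yaml
# Hypothetical pod-level sysctl tuning; the sysctl names and values are
# assumptions for illustration, not taken from this thread.
apiVersion: v1
kind: Pod
metadata:
  name: ambassador-example
spec:
  securityContext:
    sysctls:
      - name: net.core.somaxconn            # listen backlog size (unsafe sysctl)
        value: "4096"
      - name: net.ipv4.ip_local_port_range  # widen the ephemeral port range
        value: "1024 65535"
  containers:
    - name: ambassador
      image: quay.io/datawire/ambassador:0.75.0   # placeholder image reference
```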

mahbh2001 commented 5 years ago

We are using ambassador:0.75.0 with 3 replicas and getting a similar issue: intermittent failures while hitting the authorisation service (HTTP/1.1" 403 UAEX 0 0 5002 -). Connections are getting closed at ~5 seconds.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

f-a-a commented 4 years ago

We're experiencing this on version 0.86.1. Can anyone comment on what's needed to have this rectified?

oyvinvo commented 4 years ago

I'm experiencing this as well, running in AWS, when the concurrency is high enough. Firing 200 requests (5 concurrent) usually makes at least one fail with response flag UAEX or UC. When UAEX is raised we can't find any traces in our AuthService. When UC is raised we can find a trace, but the trace says everything is okay and 200 is returned.

To me it seems like UAEX is raised when the connection is closed before the AuthService is reached, while UC is raised when the connection is closed before the AuthService has responded.

We're running version 1.0

MateuszCzubak commented 4 years ago

Same issue here: 403s and 503s with UC and UAEX codes on version 0.86.1. IMO this issue should be reopened.

Mokto commented 4 years ago

The latest version is 1.1.1. Maybe you should try upgrading first?

oyvinvo commented 4 years ago

Haven't had the time to try it yet, but I'm fairly confident the new upstream idle timeout setting (https://www.getambassador.io/reference/core/ambassador/#upstream-idle-timeout-cluster_idle_timeout_ms) in version 1.1.1 might solve the issues for me.
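
(The setting referenced above is a Module-level option in Ambassador 1.1+. A minimal sketch, with an example value of 30 seconds:)

```yaml
# Sketch of the upstream idle timeout; the 30000 ms value is just an example.
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
spec:
  config:
    cluster_idle_timeout_ms: 30000
```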

f-a-a commented 4 years ago

Update: I have managed to rectify this issue in my cluster, still running version 0.86.1.

In my case, the bottleneck was my external AuthService not processing requests in time. All I did was increase resources and add replicas on oathkeeper's deployment to ensure it has enough capacity to process requests, and it hasn't been throwing errors as of late.
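
(A sketch of that kind of change follows; the replica count, image tag, and resource values are placeholders, since the actual numbers are not given in the comment.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oathkeeper
spec:
  replicas: 3                              # scale out the external auth service
  selector:
    matchLabels:
      app: oathkeeper
  template:
    metadata:
      labels:
        app: oathkeeper
    spec:
      containers:
        - name: oathkeeper
          image: oryd/oathkeeper:v0.38.0   # placeholder image tag
          resources:
            requests:
              cpu: 500m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
```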

przemek-sl commented 4 years ago

Did anyone manage to solve this issue? Or did you try applying any workarounds to reduce the number of errors between Ambassador and the AuthService?

@sekaninat Do you remember what you changed and what values you had before?

jasperkuperus commented 4 years ago

@richarddli Shouldn't this issue be reopened? I'm also seeing this issue unfortunately :(

hextrim commented 3 years ago

Believe it or not: we had a lot of 403/503 UAEX errors, fixed kube-dns, and now everything is 200.

MateuszCzubak commented 3 years ago

In our case some of the errors went away after increasing the CPU limits for the ambassador deployment, but the problem still remains.

amitzbb commented 3 years ago

Did anyone find an official solution for that issue?

Algirdyz commented 3 years ago

I am also getting this issue, running Ambassador 1.13 with 3 pods. Load on the server is very low and it still happens.

Any solutions or workarounds yet?

prathap0611 commented 2 years ago

Did we get any resolution on this? We are also getting this error with emissary-ingress 2.1, tried with 2 pods.

TalhaNaeem101 commented 1 year ago

Did anyone find any solution for this? I am also getting this error even with version 3.0.0. My app randomly gets 403s, despite using appropriate resources and 3 pods for Ambassador. The problem is that the request does not reach my microservice and a 403 is returned, but it is random. #4286 #3893

prathap0611 commented 1 year ago

Did we get any resolution on this? We are also getting this error with emissary-ingress 2.1 and tried with 2 pods.

In our case, it was an issue with our design. We had two different instances of the application running, each with its own authentication service. When a request hit the ingress, the authentication call was sent in a round-robin fashion. When a request for a given application hit that application's own auth service it worked, but when it hit the other auth service it failed.

We changed our design to a single authentication service and it is working fine now. Hope this helps if someone encounters a similar problem.

krisiye commented 4 months ago

I think this should not be a 403 and should be a 5xx instead. Regardless, in our case it was clearly a CPU bottleneck on the Ambassador pods. Adding extra pods and balancing our workload eliminated the 403s.
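
(If CPU on the Ambassador pods is the bottleneck, one way to add capacity automatically is a CPU-based HorizontalPodAutoscaler on the Ambassador deployment; the names and thresholds below are examples, not from this thread.)

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ambassador
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ambassador
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```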