I am still getting intermittent 503s even with only one replica.
ACCESS [2019-04-30T06:23:58.601Z] "GET /api/v1/query?query=kube_pod_labels%7Blabel_app%3D%22connect-sidekiq%22%7D&time=1556605419.186&_=1556604765861 HTTP/2" 503 UC 0 57 0 - "100.96.66.0" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" "de590163-2ac6-4508-89c7-03937c6f73f8" "prometheus.granite.rock.swiggy" "100.67.36.178:80"
This is happening for 10-20% of the requests
Facing similar issue on the latest version of Ambassador (0.60.2)
Also seeing this issue with 0.71.0
- Services intermittently return 503s. I believe this occurs when the metrics-server pod is having issues, though I am still investigating that.
We were encountering this issue on version 0.51.1. After upgrading to version 0.61.1, we still have the same issue.
We are seeing intermittent 503 and 403 response codes. Adding a retry_policy has taken the number of 503 issues down to almost 0, but we are still seeing intermittent 403 responses. The requests that give a 403 do not reach our Auth service. We are getting UAEX response flags on all of those.
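For reference, a Mapping-level retry_policy along those lines might look like the sketch below; this is only an illustrative example, and the Mapping name, prefix, and upstream service are placeholders rather than anything from this thread (apiVersion getambassador.io/v2 assumes Ambassador 1.x):

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: example-mapping          # placeholder name
spec:
  prefix: /api/                  # placeholder route prefix
  service: example-service       # placeholder upstream service
  retry_policy:
    retry_on: "5xx"              # retry when the upstream returns a 5xx
    num_retries: 3               # example retry budget
```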
Is the retry configuration applied to the external auth call? If not, is there a way to configure that?
Can anybody confirm that retry_policy works for AuthService?
The changes were merged, but I looked at the related issue in the Envoy repo (https://github.com/envoyproxy/envoy/issues/5974), and it was closed without adding retry support for the envoy.ext_authz filter.
I believe we have already taken this patch in our version of Envoy.
@richarddli If I've found the right configuration definition for the ext_authz filter (https://github.com/datawire/ambassador/blob/master/go/apis/envoy/config/filter/http/ext_authz/v2/ext_authz.pb.go#L72), there is no field for a retry policy, but I'm not experienced with Go so I may be wrong...
Hi @richarddli, did you have a chance to recheck whether the retry policy works for the AuthService configuration in the latest releases of Ambassador? If we define retries per Mapping or globally it works fine, but when defined only for the AuthService it doesn't seem to work.
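For comparison, the global retry configuration that reportedly does work is set on the ambassador Module; a minimal sketch, assuming the Module-level retry_policy field and using example values only:

```yaml
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
spec:
  config:
    retry_policy:
      retry_on: "5xx"            # example condition, not taken from this thread
      num_retries: 3             # example retry budget
```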
Hi, any closure on this? We're experiencing the same behavior with just a single replica of plain Envoy, an authorization service, and an endpoint backend. Around 20% of requests get 403 or 503 when we generate higher load. Note that the failed requests never reach the target component at all (403: the authorization service, 503: the backend).
We are seeing this issue with Ambassador v0.70.0 configured with 3 replicas. ~15% of requests encounter 403 under high load (~1000 req/sec), but with a lower rate everything works as expected. We are planning to upgrade Ambassador to the latest version, but wanted to know whether the issue was expected to be fixed in a version later than v0.70.0. Here's an example error log:
ACCESS [2019-09-12T20:35:25.044Z] "GET /route/endpoint1 HTTP/1.1" 403 UAEX 0 0 5001 - "10.1.0.128,10.1.0.44" "hey/0.0.1" "678fbfc4-2718-41a2-a16d-a217ddd39ca6" "xyz.westus2.cloudapp.azure.com" "-"
@richarddli can you please comment on this?
For us it was a wrong Kubernetes configuration. Our pods didn't have enough connections enabled via sysctl.
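The comment above does not say which sysctl was changed, so the following is only a hypothetical sketch of raising a connection-related kernel setting (net.core.somaxconn, picked purely as an example) at the pod level; note that this particular sysctl is treated as unsafe by Kubernetes and must be allowed via the kubelet's --allowed-unsafe-sysctls flag:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                  # placeholder name
spec:
  securityContext:
    sysctls:
      - name: net.core.somaxconn     # example sysctl; the original comment does not name one
        value: "4096"                # example backlog size
  containers:
    - name: app
      image: example/image:latest    # placeholder image
```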
We are using ambassador:0.75.0 with 3 replicas and getting a similar issue: intermittent failures while hitting the authorisation service (HTTP/1.1" 403 UAEX 0 0 5002 -). Connections are getting closed after ~5 seconds.
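If that ~5 second cutoff is the AuthService request timeout (which, as far as I know, defaults to 5000 ms), one thing to try is raising timeout_ms on the AuthService; a minimal sketch with placeholder names and example values:

```yaml
apiVersion: getambassador.io/v2
kind: AuthService
metadata:
  name: authentication               # placeholder name
spec:
  auth_service: "auth-service:3000"  # placeholder address of the external auth service
  proto: http
  timeout_ms: 10000                  # example value; the default is believed to be 5000 ms
```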
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
We're experiencing this on version 0.86.1. Can anyone comment on what's needed to have this rectified?
I'm experiencing this as well, running in AWS, when the concurrency is high enough. Firing 200 requests (5 concurrent) usually makes at least one fail with response flag UAEX or UC. When UAEX is raised we can't find any traces in our AuthService. When UC is raised we can find a trace, but the trace says everything is okay and 200 is returned.
To me it seems like UAEX is raised when the connection is closed before the AuthService is reached, while UC is raised when the connection is closed before the AuthService has responded.
We're running version 1.0
Same issue here: 403s or 503s with UC and UAEX codes in version 0.86.1. IMO this issue should be reopened.
The latest version is 1.1.1. Maybe you should try upgrading first?
Haven't had the time to try it yet, but I'm hopeful that the new setting https://www.getambassador.io/reference/core/ambassador/#upstream-idle-timeout-cluster_idle_timeout_ms in version 1.1.1 might solve the issue for me.
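For anyone else trying this, the setting linked above is configured on the ambassador Module; a minimal sketch, with the timeout value chosen only as an example:

```yaml
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
spec:
  config:
    cluster_idle_timeout_ms: 30000   # example value: drop idle upstream connections after 30 s
```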
Update: I have managed to rectify this issue in my cluster, still running on version 0.86.1.
In my case, I found that the bottleneck was my external AuthService not processing requests in time. All I did was increase resources and add replicas on oathkeeper's deployment to ensure it has enough capacity, and it hasn't been throwing errors as of late.
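The exact changes aren't given above, so this is only a hypothetical sketch of what bumping replicas and resources on the oathkeeper deployment could look like (the image tag and all numbers are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oathkeeper                    # the auth deployment mentioned above
spec:
  replicas: 3                         # example replica count
  selector:
    matchLabels:
      app: oathkeeper
  template:
    metadata:
      labels:
        app: oathkeeper
    spec:
      containers:
        - name: oathkeeper
          image: oryd/oathkeeper:latest   # placeholder tag
          resources:
            requests:
              cpu: "500m"             # example values, not from the thread
              memory: "256Mi"
            limits:
              cpu: "1"
              memory: "512Mi"
```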
Did anyone manage to solve this issue? Or did you try to apply any workarounds to mitigate the number of errors between Ambassador and AuthService?
@sekaninat Do you remember what you changed and what values you had before?
@richarddli Shouldn't this issue be reopened? I'm also seeing this issue unfortunately :(
Believe it or not, I had a lot of 403/503 UAEX errors; after fixing kube-dns, everything returns 200.
In our case some of the errors went away after increasing the CPU limits for the Ambassador deployment, but the problem still remains.
Did anyone find an official solution for that issue?
I am also getting this issue. Running Ambassador 1.13 with 3 pods. Load on the server is very low and it still happens.
Any solutions or workarounds yet?
Did we get any resolution on this? We are also getting this error with emissary-ingress 2.1, and we tried with 2 pods.
Did anyone find a solution for this? I am also getting this error even with version 3.0.0. My app randomly gets 403s, even with appropriate resources and 3 pods for Ambassador. The problem is that the request does not reach my microservice and a 403 is returned, but only randomly. #4286 #3893
In our case, it was an issue with our design. We have two different instances of the application running, each with its own authentication service. When a request hits the ingress, it sends the authentication request in a round-robin fashion. When a request for a targeted application hit the corresponding auth service it worked, but when it hit the other auth service it failed.
We changed our design to have a single authentication service and it is working fine now. Hope this helps if someone encounters a similar problem.
I think this should not be a 403 and should be a 5xx instead. Irrespective of that, in our case it was clearly a CPU-related bottleneck on the Ambassador pods. Adding extra pods and balancing our workload eliminated the 403s.
Describe the bug
When talking to a service through Ambassador that has an auth service, I was getting what appeared to be random responses of 200, 503, and 403. I had replicas set to 3 for the Ambassador deployment. Upon looking at the logs, one pod was always giving 200 responses, another 503, and another 403. I tried restarting the trouble pods with no luck.
As a workaround I scaled my deployment down to 1 replica and now only seem to be getting 200 responses. I only saw this bug when I upgraded to 0.53.1; I was previously on version 0.50.3.
Versions (please complete the following information):