nagarajatantry opened this issue 4 years ago
Any input on this?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@tannaga How did you end up resolving this?
I'm seeing a ton of these errors, about 10K error lines in the last 2 hours, and it's not even a production cluster. This is running on GKE.
I'm observing the same issue on GKE. Restarting pods helps, but the issue re-appears from time to time.
We're observing this on GKE too.
Huh, @rishabhparikh and @Zebradil, what version of Emissary are you using?
Hi @kflynn, one and a half years ago we were evaluating Emissary-ingress and saw this issue. But since we decided to go with another solution, I don't have any additional information on it anymore.
@Zebradil Thanks -- I meant to tag the folks who'd recently commented on this issue, and misread the year for you, mea culpa!
Describe the bug
During a performance test, I have configured the Ambassador pods and my upstream service to scale up when they breach a 60% CPU threshold. When scale-up events happen in both the Ambassador and upstream pods at the same time, I start seeing 503 errors with the log message below in my upstream service (Go). This does not happen when either Ambassador or the upstream service is pre-scaled.
To Reproduce
Expected behavior
Scale-up events complete without errors.
Versions (please complete the following information):
Additional context
I have tested with different setups.
In cases 1 and 2, I see upwards of 10k 503 errors (proportionate to the TPS) and the error message below in the upstream logs. I don't see this issue when Ambassador is not in the path (setup 3).
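503s that appear only while both Ambassador and the upstream are scaling at the same time are often a race between the proxy's endpoint updates and pods starting or terminating. As a point of comparison, here is a minimal sketch of a Go upstream that delays and drains on SIGTERM so in-flight requests are not dropped during scale events; the port, paths, delay, and timeouts are illustrative assumptions, not values from this setup.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	// Readiness endpoint: only answers 200 once the server is actually accepting traffic.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		_, _ = w.Write([]byte("ok"))
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// Serve in the background so the main goroutine can wait for a termination signal.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("listen: %v", err)
		}
	}()

	// SIGTERM is what the kubelet sends when a pod is replaced or scaled down.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Give the proxy a few seconds to stop routing to this endpoint before the
	// listener closes, then drain any in-flight requests.
	time.Sleep(5 * time.Second)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}
```

If the upstream already drains along these lines and the 503s persist, that points more toward the Ambassador/Envoy side of the scale event than toward the upstream dropping connections.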