jaygorrell opened this issue 5 years ago
@jaygorrell Could there be an application level conflict causing the issue? Please check our updated docs on the topic: https://istio.io/docs/concepts/traffic-management/#compatibility-with-application-level-fault-handling
@rcaballeromx Wouldn't that only apply to things like retries where it compounds the value? In the case of a timeout, it doesn't matter if both are handling it since it's a short-circuit.
Either way, there's no application-level fault handling here, and the fact that a 100ms timeout lets a 30+ second response come through is good proof that the header isn't being honored.
I have the exact same problem.
If the request goes through istio-ilbgateway, the header is applied and the timeout takes effect.
But via the mesh it does not take effect, and the x-envoy-upstream-rq-timeout-ms header is not even output to the log.
This happens not only with x-envoy-upstream-rq-timeout-ms but also with the x-envoy-max-retries header.
Is there any update on this issue?
Version (include the output of istioctl version --remote and kubectl version) istio 1.2.2 / k8s v1.13.6-gke.13
How was Istio installed? Helm template w/mTLS
@istio/wg-networking-maintainers does this sound like a real issue or intended behavior? Note the mentioned release is 1.1.
Re-open per https://github.com/istio/istio/issues/19111
As per a recent post on discuss.istio.io, it looks like this is now gated behind a pilot env flag, PILOT_SIDECAR_USE_REMOTE_ADDRESS, and that this was changed for Istio 1.2+.
I've confirmed that setting that flag now allows workloads to tweak Envoy behaviour via headers.
Also, FWIW, enabling PILOT_SIDECAR_USE_REMOTE_ADDRESS does cause other side effects beyond just enabling request-header overrides from inside the mesh, e.g. around X-Forwarded-Proto: https://github.com/istio/istio/issues/15124, https://github.com/istio/istio/issues/7964
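For anyone looking for where to set it: with a Helm-based install the flag can be passed to Pilot through the chart's pilot.env values, something like the snippet below. This is only a sketch; whether pilot.env is supported depends on your chart version, so check your values.yaml before relying on it.

```yaml
# values override for the istio Helm chart (sketch; verify pilot.env exists in your chart version).
# Entries under pilot.env are rendered into the Pilot container's environment.
pilot:
  env:
    PILOT_SIDECAR_USE_REMOTE_ADDRESS: "true"
```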
@Dev25 @jaygorrell did you finally manage to solve this?
With this setup: nginx-ingress --> ingress-gateway --> VirtualService --> Pod
I'm always getting a 15s timeout and can't find a way to change it.
This didn't happen without the nginx-ingress. It seems like the additional hop causes these headers to be stripped out.
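Is setting an explicit timeout on the VirtualService route supposed to override that 15s (which I understand is Envoy's default route timeout)? Something like the following, with hypothetical names (my-service, my-gateway) standing in for my actual config:

```yaml
# Sketch only: hypothetical service and gateway names; adjust hosts/gateways to your setup.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  gateways:
  - my-gateway   # traffic arriving through the ingress-gateway
  - mesh         # plus sidecar-to-sidecar traffic inside the mesh
  http:
  - route:
    - destination:
        host: my-service
    timeout: 30s   # explicit per-route timeout instead of relying on the header
```

If the 15s still shows up with an explicit timeout set, I'd assume it is coming from nginx-ingress or another hop in front rather than from the Istio route.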
The x-envoy-upstream-rq-timeout-ms header appears to work between containers if you enable Envoy's respect_expected_rq_timeout option: https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/filter/http/router/v2/router.proto
In Istio it seems this requires an EnvoyFilter, but I don't know how to set that up.
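The closest I can get to is something like the EnvoyFilter below. It is untested: the field name comes from the v2 router.proto linked above, the filter names assume the pre-1.5 envoy.router naming, and the configPatches syntax assumes Istio 1.3+, so treat it as a sketch rather than a known-good config.

```yaml
# Sketch only (untested): merge respect_expected_rq_timeout into the sidecar's
# HTTP router filter so the timeout headers on inbound requests are honoured.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: respect-expected-rq-timeout
  namespace: istio-system   # root namespace = applies mesh-wide; scope it down if needed
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
            subFilter:
              name: envoy.router
    patch:
      operation: MERGE
      value:
        name: envoy.router
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.http.router.v2.Router
          respect_expected_rq_timeout: true
```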
Bug description
Sending x-envoy-upstream-rq-timeout-ms on requests to a service with no VirtualService (or with a VirtualService that has no timeout configured) has no effect: the service responds successfully after its normal amount of time. This should work, per https://istio.io/docs/tasks/traffic-management/request-timeouts/
Relevant cluster dump info for the route
Relevant debug logs that show two extra headers I sent (x-my-test and x-envoy-upstream-rq-timeout-ms) were received differently:

Affected product area (please put an X in all that apply)
[ ] Configuration Infrastructure
[ X ] Docs
[ ] Installation
[ X ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Expected behavior
A request to a service with x-envoy-upstream-rq-timeout-ms should time out after the desired amount of time, especially when a VirtualService is not setting a value itself.

Steps to reproduce the bug
Send a request to the service with the header x-envoy-upstream-rq-timeout-ms: 100.

Version (include the output of istioctl version --remote and kubectl version)
istio 1.1.4 / k8s 1.11.8

How was Istio installed?
Helm w/ mTLS