## What
Fixes: https://issues.redhat.com/browse/THREESCALE-10582
Upstream timeouts don't work with the Camel Service. Specifically, the upstream connection policy is not working when `https_proxy` is being used (configured either with env vars or with a policy). This PR fixes the integration of the upstream connection policy with every `https_proxy` use case.

We considered adding connection options to the proxy policy and the camel policy, mainly because the "Upstream connection" policy refers to "upstream", which can be confusing when a proxy is being used: the connection to the upstream backend is no longer made by APIcast. Instead, APIcast opens a connection to the proxy. So, ideally, the upstream connection options would only apply to connections to the backend "upstream".

We decided instead to apply the upstream connection policy to any "upstream" connection APIcast initiates, whether to a proxy or to the actual upstream backend. Implementation-wise this is easier: there is no need to add extra connection parameters to the proxy policies. Furthermore, if users combine the Upstream connection policy with `http_proxy`, their configuration still applies, whereas with new connection parameters in the proxy policies that use case would break. Additionally, if connection options were added to the policies as optional parameters, we would also need new env vars for the use case where proxies are configured via env vars. That is too much complexity just to tie the "upstream" concept to the actual backend (`api_backend` in the service configuration). Instead, the upstream connection policy applies to any "upstream" connection APIcast makes, regardless of whether it targets a proxy or the upstream backend.
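For context, both policies live in the service's policy chain; with this fix the `upstream_connection` timeouts also govern the connection APIcast opens to the proxy. A minimal sketch (the proxy URL and timeout values are illustrative):

```json
{
  "policy_chain": [
    {
      "name": "upstream_connection",
      "configuration": {
        "connect_timeout": 1,
        "send_timeout": 1,
        "read_timeout": 1
      }
    },
    {
      "name": "camel",
      "configuration": {
        "https_proxy": "http://camel-proxy:8080"
      }
    },
    { "name": "apicast" }
  ]
}
```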
## Verification Steps

### Upstream connection integration with `https_proxy` camel proxy

`https_proxy` use case: APIcast --> camel proxy --> upstream (TLS).

This env uses the real `https://echo-api.3scale.net:443` as `api_backend`. My round-trip latency to it is ~400ms.

Let's start with the timeouts set to 1 sec; the request should be accepted.
Run environment
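The exact commands come from the dev environment included in this PR and are not reproduced here; a hypothetical sketch of the step (the compose setup, gateway port and `user_key` are assumptions):

```shell
# Bring up the gateway, the camel proxy and the upstream.
docker compose up -d

# Send a test request through the gateway (port and user_key are illustrative).
curl -i 'http://localhost:8080/test?user_key=foo'
```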
The request should be accepted (`200 OK`), as the connection timeouts should not be exceeded.

Now, let's lower the timeout threshold to something like 100ms, which should be exceeded because the upstream is far away.
Stop the gateway (CTRL-C).
Restore the `apicast-config.json` file, then apply 100ms timeouts.
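For example, with `jq` (a sketch; it assumes the timeouts sit in an `upstream_connection` entry of the service's policy chain inside `apicast-config.json`):

```shell
jq '(.services[].proxy.policy_chain[]
     | select(.name == "upstream_connection")).configuration =
       {connect_timeout: 0.1, send_timeout: 0.1, read_timeout: 0.1}' \
  apicast-config.json > apicast-config.json.new \
  && mv apicast-config.json.new apicast-config.json
```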
Run environment
The request should fail (`502 Bad Gateway`), as the connection timeouts should be exceeded. The logs should show the following line:
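The original excerpt is omitted here; assuming the standard nginx error format APIcast uses, it should look something like:

```
[error] ... upstream timed out (110: Connection timed out) while connecting to upstream ...
```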
Clean the env before starting the next step.
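For instance (hypothetical; use whatever teardown the dev environment provides):

```shell
docker compose down --volumes --remove-orphans
```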
### Upstream connection integration with `https_proxy` with proxy policy (tinyproxy)

`https_proxy` dev environment setup.

`https_proxy` use case: APIcast --> tiny proxy --> upstream (TLS).

Timeouts are set to 0.1 sec.
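In this variant the proxy is set through the `http_proxy` policy instead of the camel policy. A minimal sketch of the chain (the proxy URL is illustrative; tinyproxy listens on 8888 by default):

```json
{
  "policy_chain": [
    {
      "name": "upstream_connection",
      "configuration": {
        "connect_timeout": 0.1,
        "send_timeout": 0.1,
        "read_timeout": 0.1
      }
    },
    {
      "name": "http_proxy",
      "configuration": {
        "https_proxy": "http://tinyproxy:8888"
      }
    },
    { "name": "apicast" }
  ]
}
```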
Run environment
The request should be accepted (`200 OK`), as the connection timeouts should not be exceeded.
Now, let's simulate some network latency using Docker containers with traffic control. We will add 200ms of latency to the container running `socat` between the proxy and the upstream backend; it is called the `example.com` service.

Stop the gateway (CTRL-C).
We are going to modify network-related settings, so the `NET_ADMIN` capability is needed.
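With docker compose this is a `cap_add` entry on the service definition; a sketch (the service and file layout are assumptions):

```yaml
services:
  example.com:
    cap_add:
      - NET_ADMIN
```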
Run environment with the new config
Install the `tc` (traffic control) command, then add 200ms of latency to the outbound traffic of the `example.com` service, as sketched below.
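A sketch, assuming an Alpine-based image for the `example.com` container and `eth0` as its interface:

```shell
# Inside the example.com (socat) container: install iproute2, which provides tc.
apk add iproute2

# Delay every outbound packet on eth0 by 200ms using the netem qdisc.
tc qdisc add dev eth0 root netem delay 200ms
```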
The request should be rejected (`503 Service Temporarily Unavailable`), as the connection timeouts should now be exceeded. The logs should show the following line:
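The excerpt is again omitted here; assuming the standard nginx error format, it should be another `upstream timed out ... while connecting to upstream` line, this time for the connection made through tinyproxy.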