Open cbchenoweth opened 1 year ago
@AliceProxy as requested during our call earlier today.
```yaml
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  labels:
    app.kubernetes.io/component: ingress
    app.kubernetes.io/name: emissary-ingress
    app.kubernetes.io/instance: emissary-ingress
    app.kubernetes.io/managed-by: argocd
  name: https
  namespace: emissary
spec:
  hostBinding:
    namespace:
      from: SELF
  l7Depth: 1
  port: 8443
  # documentation says to use protocol over protocolStack, but there is a current bug with that approach:
  # https://github.com/emissary-ingress/emissary/issues/4153
  #protocol: HTTPSPROXY
  protocolStack: ["PROXY", "TLS", "HTTP", "TCP"]
  securityModel: XFP
```
```yaml
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  labels:
    app.kubernetes.io/component: ingress
    app.kubernetes.io/name: emissary-ingress
    app.kubernetes.io/instance: emissary-ingress
    app.kubernetes.io/managed-by: argocd
  name: all
  namespace: emissary
spec:
  hostname: "*"
  tlsSecret:
    name: emissary-certificate
  tls:
    min_tls_version: v1.2
    alpn_protocols: h2, http/1.1
  mappingSelector:
    matchLabels:
      # associate all mappings that contain this label
      app.kubernetes.io/component: api
```
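For context, the `mappingSelector` above means only Mappings carrying that label get associated with this Host. A minimal hypothetical example of such a Mapping (the name, prefix, and service here are placeholders, not from our cluster):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: example-api          # hypothetical name
  namespace: emissary
  labels:
    # this label is what the Host's mappingSelector matches on
    app.kubernetes.io/component: api
spec:
  hostname: "*"
  prefix: /example/
  service: example-service   # hypothetical backend service
```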
```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  labels:
    app.kubernetes.io/component: ingress
    app.kubernetes.io/name: emissary-ingress
    app.kubernetes.io/instance: emissary-ingress
    app.kubernetes.io/managed-by: argocd
  name: ambassador
  namespace: emissary
spec:
  config:
    # output logs in json
    envoy_log_type: json
    # See for reference:
    # https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage
    envoy_log_format: {
      "authority": "%REQ(:AUTHORITY)%",
      "bytes_received": "%BYTES_RECEIVED%",
      "bytes_sent": "%BYTES_SENT%",
      "downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",
      "downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",
      "downstream_direct_remote_address": "%DOWNSTREAM_DIRECT_REMOTE_ADDRESS%",
      "duration": "%DURATION%",
      "istio_policy_status": "%DYNAMIC_METADATA(istio.mixer:status)%",
      "method": "%REQ(:METHOD)%",
      "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
      "protocol": "%PROTOCOL%",
      "request_id": "%REQ(X-REQUEST-ID)%",
      "requested_host": "%REQ(HOST)%",
      "requested_server_name": "%REQUESTED_SERVER_NAME%",
      "response_code": "%RESPONSE_CODE%",
      "response_flags": "%RESPONSE_FLAGS%",
      "start_time": "%START_TIME%",
      "upstream_cluster": "%UPSTREAM_CLUSTER%",
      "upstream_host": "%UPSTREAM_HOST%",
      "upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",
      "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
      "upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",
      "user_agent": "%REQ(USER-AGENT)%",
      "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%",
      "x_user_id": "%REQ(X-USER-ID)%"
    }
    use_remote_address: true
    resolver: endpoint
    load_balancer:
      policy: round_robin
    lua_scripts: |
      local base64 = require("lua.base64")
      local utils = require("lua.utils")
      local json = require("lua.json")

      function envoy_on_request(request_handle)
        local authToken = request_handle:headers():get("Authorization")
        if authToken ~= nil then
          local credentials = authToken:gsub("^%s*Bearer%s+", "")
          local jwt = utils.tokenize(credentials, '.', 3)
          local jwt_decoded = json.parse(base64.from_base64(jwt[2]))
          request_handle:headers():add("x-user-id", jwt_decoded["sub"])
        end
      end
```
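For readers less familiar with Lua, the `lua_scripts` header-injection logic above can be sketched in Python (the function name is hypothetical; the `lua.base64`/`lua.utils`/`lua.json` modules in the original are assumed to be helpers bundled into our image):

```python
import base64
import json
import re
from typing import Optional


def user_id_from_authorization(header: Optional[str]) -> Optional[str]:
    """Mirror of the Lua filter: strip the Bearer prefix, split the JWT into
    header/payload/signature, base64url-decode the payload, return "sub"."""
    if header is None:
        return None
    token = re.sub(r"^\s*Bearer\s+", "", header)
    parts = token.split(".")
    if len(parts) != 3:
        return None  # not a well-formed JWT
    payload_b64 = parts[1]
    # JWT segments are base64url-encoded without padding; restore it.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get("sub")
```

Note that, like the Lua version, this decodes the payload without verifying the signature, so `x-user-id` is informational rather than authenticated.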
Just checking if there has been any traction 👀 on this, or anything else we should be investigating on our side to help give more information.

One additional piece of information we can add: the correct configuration does appear to load all the way through if the mapping is removed and re-added completely (though the way it manifests is a little odd). While the mapping is removed we get `404` responses to the endpoint (which makes sense, the mapping was removed); once it is re-added, requests pick up the updated `timeout_ms` rather than the `timeout_ms` from before it was deleted.
**Describe the bug**
When making updates to an existing Mapping configuration, the emissary pods show the updates in their `aconf.json` files, but the updates are not reflected in their `ir.json` or `econf.json` files (and are not actually applied to the Envoy config, so they do not take effect). If we restart the emissary POD, the new configuration propagates through all the files and works.
**To Reproduce**
Steps to reproduce the behavior:

1. Restart the `emissary-ingress` PODs to make sure everything loaded in a "clean slate" (`aconf.json`, `ir.json`, and `econf.json` all consistent).
2. Rewrite the `timeout_ms` value on an existing Mapping.
3. Check the `emissary-ingress` pods' files and see that:
   - `aconf.json` has updated to reflect the changes ✅
   - `ir.json` and `econf.json` are not reflecting the changes ❌
4. Restart the `emissary-ingress` pods:
   - `ir.json` and `econf.json` are now reflecting the correct values when "loaded from clean state" 🤔

**Expected behavior**
Expected the Mapping updates to be reflected all the way through to the Envoy config without needing to restart the PODs.
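To confirm the divergence without eyeballing the full files, the snapshots can be copied out of the pod (e.g. with `kubectl cp`) and scanned for the timeout value. A minimal sketch, assuming local copies of the three files (note that `econf.json` may use Envoy's own field names rather than `timeout_ms`, so the exact key to grep for can differ):

```python
import re
from pathlib import Path


def timeout_ms_values(path: Path) -> set[str]:
    """Collect every distinct timeout_ms value appearing in a snapshot file."""
    return set(re.findall(r'"timeout_ms":\s*(\d+)', path.read_text()))


# Usage, assuming the three snapshots were copied to the current directory:
#   for name in ("aconf.json", "ir.json", "econf.json"):
#       print(name, timeout_ms_values(Path(name)))
# If the bug is present, aconf.json shows the new value while
# ir.json / econf.json still show the stale one.
```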
**Versions (please complete the following information):**
Emissary-ingress: `3.4.0` (using the official docker image `docker.io/emissaryingress/emissary:3.4.0`)

**Additional context**
In some of our Mappings we see a message related to the `HOST`, but we are seeing the same behavior with other Mappings that do not have that message, and even the ones with that message still reflect the updates correctly when the emissary PODs are restarted. Here is condensed output of listing the Mappings in the cluster. We were able to see the same issues with both the `caleb-dummy-api` and the `ccp` Mappings.

I have attached some files for deeper context, and to help verify whether we missed something in our config that is causing trouble:

- `mapping` files in the initial state and after the update
- `aconf.json`, `ir.json`, and `econf.json` files in both the initial state and after the update was applied

In those files I also included the commands I was running to get the file contents. (For the config files on the emissary pod, I was following the debug instructions defined here.)