@alphamarket could you provide some more information - especially the Envoy version - it might be helpful to see the config also.
Did you try debug logging the Envoy proxies? Are there any clues in the app log?
@phlax Envoy version: envoyproxy/envoy:v1.23-latest

> did you try debug logging the Envoy proxies?

How can I do that?

> Are there any clues in the app log?

The app is a very robust C++ engine; each app instance is capable of handling 6,000 requests per second, so the app is not a bottleneck here...
> How can I do that?

https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/run-envoy#debugging-envoy

> The app is a very robust C++ engine; each app instance is capable of handling 6,000 requests per second, so the app is not a bottleneck here...

No, but it might give some indication as to why the connection is being terminated.
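A minimal sketch of how that debug logging could be switched on for Envoy pods like these, assuming they are started from a Deployment whose container args can be edited; the container name and bootstrap path below are assumptions, and only the last two args change anything:

# Hypothetical excerpt of the Envoy Deployment's pod template; everything except
# the final "--log-level debug" pair mirrors what is already described in this thread.
spec:
  containers:
    - name: envoy                              # container name is an assumption
      image: envoyproxy/envoy:v1.23-latest     # image tag taken from the comment above
      args:
        - "-c"
        - "/etc/envoy/envoy.yaml"              # bootstrap path is an assumption (image default)
        - "--log-level"
        - "debug"                              # enables debug logging per the linked docs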
@phlax Envoy's config:
node:
  cluster: envoy-cluster
  id: 3921a62b-e522-42d7-88d7-1cbbcbfacbd2

dynamic_resources:
  lds_config:
    path: /etc/envoy/envoy-lds.yaml

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 9901

static_resources:
  ################################################################################
  # Clusters
  ################################################################################
  clusters:
    # Cluster: app
    - name: app_cluster
      type: STRICT_DNS
      dns_lookup_family: V4_ONLY
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: app_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: app
                      port_value: 80
# /etc/envoy/envoy-lds.yaml
resources:
  ################################################################################
  # HTTP listeners
  ################################################################################
  - "@type": type.googleapis.com/envoy.config.listener.v3.Listener
    name: http_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
      - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              http_protocol_options:
                accept_http_10: true
              stat_prefix: ingress_http
              use_remote_address: true
              xff_num_trusted_hops: 0
              access_log:
                - name: envoy.access_loggers.stdout
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
              http_filters:
                - name: envoy.filters.http.router
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
              route_config:
                name: services_route
                virtual_hosts:
                  # example.com
                  - name: example.com
                    domains: ["example.com"]
                    retry_policy:
                      retry_on: 5xx,reset,connect-failure,refused-stream
                      num_retries: 10
                      per_try_timeout: 10s
                    routes:
                      - match: { prefix: "/" }
                        route: { cluster: app_cluster }
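Regarding the access logger in the listener above: giving it an explicit format that includes %RESPONSE_FLAGS% is a low-effort way to see which flag Envoy attaches to the 503s (UC would indicate upstream connection termination, UF an upstream connection failure). A sketch, with the format string purely illustrative:

# Sketch: the same StdoutAccessLog entry as in the listener above, but with an
# explicit format that prints the response code, response flags and upstream host.
access_log:
  - name: envoy.access_loggers.stdout
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
      log_format:
        text_format_source:
          inline_string: "[%START_TIME%] %REQ(:METHOD)% %REQ(:PATH)% %RESPONSE_CODE% %RESPONSE_FLAGS% %UPSTREAM_HOST%\n"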
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.
We have deployed the Envoy proxy in Kubernetes. This is the output of the pods:

As you can see, all of Envoy's replicas and the app are in the Running state, but when we benchmark the app using the Apache ab tool:

We observe some 503 error responses from Envoy:

Although all pods are active and every single pod of the app can easily handle 1,000 requests, why are we seeing the "upstream connect error or disconnect/reset before headers. reset reason: connection termination" error?
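This reset reason typically means the upstream (the app) closed the TCP connection while Envoy still considered it reusable, which often happens when the app's keep-alive timeout is shorter than Envoy's idle timeout for upstream connections. One setting commonly examined for this is the cluster-level idle timeout; a sketch against the app_cluster defined earlier in this thread, where the 5s value is purely illustrative and would need to sit below the app's own keep-alive timeout:

# Sketch: additions to the existing app_cluster definition; only
# typed_extension_protocol_options is new, the timeout value is illustrative.
clusters:
  - name: app_cluster
    # ... existing fields (type, lb_policy, load_assignment) unchanged ...
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http_protocol_options: {}        # keep plain HTTP/1.1 to the app
        common_http_protocol_options:
          idle_timeout: 5s                 # assumption: shorter than the app's keep-alive timeout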