envoyproxy/envoy

Cloud-native high-performance edge/middle/service proxy
https://www.envoyproxy.io
Apache License 2.0

gRPC-JSON transcoding issue: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111 #23740

Closed: justinyjsong closed this issue 2 years ago

justinyjsong commented 2 years ago

Title: gRPC-JSON transcoding issue: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111

Description:

I am trying to use the gRPC-JSON transcoder for my gRPC server (running locally on the host) with Envoy running in a Docker container, but I am hitting "upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111". I am running this on Linux.

I use docker-compose, which builds the Dockerfile below, to deploy the service.

docker-compose.yaml

version: "3.8"
services:
  envoy-proxy:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        ENVOY_CONFIG: envoy-local.yaml
    ports:
      - "1338:1338"
      - "9901:9901"

Dockerfile

FROM envoyproxy/envoy-dev:latest

ARG ENVOY_CONFIG=envoy-local.yaml

COPY ./envoy/proto_descriptor.pb /etc/envoy/proto_descriptor.pb
RUN chmod go+r /etc/envoy/proto_descriptor.pb
COPY ./${ENVOY_CONFIG} /etc/envoy/envoy-local.yaml
RUN chmod go+r /etc/envoy/envoy-local.yaml
CMD /usr/local/bin/envoy -c /etc/envoy/envoy-local.yaml -l debug

envoy-local.yaml

admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address: {address: 0.0.0.0, port_value: 9901}

static_resources:
  listeners:
  - name: listener1
    address:
      socket_address: {protocol: TCP, address: 0.0.0.0, port_value: 1338}
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: grpc_json
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              # NOTE: by default, matching happens based on the gRPC route, and not on the incoming request path.
              # Reference: https://envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/grpc_json_transcoder_filter#route-configs-for-transcoded-requests
              - match: { prefix: "/", grpc: {}}
                route: {cluster: grpc-backend, timeout: 600s}
          http_filters:
          - name: envoy.filters.http.grpc_json_transcoder
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
              proto_descriptor: "/etc/envoy/proto_descriptor.pb"
              services: ["sample.package.grpc.SampleService"]
              print_options:
                add_whitespace: true
                always_print_primitive_fields: true
                always_print_enums_as_ints: false
                preserve_proto_field_names: false
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router

  clusters:
  - name: grpc-backend
    connect_timeout: 5s
    type: LOGICAL_DNS
    # Uncomment the following line to restrict DNS lookups to IPv4 only.
    # dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    #http2_protocol_options: {}
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    load_assignment:
      cluster_name: grpc-backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                # NOTE: since Docker v18.03.0, the host can be reached from a container
                # as "host.docker.internal"; older Docker versions used "docker.for.mac.localhost" instead.
                # Reference: https://docs.docker.com/docker-for-mac/release-notes/#docker-community-edition-18030-ce-mac59-2018-03-26
                address: 0.0.0.0
                port_value: 1339

This is how I compile my protobuf files (which also import googleapis) and generate the proto descriptor used for JSON transcoding:

PROTO_WORKSPACE=proto_workspace/googleapis/
PB_STORE=src/generated/
PROTO_PATH=proto

# A single protoc invocation generates the Go and gRPC stubs and also writes the
# descriptor set (with imports and source info) that Envoy's transcoder loads.
protoc --proto_path=${PROTO_PATH} --proto_path=${PROTO_WORKSPACE} --include_imports --include_source_info \
  --go_out=${PB_STORE} --go_opt=paths=source_relative \
  --go-grpc_out=${PB_STORE} --go-grpc_opt=paths=source_relative \
  --descriptor_set_out=envoy/proto_descriptor.pb \
  proto/*.proto

Running the gRPC server is as simple as

go run main.go

Calling the gRPC server directly with grpcurl works:

grpcurl -plaintext -d '{"content_type_url":"template", "content": { "name": "test", "num": "123"}}' localhost:1339 sample.package.grpc.SampleService/GenerateTest
{
  "sampleResponse": "0.0.0.0:1338/test-123-1a5337e6-3a81-44ac-9977-04ba624360c4"
}

Here is the proto file that I compiled.

syntax = "proto3";

package sample.package.grpc;

import "google/api/annotations.proto";

option go_package = "sample-service.com/sample-service";

message GenerateSampleRequest {
  string content_type_url = 1;
  TemplateForm content = 2;
}

message GenerateSampleResponse {
  string sample_response = 1;
}

message TemplateForm {
    string name = 1;
    string num = 2;
}

service SampleService {
  rpc GenerateTest (GenerateSampleRequest) returns (GenerateSampleResponse)
  {
    option (google.api.http) = {
      post: "/v1/generateTest"
      body: "*"
    };
  }
}

Now, executing the POST curl below

curl -X POST http://127.0.0.1:1338/v1/generateTest \
-H 'Content-Type: application/json' \
-d '{"content_type_url":"html", "content": {"name":"sample", "num":"123"}}'

gives the following Envoy debug log:

envoy-proxy_1  | [2022-10-28 21:35:32.977][48][debug][http] [source/common/http/conn_manager_impl.cc:306] [C4] new stream
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][http] [source/common/http/conn_manager_impl.cc:930] [C4][S4891186692387075967] request headers complete (end_stream=false):
envoy-proxy_1  | ':authority', '0.0.0.0:1338'
envoy-proxy_1  | ':path', '/v1/generateTest'
envoy-proxy_1  | ':method', 'POST'
envoy-proxy_1  | 'user-agent', 'curl/7.85.0'
envoy-proxy_1  | 'accept', '*/*'
envoy-proxy_1  | 'content-type', 'application/json'
envoy-proxy_1  | 'content-length', '70'
envoy-proxy_1  | 
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][connection] [./source/common/network/connection_impl.h:92] [C4] current connecting state: false
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][router] [source/common/router/router.cc:470] [C4][S4891186692387075967] cluster 'grpc-backend' match for URL '/sample.package.grpc.SampleService/GenerateTest'
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][router] [source/common/router/router.cc:678] [C4][S4891186692387075967] router decoding headers:
envoy-proxy_1  | ':authority', '0.0.0.0:1338'
envoy-proxy_1  | ':path', '/sample.package.grpc.SampleService/GenerateTest'
envoy-proxy_1  | ':method', 'POST'
envoy-proxy_1  | ':scheme', 'http'
envoy-proxy_1  | 'user-agent', 'curl/7.85.0'
envoy-proxy_1  | 'accept', '*/*'
envoy-proxy_1  | 'content-type', 'application/grpc'
envoy-proxy_1  | 'x-forwarded-proto', 'http'
envoy-proxy_1  | 'x-request-id', '3121346b-c259-4252-aa97-a4125b0bbdf9'
envoy-proxy_1  | 'x-envoy-original-path', '/v1/generateTest'
envoy-proxy_1  | 'x-envoy-original-method', 'POST'
envoy-proxy_1  | 'te', 'trailers'
envoy-proxy_1  | 'x-envoy-expected-rq-timeout-ms', '600000'
envoy-proxy_1  | 
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][pool] [source/common/http/conn_pool_base.cc:78] queueing stream due to no available connections (ready=0 busy=0 connecting=0)
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][pool] [source/common/conn_pool/conn_pool_base.cc:290] trying to create new connection
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][pool] [source/common/conn_pool/conn_pool_base.cc:145] creating a new connection (connecting=0)
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][http2] [source/common/http/http2/codec_impl.cc:1794] [C5] updating connection-level initial window size to 268435456
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][connection] [./source/common/network/connection_impl.h:92] [C5] current connecting state: true
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][client] [source/common/http/codec_client.cc:57] [C5] connecting
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][connection] [source/common/network/connection_impl.cc:924] [C5] connecting to 0.0.0.0:1339
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][connection] [source/common/network/connection_impl.cc:943] [C5] connection in progress
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][http2] [source/extensions/filters/http/grpc_json_transcoder/json_transcoder_filter.cc:587] [C4][S4891186692387075967] continuing request during decodeData, transcoded data size=26, end_stream=false
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][http] [source/common/http/conn_manager_impl.cc:913] [C4][S4891186692387075967] request end stream
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][http2] [source/extensions/filters/http/grpc_json_transcoder/json_transcoder_filter.cc:587] [C4][S4891186692387075967] continuing request during decodeData, transcoded data size=0, end_stream=true
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][connection] [source/common/network/connection_impl.cc:695] [C5] delayed connect error: 111
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][connection] [source/common/network/connection_impl.cc:250] [C5] closing socket: 0
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][client] [source/common/http/codec_client.cc:107] [C5] disconnect. resetting 0 pending requests
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][pool] [source/common/conn_pool/conn_pool_base.cc:483] [C5] client disconnected, failure reason: delayed connect error: 111
envoy-proxy_1  | [2022-10-28 21:35:32.978][48][debug][router] [source/common/router/router.cc:1210] [C4][S4891186692387075967] upstream reset: reset reason: connection failure, transport failure reason: delayed connect error: 111
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][http] [source/common/http/filter_manager.cc:905] [C4][S4891186692387075967] Sending local reply with details upstream_reset_before_response_started{connection_failure,delayed_connect_error:_111}
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][http2] [source/extensions/filters/http/grpc_json_transcoder/json_transcoder_filter.cc:629] [C4][S4891186692387075967] Response headers is NOT application/grpc content-type. Response is passed through without transcoding.
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][http2] [source/extensions/filters/http/grpc_json_transcoder/json_transcoder_filter.cc:634] [C4][S4891186692387075967] Response headers is passed through
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][http] [source/common/http/conn_manager_impl.cc:1551] [C4][S4891186692387075967] encoding headers via codec (end_stream=false):
envoy-proxy_1  | ':status', '503'
envoy-proxy_1  | 'content-length', '145'
envoy-proxy_1  | 'content-type', 'text/plain'
envoy-proxy_1  | 'date', 'Fri, 28 Oct 2022 21:35:32 GMT'
envoy-proxy_1  | 'server', 'envoy'
envoy-proxy_1  | 
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][http2] [source/extensions/filters/http/grpc_json_transcoder/json_transcoder_filter.cc:667] [C4][S4891186692387075967] Response data is passed through
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][pool] [source/common/conn_pool/conn_pool_base.cc:453] invoking idle callbacks - is_draining_for_deletion_=false
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][connection] [source/common/network/connection_impl.cc:651] [C4] remote close
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][connection] [source/common/network/connection_impl.cc:250] [C4] closing socket: 0
envoy-proxy_1  | [2022-10-28 21:35:32.979][48][debug][conn_handler] [source/server/active_stream_listener_base.cc:120] [C4] adding to cleanup list
envoy-proxy_1  | [2022-10-28 21:35:34.049][9][debug][main] [source/server/server.cc:251] flushing stats

I am not sure why Envoy is unable to reach the server. The gRPC server is running locally on the host and Envoy is running in a Docker container.

For further investigation, I have also provided output from the admin interface.

Output of curl localhost:9901/clusters

grpc-backend::observability_name::grpc-backend
grpc-backend::default_priority::max_connections::1024
grpc-backend::default_priority::max_pending_requests::1024
grpc-backend::default_priority::max_requests::1024
grpc-backend::default_priority::max_retries::3
grpc-backend::high_priority::max_connections::1024
grpc-backend::high_priority::max_pending_requests::1024
grpc-backend::high_priority::max_requests::1024
grpc-backend::high_priority::max_retries::3
grpc-backend::added_via_api::false
grpc-backend::0.0.0.0:1339::cx_active::0
grpc-backend::0.0.0.0:1339::cx_connect_fail::0
grpc-backend::0.0.0.0:1339::cx_total::0
grpc-backend::0.0.0.0:1339::rq_active::0
grpc-backend::0.0.0.0:1339::rq_error::0
grpc-backend::0.0.0.0:1339::rq_success::0
grpc-backend::0.0.0.0:1339::rq_timeout::0
grpc-backend::0.0.0.0:1339::rq_total::0
grpc-backend::0.0.0.0:1339::hostname::0.0.0.0
grpc-backend::0.0.0.0:1339::health_flags::healthy
grpc-backend::0.0.0.0:1339::weight::1
grpc-backend::0.0.0.0:1339::region::
grpc-backend::0.0.0.0:1339::zone::
grpc-backend::0.0.0.0:1339::sub_zone::
grpc-backend::0.0.0.0:1339::canary::false
grpc-backend::0.0.0.0:1339::priority::0
grpc-backend::0.0.0.0:1339::success_rate::-1
grpc-backend::0.0.0.0:1339::local_origin_success_rate::-1

Output of curl localhost:9901/listeners

listener1::0.0.0.0:1338

Output of curl localhost:9901/stats

cluster.grpc-backend.assignment_stale: 0
cluster.grpc-backend.assignment_timeout_received: 0
cluster.grpc-backend.bind_errors: 0
cluster.grpc-backend.circuit_breakers.default.cx_open: 0
cluster.grpc-backend.circuit_breakers.default.cx_pool_open: 0
cluster.grpc-backend.circuit_breakers.default.rq_open: 0
cluster.grpc-backend.circuit_breakers.default.rq_pending_open: 0
cluster.grpc-backend.circuit_breakers.default.rq_retry_open: 0
cluster.grpc-backend.circuit_breakers.high.cx_open: 0
cluster.grpc-backend.circuit_breakers.high.cx_pool_open: 0
cluster.grpc-backend.circuit_breakers.high.rq_open: 0
cluster.grpc-backend.circuit_breakers.high.rq_pending_open: 0
cluster.grpc-backend.circuit_breakers.high.rq_retry_open: 0
cluster.grpc-backend.default.total_match_count: 1
cluster.grpc-backend.lb_healthy_panic: 0
cluster.grpc-backend.lb_local_cluster_not_ok: 0
cluster.grpc-backend.lb_recalculate_zone_structures: 0
cluster.grpc-backend.lb_subsets_active: 0
cluster.grpc-backend.lb_subsets_created: 0
cluster.grpc-backend.lb_subsets_fallback: 0
cluster.grpc-backend.lb_subsets_fallback_panic: 0
cluster.grpc-backend.lb_subsets_removed: 0
cluster.grpc-backend.lb_subsets_selected: 0
cluster.grpc-backend.lb_zone_cluster_too_small: 0
cluster.grpc-backend.lb_zone_no_capacity_left: 0
cluster.grpc-backend.lb_zone_number_differs: 0
cluster.grpc-backend.lb_zone_routing_all_directly: 0
cluster.grpc-backend.lb_zone_routing_cross_zone: 0
cluster.grpc-backend.lb_zone_routing_sampled: 0
cluster.grpc-backend.max_host_weight: 0
cluster.grpc-backend.membership_change: 1
cluster.grpc-backend.membership_degraded: 0
cluster.grpc-backend.membership_excluded: 0
cluster.grpc-backend.membership_healthy: 1
cluster.grpc-backend.membership_total: 1
cluster.grpc-backend.original_dst_host_invalid: 0
cluster.grpc-backend.retry_or_shadow_abandoned: 0
cluster.grpc-backend.update_attempt: 25
cluster.grpc-backend.update_empty: 0
cluster.grpc-backend.update_failure: 0
cluster.grpc-backend.update_no_rebuild: 0
cluster.grpc-backend.update_success: 25
cluster.grpc-backend.upstream_cx_active: 0
cluster.grpc-backend.upstream_cx_close_notify: 0
cluster.grpc-backend.upstream_cx_connect_attempts_exceeded: 0
cluster.grpc-backend.upstream_cx_connect_fail: 0
cluster.grpc-backend.upstream_cx_connect_timeout: 0
cluster.grpc-backend.upstream_cx_connect_with_0_rtt: 0
cluster.grpc-backend.upstream_cx_destroy: 0
cluster.grpc-backend.upstream_cx_destroy_local: 0
cluster.grpc-backend.upstream_cx_destroy_local_with_active_rq: 0
cluster.grpc-backend.upstream_cx_destroy_remote: 0
cluster.grpc-backend.upstream_cx_destroy_remote_with_active_rq: 0
cluster.grpc-backend.upstream_cx_destroy_with_active_rq: 0
cluster.grpc-backend.upstream_cx_http1_total: 0
cluster.grpc-backend.upstream_cx_http2_total: 0
cluster.grpc-backend.upstream_cx_http3_total: 0
cluster.grpc-backend.upstream_cx_idle_timeout: 0
cluster.grpc-backend.upstream_cx_max_duration_reached: 0
cluster.grpc-backend.upstream_cx_max_requests: 0
cluster.grpc-backend.upstream_cx_none_healthy: 0
cluster.grpc-backend.upstream_cx_overflow: 0
cluster.grpc-backend.upstream_cx_pool_overflow: 0
cluster.grpc-backend.upstream_cx_protocol_error: 0
cluster.grpc-backend.upstream_cx_rx_bytes_buffered: 0
cluster.grpc-backend.upstream_cx_rx_bytes_total: 0
cluster.grpc-backend.upstream_cx_total: 0
cluster.grpc-backend.upstream_cx_tx_bytes_buffered: 0
cluster.grpc-backend.upstream_cx_tx_bytes_total: 0
cluster.grpc-backend.upstream_flow_control_backed_up_total: 0
cluster.grpc-backend.upstream_flow_control_drained_total: 0
cluster.grpc-backend.upstream_flow_control_paused_reading_total: 0
cluster.grpc-backend.upstream_flow_control_resumed_reading_total: 0
cluster.grpc-backend.upstream_http3_broken: 0
cluster.grpc-backend.upstream_internal_redirect_failed_total: 0
cluster.grpc-backend.upstream_internal_redirect_succeeded_total: 0
cluster.grpc-backend.upstream_rq_0rtt: 0
cluster.grpc-backend.upstream_rq_active: 0
cluster.grpc-backend.upstream_rq_cancelled: 0
cluster.grpc-backend.upstream_rq_completed: 0
cluster.grpc-backend.upstream_rq_maintenance_mode: 0
cluster.grpc-backend.upstream_rq_max_duration_reached: 0
cluster.grpc-backend.upstream_rq_pending_active: 0
cluster.grpc-backend.upstream_rq_pending_failure_eject: 0
cluster.grpc-backend.upstream_rq_pending_overflow: 0
cluster.grpc-backend.upstream_rq_pending_total: 0
cluster.grpc-backend.upstream_rq_per_try_idle_timeout: 0
cluster.grpc-backend.upstream_rq_per_try_timeout: 0
cluster.grpc-backend.upstream_rq_retry: 0
cluster.grpc-backend.upstream_rq_retry_backoff_exponential: 0
cluster.grpc-backend.upstream_rq_retry_backoff_ratelimited: 0
cluster.grpc-backend.upstream_rq_retry_limit_exceeded: 0
cluster.grpc-backend.upstream_rq_retry_overflow: 0
cluster.grpc-backend.upstream_rq_retry_success: 0
cluster.grpc-backend.upstream_rq_rx_reset: 0
cluster.grpc-backend.upstream_rq_timeout: 0
cluster.grpc-backend.upstream_rq_total: 0
cluster.grpc-backend.upstream_rq_tx_reset: 0
cluster.grpc-backend.version: 0
cluster_manager.active_clusters: 1
cluster_manager.cluster_added: 1
cluster_manager.cluster_modified: 0
cluster_manager.cluster_removed: 0
cluster_manager.cluster_updated: 0
cluster_manager.cluster_updated_via_merge: 0
cluster_manager.update_merge_cancelled: 0
cluster_manager.update_out_of_merge_window: 0
cluster_manager.warming_clusters: 0
dns.cares.get_addr_failure: 0
dns.cares.not_found: 0
dns.cares.pending_resolutions: 0
dns.cares.resolve_total: 25
dns.cares.timeouts: 0
envoy.overload_actions.reset_high_memory_stream.count: 0
filesystem.flushed_by_timer: 12
filesystem.reopen_failed: 0
filesystem.write_buffered: 3
filesystem.write_completed: 3
filesystem.write_failed: 0
filesystem.write_total_buffered: 0
http.admin.downstream_cx_active: 1
http.admin.downstream_cx_delayed_close_timeout: 0
http.admin.downstream_cx_destroy: 3
http.admin.downstream_cx_destroy_active_rq: 0
http.admin.downstream_cx_destroy_local: 0
http.admin.downstream_cx_destroy_local_active_rq: 0
http.admin.downstream_cx_destroy_remote: 3
http.admin.downstream_cx_destroy_remote_active_rq: 0
http.admin.downstream_cx_drain_close: 0
http.admin.downstream_cx_http1_active: 1
http.admin.downstream_cx_http1_total: 4
http.admin.downstream_cx_http2_active: 0
http.admin.downstream_cx_http2_total: 0
http.admin.downstream_cx_http3_active: 0
http.admin.downstream_cx_http3_total: 0
http.admin.downstream_cx_idle_timeout: 0
http.admin.downstream_cx_max_duration_reached: 0
http.admin.downstream_cx_max_requests_reached: 0
http.admin.downstream_cx_overload_disable_keepalive: 0
http.admin.downstream_cx_protocol_error: 0
http.admin.downstream_cx_rx_bytes_buffered: 83
http.admin.downstream_cx_rx_bytes_total: 341
http.admin.downstream_cx_ssl_active: 0
http.admin.downstream_cx_ssl_total: 0
http.admin.downstream_cx_total: 4
http.admin.downstream_cx_tx_bytes_buffered: 0
http.admin.downstream_cx_tx_bytes_total: 6004
http.admin.downstream_cx_upgrades_active: 0
http.admin.downstream_cx_upgrades_total: 0
http.admin.downstream_flow_control_paused_reading_total: 0
http.admin.downstream_flow_control_resumed_reading_total: 0
http.admin.downstream_rq_1xx: 0
http.admin.downstream_rq_2xx: 3
http.admin.downstream_rq_3xx: 0
http.admin.downstream_rq_4xx: 1
http.admin.downstream_rq_5xx: 0
http.admin.downstream_rq_active: 1
http.admin.downstream_rq_completed: 4
http.admin.downstream_rq_failed_path_normalization: 0
http.admin.downstream_rq_header_timeout: 0
http.admin.downstream_rq_http1_total: 4
http.admin.downstream_rq_http2_total: 0
http.admin.downstream_rq_http3_total: 0
http.admin.downstream_rq_idle_timeout: 0
http.admin.downstream_rq_max_duration_reached: 0
http.admin.downstream_rq_non_relative_path: 0
http.admin.downstream_rq_overload_close: 0
http.admin.downstream_rq_redirected_with_normalized_path: 0
http.admin.downstream_rq_rejected_via_ip_detection: 0
http.admin.downstream_rq_response_before_rq_complete: 0
http.admin.downstream_rq_rx_reset: 0
http.admin.downstream_rq_timeout: 0
http.admin.downstream_rq_too_large: 0
http.admin.downstream_rq_total: 4
http.admin.downstream_rq_tx_reset: 0
http.admin.downstream_rq_ws_on_non_ws_route: 0
http.admin.rs_too_large: 0
http.async-client.no_cluster: 0
http.async-client.no_route: 0
http.async-client.passthrough_internal_redirect_bad_location: 0
http.async-client.passthrough_internal_redirect_no_route: 0
http.async-client.passthrough_internal_redirect_predicate: 0
http.async-client.passthrough_internal_redirect_too_many_redirects: 0
http.async-client.passthrough_internal_redirect_unsafe_scheme: 0
http.async-client.rq_direct_response: 0
http.async-client.rq_redirect: 0
http.async-client.rq_reset_after_downstream_response_started: 0
http.async-client.rq_total: 0
http.grpc_json.downstream_cx_active: 0
http.grpc_json.downstream_cx_delayed_close_timeout: 0
http.grpc_json.downstream_cx_destroy: 0
http.grpc_json.downstream_cx_destroy_active_rq: 0
http.grpc_json.downstream_cx_destroy_local: 0
http.grpc_json.downstream_cx_destroy_local_active_rq: 0
http.grpc_json.downstream_cx_destroy_remote: 0
http.grpc_json.downstream_cx_destroy_remote_active_rq: 0
http.grpc_json.downstream_cx_drain_close: 0
http.grpc_json.downstream_cx_http1_active: 0
http.grpc_json.downstream_cx_http1_total: 0
http.grpc_json.downstream_cx_http2_active: 0
http.grpc_json.downstream_cx_http2_total: 0
http.grpc_json.downstream_cx_http3_active: 0
http.grpc_json.downstream_cx_http3_total: 0
http.grpc_json.downstream_cx_idle_timeout: 0
http.grpc_json.downstream_cx_max_duration_reached: 0
http.grpc_json.downstream_cx_max_requests_reached: 0
http.grpc_json.downstream_cx_overload_disable_keepalive: 0
http.grpc_json.downstream_cx_protocol_error: 0
http.grpc_json.downstream_cx_rx_bytes_buffered: 0
http.grpc_json.downstream_cx_rx_bytes_total: 0
http.grpc_json.downstream_cx_ssl_active: 0
http.grpc_json.downstream_cx_ssl_total: 0
http.grpc_json.downstream_cx_total: 0
http.grpc_json.downstream_cx_tx_bytes_buffered: 0
http.grpc_json.downstream_cx_tx_bytes_total: 0
http.grpc_json.downstream_cx_upgrades_active: 0
http.grpc_json.downstream_cx_upgrades_total: 0
http.grpc_json.downstream_flow_control_paused_reading_total: 0
http.grpc_json.downstream_flow_control_resumed_reading_total: 0
http.grpc_json.downstream_rq_1xx: 0
http.grpc_json.downstream_rq_2xx: 0
http.grpc_json.downstream_rq_3xx: 0
http.grpc_json.downstream_rq_4xx: 0
http.grpc_json.downstream_rq_5xx: 0
http.grpc_json.downstream_rq_active: 0
http.grpc_json.downstream_rq_completed: 0
http.grpc_json.downstream_rq_failed_path_normalization: 0
http.grpc_json.downstream_rq_header_timeout: 0
http.grpc_json.downstream_rq_http1_total: 0
http.grpc_json.downstream_rq_http2_total: 0
http.grpc_json.downstream_rq_http3_total: 0
http.grpc_json.downstream_rq_idle_timeout: 0
http.grpc_json.downstream_rq_max_duration_reached: 0
http.grpc_json.downstream_rq_non_relative_path: 0
http.grpc_json.downstream_rq_overload_close: 0
http.grpc_json.downstream_rq_redirected_with_normalized_path: 0
http.grpc_json.downstream_rq_rejected_via_ip_detection: 0
http.grpc_json.downstream_rq_response_before_rq_complete: 0
http.grpc_json.downstream_rq_rx_reset: 0
http.grpc_json.downstream_rq_timeout: 0
http.grpc_json.downstream_rq_too_large: 0
http.grpc_json.downstream_rq_total: 0
http.grpc_json.downstream_rq_tx_reset: 0
http.grpc_json.downstream_rq_ws_on_non_ws_route: 0
http.grpc_json.no_cluster: 0
http.grpc_json.no_route: 0
http.grpc_json.passthrough_internal_redirect_bad_location: 0
http.grpc_json.passthrough_internal_redirect_no_route: 0
http.grpc_json.passthrough_internal_redirect_predicate: 0
http.grpc_json.passthrough_internal_redirect_too_many_redirects: 0
http.grpc_json.passthrough_internal_redirect_unsafe_scheme: 0
http.grpc_json.rq_direct_response: 0
http.grpc_json.rq_redirect: 0
http.grpc_json.rq_reset_after_downstream_response_started: 0
http.grpc_json.rq_total: 0
http.grpc_json.rs_too_large: 0
http.grpc_json.tracing.client_enabled: 0
http.grpc_json.tracing.health_check: 0
http.grpc_json.tracing.not_traceable: 0
http.grpc_json.tracing.random_sampling: 0
http.grpc_json.tracing.service_forced: 0
http1.dropped_headers_with_underscores: 0
http1.metadata_not_supported_error: 0
http1.requests_rejected_with_underscores_in_headers: 0
http1.response_flood: 0
listener.0.0.0.0_1338.downstream_cx_active: 0
listener.0.0.0.0_1338.downstream_cx_destroy: 0
listener.0.0.0.0_1338.downstream_cx_overflow: 0
listener.0.0.0.0_1338.downstream_cx_overload_reject: 0
listener.0.0.0.0_1338.downstream_cx_total: 0
listener.0.0.0.0_1338.downstream_cx_transport_socket_connect_timeout: 0
listener.0.0.0.0_1338.downstream_global_cx_overflow: 0
listener.0.0.0.0_1338.downstream_listener_filter_error: 0
listener.0.0.0.0_1338.downstream_listener_filter_remote_close: 0
listener.0.0.0.0_1338.downstream_pre_cx_active: 0
listener.0.0.0.0_1338.downstream_pre_cx_timeout: 0
listener.0.0.0.0_1338.extension_config_missing: 0
listener.0.0.0.0_1338.http.grpc_json.downstream_rq_1xx: 0
listener.0.0.0.0_1338.http.grpc_json.downstream_rq_2xx: 0
listener.0.0.0.0_1338.http.grpc_json.downstream_rq_3xx: 0
listener.0.0.0.0_1338.http.grpc_json.downstream_rq_4xx: 0
listener.0.0.0.0_1338.http.grpc_json.downstream_rq_5xx: 0
listener.0.0.0.0_1338.http.grpc_json.downstream_rq_completed: 0
listener.0.0.0.0_1338.no_filter_chain_match: 0
listener.0.0.0.0_1338.worker_0.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_0.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_1.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_1.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_10.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_10.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_11.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_11.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_12.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_12.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_13.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_13.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_14.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_14.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_15.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_15.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_16.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_16.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_17.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_17.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_18.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_18.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_19.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_19.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_2.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_2.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_20.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_20.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_21.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_21.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_22.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_22.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_23.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_23.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_24.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_24.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_25.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_25.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_26.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_26.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_27.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_27.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_28.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_28.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_29.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_29.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_3.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_3.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_30.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_30.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_31.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_31.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_32.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_32.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_33.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_33.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_34.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_34.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_35.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_35.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_36.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_36.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_37.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_37.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_38.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_38.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_39.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_39.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_4.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_4.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_40.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_40.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_41.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_41.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_42.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_42.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_43.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_43.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_44.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_44.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_45.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_45.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_46.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_46.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_47.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_47.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_5.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_5.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_6.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_6.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_7.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_7.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_8.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_8.downstream_cx_total: 0
listener.0.0.0.0_1338.worker_9.downstream_cx_active: 0
listener.0.0.0.0_1338.worker_9.downstream_cx_total: 0
listener.admin.downstream_cx_active: 1
listener.admin.downstream_cx_destroy: 3
listener.admin.downstream_cx_overflow: 0
listener.admin.downstream_cx_overload_reject: 0
listener.admin.downstream_cx_total: 4
listener.admin.downstream_cx_transport_socket_connect_timeout: 0
listener.admin.downstream_global_cx_overflow: 0
listener.admin.downstream_listener_filter_error: 0
listener.admin.downstream_listener_filter_remote_close: 0
listener.admin.downstream_pre_cx_active: 0
listener.admin.downstream_pre_cx_timeout: 0
listener.admin.http.admin.downstream_rq_1xx: 0
listener.admin.http.admin.downstream_rq_2xx: 3
listener.admin.http.admin.downstream_rq_3xx: 0
listener.admin.http.admin.downstream_rq_4xx: 1
listener.admin.http.admin.downstream_rq_5xx: 0
listener.admin.http.admin.downstream_rq_completed: 4
listener.admin.main_thread.downstream_cx_active: 1
listener.admin.main_thread.downstream_cx_total: 4
listener.admin.no_filter_chain_match: 0
listener_manager.listener_added: 1
listener_manager.listener_create_failure: 0
listener_manager.listener_create_success: 48
listener_manager.listener_in_place_updated: 0
listener_manager.listener_modified: 0
listener_manager.listener_removed: 0
listener_manager.listener_stopped: 0
listener_manager.total_filter_chains_draining: 0
listener_manager.total_listeners_active: 1
listener_manager.total_listeners_draining: 0
listener_manager.total_listeners_warming: 0
listener_manager.workers_started: 1
main_thread.watchdog_mega_miss: 0
main_thread.watchdog_miss: 0
runtime.admin_overrides_active: 0
runtime.deprecated_feature_seen_since_process_start: 0
runtime.deprecated_feature_use: 0
runtime.load_error: 0
runtime.load_success: 1
runtime.num_keys: 0
runtime.num_layers: 0
runtime.override_dir_exists: 0
runtime.override_dir_not_exists: 1
server.compilation_settings.fips_mode: 0
server.concurrency: 48
server.days_until_first_cert_expiring: 4294967295
server.debug_assertion_failures: 0
server.dropped_stat_flushes: 0
server.dynamic_unknown_fields: 0
server.envoy_bug_failures: 0
server.hot_restart_epoch: 0
server.hot_restart_generation: 1
server.live: 1
server.main_thread.watchdog_mega_miss: 0
server.main_thread.watchdog_miss: 0
server.memory_allocated: 8004960
server.memory_heap_size: 16777216
server.memory_physical_size: 36968802
server.parent_connections: 0
server.seconds_until_first_ocsp_response_expiring: 0
server.state: 0
server.static_unknown_fields: 0
server.stats_recent_lookups: 2558
server.total_connections: 0
server.uptime: 120
server.version: 3911761
server.wip_protos: 0
server.worker_0.watchdog_mega_miss: 0
server.worker_0.watchdog_miss: 0
server.worker_1.watchdog_mega_miss: 0
server.worker_1.watchdog_miss: 0
server.worker_10.watchdog_mega_miss: 0
server.worker_10.watchdog_miss: 0
server.worker_11.watchdog_mega_miss: 0
server.worker_11.watchdog_miss: 0
server.worker_12.watchdog_mega_miss: 0
server.worker_12.watchdog_miss: 0
server.worker_13.watchdog_mega_miss: 0
server.worker_13.watchdog_miss: 0
server.worker_14.watchdog_mega_miss: 0
server.worker_14.watchdog_miss: 0
server.worker_15.watchdog_mega_miss: 0
server.worker_15.watchdog_miss: 0
server.worker_16.watchdog_mega_miss: 0
server.worker_16.watchdog_miss: 0
server.worker_17.watchdog_mega_miss: 0
server.worker_17.watchdog_miss: 0
server.worker_18.watchdog_mega_miss: 0
server.worker_18.watchdog_miss: 0
server.worker_19.watchdog_mega_miss: 0
server.worker_19.watchdog_miss: 0
server.worker_2.watchdog_mega_miss: 0
server.worker_2.watchdog_miss: 0
server.worker_20.watchdog_mega_miss: 0
server.worker_20.watchdog_miss: 0
server.worker_21.watchdog_mega_miss: 0
server.worker_21.watchdog_miss: 0
server.worker_22.watchdog_mega_miss: 0
server.worker_22.watchdog_miss: 0
server.worker_23.watchdog_mega_miss: 0
server.worker_23.watchdog_miss: 0
server.worker_24.watchdog_mega_miss: 0
server.worker_24.watchdog_miss: 0
server.worker_25.watchdog_mega_miss: 0
server.worker_25.watchdog_miss: 0
server.worker_26.watchdog_mega_miss: 0
server.worker_26.watchdog_miss: 0
server.worker_27.watchdog_mega_miss: 0
server.worker_27.watchdog_miss: 0
server.worker_28.watchdog_mega_miss: 0
server.worker_28.watchdog_miss: 0
server.worker_29.watchdog_mega_miss: 0
server.worker_29.watchdog_miss: 0
server.worker_3.watchdog_mega_miss: 0
server.worker_3.watchdog_miss: 0
server.worker_30.watchdog_mega_miss: 0
server.worker_30.watchdog_miss: 0
server.worker_31.watchdog_mega_miss: 0
server.worker_31.watchdog_miss: 0
server.worker_32.watchdog_mega_miss: 0
server.worker_32.watchdog_miss: 0
server.worker_33.watchdog_mega_miss: 0
server.worker_33.watchdog_miss: 0
server.worker_34.watchdog_mega_miss: 0
server.worker_34.watchdog_miss: 0
server.worker_35.watchdog_mega_miss: 0
server.worker_35.watchdog_miss: 0
server.worker_36.watchdog_mega_miss: 0
server.worker_36.watchdog_miss: 0
server.worker_37.watchdog_mega_miss: 0
server.worker_37.watchdog_miss: 0
server.worker_38.watchdog_mega_miss: 0
server.worker_38.watchdog_miss: 0
server.worker_39.watchdog_mega_miss: 0
server.worker_39.watchdog_miss: 0
server.worker_4.watchdog_mega_miss: 0
server.worker_4.watchdog_miss: 0
server.worker_40.watchdog_mega_miss: 0
server.worker_40.watchdog_miss: 0
server.worker_41.watchdog_mega_miss: 0
server.worker_41.watchdog_miss: 0
server.worker_42.watchdog_mega_miss: 0
server.worker_42.watchdog_miss: 0
server.worker_43.watchdog_mega_miss: 0
server.worker_43.watchdog_miss: 0
server.worker_44.watchdog_mega_miss: 0
server.worker_44.watchdog_miss: 0
server.worker_45.watchdog_mega_miss: 0
server.worker_45.watchdog_miss: 0
server.worker_46.watchdog_mega_miss: 0
server.worker_46.watchdog_miss: 0
server.worker_47.watchdog_mega_miss: 0
server.worker_47.watchdog_miss: 0
server.worker_5.watchdog_mega_miss: 0
server.worker_5.watchdog_miss: 0
server.worker_6.watchdog_mega_miss: 0
server.worker_6.watchdog_miss: 0
server.worker_7.watchdog_mega_miss: 0
server.worker_7.watchdog_miss: 0
server.worker_8.watchdog_mega_miss: 0
server.worker_8.watchdog_miss: 0
server.worker_9.watchdog_mega_miss: 0
server.worker_9.watchdog_miss: 0
vhost.local_service.vcluster.other.upstream_rq_retry: 0
vhost.local_service.vcluster.other.upstream_rq_retry_limit_exceeded: 0
vhost.local_service.vcluster.other.upstream_rq_retry_overflow: 0
vhost.local_service.vcluster.other.upstream_rq_retry_success: 0
vhost.local_service.vcluster.other.upstream_rq_timeout: 0
vhost.local_service.vcluster.other.upstream_rq_total: 0
workers.watchdog_mega_miss: 0
workers.watchdog_miss: 0
cluster.grpc-backend.upstream_cx_connect_ms: No recorded values
cluster.grpc-backend.upstream_cx_length_ms: No recorded values
http.admin.downstream_cx_length_ms: P0(nan,0) P25(nan,0) P50(nan,1.025) P75(nan,1.0625) P90(nan,1.085) P95(nan,1.0925) P99(nan,1.0985) P99.5(nan,1.09925) P99.9(nan,1.09985) P100(nan,1.1)
http.admin.downstream_rq_time: P0(nan,0) P25(nan,0) P50(nan,0) P75(nan,0) P90(nan,0) P95(nan,0) P99(nan,0) P99.5(nan,0) P99.9(nan,0) P100(nan,0)
http.grpc_json.downstream_cx_length_ms: No recorded values
http.grpc_json.downstream_rq_time: No recorded values
listener.0.0.0.0_1338.downstream_cx_length_ms: No recorded values
listener.admin.downstream_cx_length_ms: P0(nan,0) P25(nan,0) P50(nan,1.025) P75(nan,1.0625) P90(nan,1.085) P95(nan,1.0925) P99(nan,1.0985) P99.5(nan,1.09925) P99.9(nan,1.09985) P100(nan,1.1)
server.initialization_time_ms: P0(nan,150) P25(nan,152.5) P50(nan,155) P75(nan,157.5) P90(nan,159) P95(nan,159.5) P99(nan,159.9) P99.5(nan,159.95) P99.9(nan,159.99) P100(nan,160)

I would appreciate any guidance on how to resolve this issue.

dio commented 2 years ago

I think this is a Docker networking "issue": you need a way to reach a service that is listening on the host's "localhost" from a process running inside a container. With the endpoint address set to 0.0.0.0, Envoy connects to the container itself, where nothing is listening on port 1339; error 111 is ECONNREFUSED (connection refused).
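For example, here is a minimal sketch of one way to wire this up on Linux, assuming Docker 20.10+ (which supports the special host-gateway alias); the service and file names match the config above:

docker-compose.yaml

version: "3.8"
services:
  envoy-proxy:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        ENVOY_CONFIG: envoy-local.yaml
    ports:
      - "1338:1338"
      - "9901:9901"
    # Map "host.docker.internal" to the host's gateway IP so that processes
    # inside the container can reach services listening on the host.
    extra_hosts:
      - "host.docker.internal:host-gateway"

and in envoy-local.yaml, point the cluster endpoint at the Docker host instead of 0.0.0.0:

            address:
              socket_address:
                # The Docker host, not the container itself.
                address: host.docker.internal
                port_value: 1339

Note that the gRPC server must then listen on an interface the container can reach (e.g. 0.0.0.0:1339 rather than 127.0.0.1:1339). Alternatively, running the envoy-proxy service with network_mode: host (Linux only) shares the host's network namespace, in which case address: 127.0.0.1 works directly.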

justinyjsong commented 2 years ago

Actually, it was just a configuration issue; resolving this ticket. Thanks.