envoyproxy / envoy

Cloud-native high-performance edge/middle/service proxy
https://www.envoyproxy.io
Apache License 2.0

Dynamic forward proxy: "Proto constraint validation failed (FilterConfigValidationError.DnsCacheConfig: ["value is required"])" #9461

Closed: mabukhovsky closed this issue 4 years ago

mabukhovsky commented 4 years ago

Dynamic forward proxy config validation failure: Proto constraint validation failed (FilterConfigValidationError.DnsCacheConfig: ["value is required"])

Description: Hi, I'm trying to create a dynamic forward proxy using your latest master code, which introduced auto_host_rewrite_header for envoy.filters.http.dynamic_forward_proxy. It is supposed to forward to the domain/port defined in the X-Host-Port header. I've used a configuration copy-pasted from the default dynamic forward proxy configuration you recommend (see below). However, Envoy doesn't start due to the FilterConfigValidationError.DnsCacheConfig: ["value is required"] problem. The issue is in the second listener definition (listener 1): without it, Envoy starts and works fine.

Please advise if I've missed something. I don't understand what's wrong with dns_cache_config, as it is copy-pasted from the configuration you recommend by default for dynamic forward proxy.

Relevant links:
https://github.com/envoyproxy/envoy/pull/8869/files
https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/filter/http/dynamic_forward_proxy/v2alpha/dynamic_forward_proxy.proto#config-filter-http-dynamic-forward-proxy-v2alpha-perrouteconfig
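For context, the per-route override being attempted can also be written with `typed_per_filter_config` (a sketch only, assuming the v2alpha `PerRouteConfig` message from the proto linked above; the type URL is my reconstruction and may differ between Envoy versions):

```yaml
routes:
- match: { prefix: "/" }
  route: { cluster: dynamic_forward_proxy_cluster }
  typed_per_filter_config:
    envoy.filters.http.dynamic_forward_proxy:
      "@type": type.googleapis.com/envoy.config.filter.http.dynamic_forward_proxy.v2alpha.PerRouteConfig
      # Rewrite the upstream host to the value of this request header.
      auto_host_rewrite_header: X-Host-Port
```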

Config:

admin:
  access_log_path: %PR_HOME%/logs/admin_access.log
  address:
    socket_address: { address: 127.0.0.1, port_value: %ADMIN_PORT% }

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: pr_cluster
      refresh_delay: { seconds: 5 }
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: pr_cluster
      refresh_delay: { seconds: 5 }

static_resources:
  listeners:
  # listener 0
  - name: exposed_admin_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: %EXPOSED_ADMIN_PORT% }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          access_log: 
          - name: envoy.file_access_log
            typed_config: 
              "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
              path: %PR_HOME%/logs/exposed_admin_access.log
          route_config:
            name: envoy_admin
            virtual_hosts:
            - name: envoy_admin
              domains: ["*"]
              routes:
              - match: { prefix: "/app_info/metrics" }
                route: { cluster: exposed_admin, prefix_rewrite: "/stats/prometheus" }
          http_filters:
          - name: envoy.router
  # listener 1
  - name: listener_1
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          access_log: 
          - name: envoy.file_access_log
            typed_config: 
              "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
              path: %PR_HOME%/logs/exposed_admin_access.log
          route_config:
            name: envoy_dynamic_proxy_route
            virtual_hosts:
            - name: envoy_dynamic_proxy_virtual_host
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: "dynamic_forward_proxy_cluster" }
                per_filter_config:
                    envoy.filters.http.dynamic_forward_proxy:
                       auto_host_rewrite_header: "X-Host-Port"
          http_filters:
          - name: envoy.filters.http.dynamic_forward_proxy
            config:
              dns_cache_config:
                name: dynamic_forward_proxy_cache_config
                dns_lookup_family: V4_ONLY
  clusters:
  - name: pr_cluster
    connect_timeout: 5s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    tls_context:
      common_tls_context:
        alpn_protocols: ["h2"]
    dns_refresh_rate:
      seconds: 3600
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: %XDS_HOST%
                port_value: %XDS_PORT%
  - name: exposed_admin
    connect_timeout: 0.250s
    type: STATIC
    hosts:
      - socket_address: { address: 127.0.0.1, port_value: %ADMIN_PORT% }
  - name: dynamic_forward_proxy_cluster
    connect_timeout: 1s
    lb_policy: CLUSTER_PROVIDED
    cluster_type:
      name: envoy.clusters.dynamic_forward_proxy
      typed_config:
        "@type": type.googleapis.com/envoy.config.cluster.dynamic_forward_proxy.v2alpha.ClusterConfig
        dns_cache_config:
          name: dynamic_forward_proxy_cache_config
          dns_lookup_family: V4_ONLY

tracing:
  http:
    name: envoy.dynamic.ot
    config:
      library: /usr/local/lib/libjaegertracing_plugin.so
      config:
        service_name: project-envoy
        sampler:
          type: const
          param: 1
        reporter:
          localAgentHostPort: %TRACING_HOST_PORT%
        headers:
          jaegerDebugHeader: jaeger-debug-id
          jaegerBaggageHeader: jaeger-baggage
          traceBaggageHeaderPrefix: uberctx-
        baggage_restrictions:
          denyBaggageOnInitializationFailure: false
          hostPort: ""

node:
  id: %NODE_ID%
  cluster: %NODE_CLUSTER%
  metadata:
    instance: %NODE_INSTANCE%
    host: %NODE_HOST%
    port: %NODE_PORT%
    admin_port: %ADMIN_PORT%

Logs / Call Stack:

proxy_1        | Starting Envoy Instance bc6b4ff61e4dpr-envoy
proxy_1        | /custom-envoy/envoy -c /usr/local/pr-envoy/envoy.yaml --component-log-level main:info,http:trace,http2:trace,config:trace,filter:trace,router:trace,upstream:trace,client:trace,connection:trace,grpc:trace --config-yaml {'admin':{'address':{'socket_address':{'address':'0.0.0.0'}}}}
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:238] initializing epoch 0 (hot restart version=11.104)
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:240] statically linked extensions:
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:242]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:245]   filters.http: envoy.buffer,envoy.cors,envoy.csrf,envoy.ext_authz,envoy.fault,envoy.filters.http.dynamic_forward_proxy,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.original_src,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:248]   filters.listener: envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:251]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:253]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:255]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.tracers.opencensus,envoy.zipkin
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:258]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:261]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
proxy_1        | [2019-11-23 00:05:35.628][1][info][main] [source/server/server.cc:267] buffer implementation: old (libevent)
proxy_1        | [2019-11-23 00:05:35.635][1][warning][misc] [source/common/protobuf/utility.cc:199] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
proxy_1        | [2019-11-23 00:05:35.637][1][info][main] [source/server/server.cc:322] admin address: 0.0.0.0:10327
proxy_1        | [2019-11-23 00:05:35.638][1][info][main] [source/server/server.cc:432] runtime: layers:
proxy_1        |   - name: base
proxy_1        |     static_layer:
proxy_1        |       {}
proxy_1        |   - name: admin
proxy_1        |     admin_layer:
proxy_1        |       {}
proxy_1        | [2019-11-23 00:05:35.638][1][warning][runtime] [source/common/runtime/runtime_impl.cc:497] Skipping unsupported runtime layer: name: "base"
proxy_1        | static_layer {
proxy_1        | }
proxy_1        |
proxy_1        | [2019-11-23 00:05:35.638][1][info][config] [source/server/configuration_impl.cc:61] loading 0 static secret(s)
proxy_1        | [2019-11-23 00:05:35.638][1][info][config] [source/server/configuration_impl.cc:67] loading 3 cluster(s)
proxy_1        | [2019-11-23 00:05:35.639][62][debug][grpc] [source/common/grpc/google_async_client_impl.cc:43] completionThread running
proxy_1        | [2019-11-23 00:05:35.641][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:839] adding TLS initial cluster dynamic_forward_proxy_cluster
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:839] adding TLS initial cluster exposed_admin
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:839] adding TLS initial cluster xds_cluster
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/upstream_impl.cc:779] initializing secondary cluster dynamic_forward_proxy_cluster completed
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:103] cm init: init complete: cluster=dynamic_forward_proxy_cluster primary=0 secondary=0
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:75] cm init: adding: cluster=dynamic_forward_proxy_cluster primary=0 secondary=0
proxy_1        | [2019-11-23 00:05:35.642][1][trace][upstream] [source/common/upstream/upstream_impl.cc:1007] Local locality:
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/upstream_impl.cc:779] initializing secondary cluster exposed_admin completed
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:999] membership update for TLS cluster exposed_admin added 1 removed 0
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:103] cm init: init complete: cluster=exposed_admin primary=0 secondary=0
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:75] cm init: adding: cluster=exposed_admin primary=0 secondary=0
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:69] starting async DNS resolution for xds-service
proxy_1        | [2019-11-23 00:05:35.642][1][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
proxy_1        | [2019-11-23 00:05:35.642][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:75] cm init: adding: cluster=xds_cluster primary=1 secondary=0
proxy_1        | [2019-11-23 00:05:35.642][1][info][config] [source/server/configuration_impl.cc:71] loading 2 listener(s)
proxy_1        | [2019-11-23 00:05:35.642][1][debug][config] [source/server/configuration_impl.cc:73] listener #0:
proxy_1        | [2019-11-23 00:05:35.642][1][debug][config] [source/server/listener_manager_impl.cc:485] begin add/update listener: name=exposed_admin_listener hash=12680774201327472844
proxy_1        | [2019-11-23 00:05:35.642][1][debug][config] [source/server/listener_manager_impl.cc:57]   filter #0:
proxy_1        | [2019-11-23 00:05:35.642][1][debug][config] [source/server/listener_manager_impl.cc:58]     name: envoy.http_connection_manager
proxy_1        | [2019-11-23 00:05:35.642][1][debug][config] [source/server/listener_manager_impl.cc:61]   config: {}
proxy_1        | [2019-11-23 00:05:35.647][1][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:351]     http filter #0
proxy_1        | [2019-11-23 00:05:35.647][1][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:352]       name: envoy.router
proxy_1        | [2019-11-23 00:05:35.648][1][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:356]     config: {}
proxy_1        | [2019-11-23 00:05:35.649][1][debug][config] [source/server/listener_manager_impl.cc:376] add active listener: name=exposed_admin_listener, hash=12680774201327472844, address=0.0.0.0:10328
proxy_1        | [2019-11-23 00:05:35.649][1][debug][config] [source/server/configuration_impl.cc:73] listener #1:
proxy_1        | [2019-11-23 00:05:35.649][1][debug][config] [source/server/listener_manager_impl.cc:485] begin add/update listener: name=listener_1 hash=3149458023057097503
proxy_1        | [2019-11-23 00:05:35.650][1][debug][config] [source/server/listener_manager_impl.cc:57]   filter #0:
proxy_1        | [2019-11-23 00:05:35.650][1][debug][config] [source/server/listener_manager_impl.cc:58]     name: envoy.http_connection_manager
proxy_1        | [2019-11-23 00:05:35.650][1][debug][config] [source/server/listener_manager_impl.cc:61]   config: {}
proxy_1        | [2019-11-23 00:05:35.653][1][critical][main] [source/server/server.cc:93] error initializing configuration '/usr/local/pr-envoy/envoy.yaml': Proto constraint validation failed (FilterConfigValidationError.DnsCacheConfig: ["value is required"]):
proxy_1        | [2019-11-23 00:05:35.653][1][debug][grpc] [source/common/grpc/google_async_client_impl.cc:33] Joining completionThread
proxy_1        | [2019-11-23 00:05:35.653][62][debug][grpc] [source/common/grpc/google_async_client_impl.cc:66] completionThread exiting
proxy_1        | [2019-11-23 00:05:35.653][1][debug][grpc] [source/common/grpc/google_async_client_impl.cc:35] Joined completionThread
proxy_1        | [2019-11-23 00:05:35.654][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:851] shutting down thread local cluster manager
proxy_1        | [2019-11-23 00:05:35.654][1][info][main] [source/server/server.cc:560] exiting
proxy_1        | Proto constraint validation failed (FilterConfigValidationError.DnsCacheConfig: ["value is required"]):
pr_proxy_1 exited with code 1
mabukhovsky commented 4 years ago

@mattklein123 I apologize for opening a duplicate. I would appreciate it if you could tag this one with "help wanted" so it gets resolved. It is a blocker for the whole team.

mattklein123 commented 4 years ago

@derekargueta and @rgs1 are familiar with this feature; maybe they have time to check your config.

rgs1 commented 4 years ago

@mabukhovsky I think for the filter part, it has to be a typed config:

            http_filters:
            - name: envoy.filters.http.dynamic_forward_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.config.cluster.dynamic_forward_proxy.v2alpha.ClusterConfig
                dns_cache_config:
                  name: dynamic_forward_proxy_cache_config
                  dns_lookup_family: V4_ONLY

whereas you have:

         http_filters:
         - name: envoy.filters.http.dynamic_forward_proxy
           config:
             dns_cache_config:
               name: dynamic_forward_proxy_cache_config
               dns_lookup_family: V4_ONLY

See https://github.com/envoyproxy/envoy/blob/master/docs/root/configuration/http/http_filters/dynamic_forward_proxy_filter.rst.

(not sure though, that's just from a quick look)

mabukhovsky commented 4 years ago

@rgs1 Thanks for replying. I've tried the suggested change, but I'm still hitting the same issue for envoy.http_connection_manager: FilterConfigValidationError.DnsCacheConfig: ["value is required"]

My new config:


static_resources:
  listeners:
  - name: exposed_admin_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: %EXPOSED_ADMIN_PORT% }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: envoy_admin_route
            virtual_hosts:
            - name: envoy_admin_service
              domains: ["*"]
              routes:
              - match: { prefix: "/app_info/metrics" }
                route: { cluster: "exposed_admin", prefix_rewrite: "/stats/prometheus" }
              - match: { prefix: "/" }
                route: { cluster: "dynamic_forward_proxy_cluster" }
                per_filter_config:
                    envoy.filters.http.dynamic_forward_proxy:
                       auto_host_rewrite_header: "X-Host-Port"
          http_filters:
          - name: envoy.filters.http.dynamic_forward_proxy
            typed_config:
              "@type": type.googleapis.com/envoy.config.cluster.dynamic_forward_proxy.v2alpha.ClusterConfig
              dns_cache_config:
                name: dynamic_forward_proxy_cache_config
                dns_lookup_family: V4_ONLY
          - name: envoy.router
  clusters:
  - name: xds_cluster
    connect_timeout: 5s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    tls_context:
      common_tls_context:
        alpn_protocols: ["h2"]
    dns_refresh_rate:
      seconds: 3600
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: %XDS_HOST%
                port_value: %XDS_PORT%
  - name: exposed_admin
    connect_timeout: 0.250s
    type: STATIC
    hosts:
      - socket_address: { address: 127.0.0.1, port_value: %ADMIN_PORT% }
  - name: dynamic_forward_proxy_cluster
    connect_timeout: 1s
    lb_policy: CLUSTER_PROVIDED
    cluster_type:
      name: envoy.clusters.dynamic_forward_proxy
      typed_config:
          "@type": type.googleapis.com/envoy.config.cluster.dynamic_forward_proxy.v2alpha.ClusterConfig
          dns_cache_config:
            name: dynamic_forward_proxy_cache_config
            dns_lookup_family: V4_ONLY
    tls_context:
      common_tls_context:
        validation_context:
          trusted_ca: { filename: /etc/envoy/envoy.crt }

tracing:
  http:
    name: envoy.dynamic.ot
    config:
      library: /usr/local/lib/libjaegertracing_plugin.so
      config:
        service_name: abc-envoy
        sampler:
          type: const
          param: 1
        reporter:
          localAgentHostPort: %TRACING_HOST_PORT%
        headers:
          jaegerDebugHeader: jaeger-debug-id
          jaegerBaggageHeader: jaeger-baggage
          traceBaggageHeaderPrefix: uberctx-
        baggage_restrictions:
          denyBaggageOnInitializationFailure: false
          hostPort: ""

node:
  id: %NODE_ID%
  cluster: %NODE_CLUSTER%
  metadata:
    instance: %NODE_INSTANCE%
    host: %NODE_HOST%
    port: %NODE_PORT%
    admin_port: %ADMIN_PORT%

*Updated log:*

[2020-01-15 21:53:54.579][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:75] cm init: adding: cluster=xds_cluster primary=1 secondary=0
[2020-01-15 21:53:54.579][1][info][config] [source/server/configuration_impl.cc:71] loading 1 listener(s)
[2020-01-15 21:53:54.579][1][debug][config] [source/server/configuration_impl.cc:73] listener #0:
[2020-01-15 21:53:54.579][1][debug][config] [source/server/listener_manager_impl.cc:485] begin add/update listener: name=exposed_admin_listener hash=5609579006498289337
[2020-01-15 21:53:54.579][1][debug][config] [source/server/listener_manager_impl.cc:57]   filter #0:
[2020-01-15 21:53:54.579][1][debug][config] [source/server/listener_manager_impl.cc:58]     name: envoy.http_connection_manager
[2020-01-15 21:53:54.579][1][debug][config] [source/server/listener_manager_impl.cc:61]   config: {}
[2020-01-15 21:53:54.585][1][critical][main] [source/server/server.cc:93] error initializing configuration '/usr/local/workday-abc-envoy/envoy.yaml': Proto constraint validation failed (FilterConfigValidationError.DnsCacheConfig: ["value is required"]):
[2020-01-15 21:53:54.585][1][debug][grpc] [source/common/grpc/google_async_client_impl.cc:33] Joining completionThread
[2020-01-15 21:53:54.585][63][debug][grpc] [source/common/grpc/google_async_client_impl.cc:66] completionThread exiting
[2020-01-15 21:53:54.586][1][debug][grpc] [source/common/grpc/google_async_client_impl.cc:35] Joined completionThread
[2020-01-15 21:53:54.586][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:851] shutting down thread local cluster manager
[2020-01-15 21:53:54.586][1][info][main] [source/server/server.cc:560] exiting
Proto constraint validation failed (FilterConfigValidationError.DnsCacheConfig: ["value is required"]):
mattklein123 commented 4 years ago

See https://github.com/envoyproxy/envoy/pull/9696, there was a typo from a previous fixup. Beyond that, the config on that doc page loads fine for me on current master.
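For anyone following along, the gist of that typo: the filter-level `typed_config` should carry the HTTP filter's own `FilterConfig` message, not the cluster's `ClusterConfig`. A sketch of the corrected filter entry (the type URL is per the v2alpha API as I read it; treat it as an assumption, not a definitive config):

```yaml
http_filters:
- name: envoy.filters.http.dynamic_forward_proxy
  typed_config:
    # The HTTP filter's FilterConfig message, not ...v2alpha.ClusterConfig.
    "@type": type.googleapis.com/envoy.config.filter.http.dynamic_forward_proxy.v2alpha.FilterConfig
    dns_cache_config:
      name: dynamic_forward_proxy_cache_config
      dns_lookup_family: V4_ONLY
- name: envoy.router
```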

mabukhovsky commented 4 years ago

@mattklein123 @rgs1 Thank you for fixing the doc. I'm still facing the same issue with my config :(

Proto constraint validation failed (FilterConfigValidationError.DnsCacheConfig: ["value is required"]):

Per your proposal, I've also tried your default config (from https://github.com/envoyproxy/envoy/blob/master/docs/root/configuration/http/http_filters/dynamic_forward_proxy_filter.rst) on recent master (envoyproxy/envoy-dev:latest)

[
    {
        "Id": "sha256:a68e416231a3226baee6bfaa6f269556b66cb319e55ff5252bc1b73bd84b0d0d",
        "RepoTags": [
            "envoyproxy/envoy-dev:latest"
        ],
        "RepoDigests": [
            "envoyproxy/envoy-dev@sha256:513b6775dc2ffa562c0c6217573c15ffe7741d4e75e6d79fbf7e6713e7fcfc4b"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2020-01-16T04:04:35.368250018Z",```

and got this error:

proxy_1 | [2020-01-16 17:55:13.916][1][info][config] [source/server/configuration_impl.cc:61] loading 0 static secret(s)
proxy_1 | [2020-01-16 17:55:13.916][1][info][config] [source/server/configuration_impl.cc:67] loading 1 cluster(s)
proxy_1 | [2020-01-16 17:55:13.917][62][debug][grpc] [source/common/grpc/google_async_client_impl.cc:43] completionThread running
proxy_1 | [2020-01-16 17:55:13.924][1][debug][grpc] [source/common/grpc/google_async_client_impl.cc:33] Joining completionThread
proxy_1 | [2020-01-16 17:55:13.934][62][debug][grpc] [source/common/grpc/google_async_client_impl.cc:66] completionThread exiting
proxy_1 | [2020-01-16 17:55:13.934][1][debug][grpc] [source/common/grpc/google_async_client_impl.cc:35] Joined completionThread
proxy_1 | [2020-01-16 17:55:13.934][1][critical][main] [source/server/server.cc:93] error initializing configuration '/usr/local/workday-sage-envoy/envoy.yaml': Didn't find a registered implementation for name: 'envoy.transport_sockets.tls'
proxy_1 | [2020-01-16 17:55:13.935][1][info][main] [source/server/server.cc:560] exiting
proxy_1 | Didn't find a registered implementation for name: 'envoy.transport_sockets.tls'

mattklein123 commented 4 years ago

You must be using a non-master binary or have some other binary issue. Sorry, I don't have any other ideas for you.
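A side note on the last error above: "Didn't find a registered implementation for name: 'envoy.transport_sockets.tls'" typically means the config references the TLS transport socket extension by name, but the binary in use doesn't register it. On binaries that do, the deprecated cluster-level `tls_context` is expressed as an explicit transport socket, roughly like this (a sketch, assuming the v2 `UpstreamTlsContext` message):

```yaml
- name: dynamic_forward_proxy_cluster
  connect_timeout: 1s
  lb_policy: CLUSTER_PROVIDED
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.api.v2.auth.UpstreamTlsContext
      common_tls_context:
        validation_context:
          trusted_ca: { filename: /etc/envoy/envoy.crt }
```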

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

stale[bot] commented 4 years ago

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.