projectcontour / contour

Contour is a Kubernetes ingress controller using Envoy proxy.
https://projectcontour.io
Apache License 2.0

Gateway provisioned envoy not listening on IPv6 in dual stack cluster #5557

Closed cehoffman closed 1 year ago

cehoffman commented 1 year ago

What steps did you take and what happened: I deployed the Contour gateway provisioner following the getting started documentation. The provisioner came up successfully. I then created a Gateway instance. The envoy pods never became ready because the kubelet was health checking the pods' IPv6 address, but nothing was listening on that address. I then added a ContourDeployment configuration referenced by the GatewayClass, setting all address fields to [::] and then to ::. Envoy behaved the same either way. With an address of :: in the contour configuration, the contour pods entered a crash loop with the following logs.

k logs -n projectcontour --previous contour-ceh-im-86f7f58d6b-h4rhh

```
time="2023-07-15T14:10:59Z" level=info msg="maxprocs: Updating GOMAXPROCS=1: using minimum allowed GOMAXPROCS"
time="2023-07-15T14:10:59Z" level=info msg="args: [serve --incluster --xds-address=0.0.0.0 --xds-port=8001 --contour-cafile=/certs/ca.crt --contour-cert-file=/certs/tls.crt --contour-key-file=/certs/tls.key --contour-config-name=contourconfig-ceh-im --leader-election-resource-name=leader-elect-ceh-im --envoy-service-name=envoy-ceh-im --kubernetes-debug=0]"
time="2023-07-15T14:10:59Z" level=info msg="Watching Service for Ingress status" envoy-service-name=envoy-ceh-im envoy-service-namespace=projectcontour
time="2023-07-15T14:10:59Z" level=info msg="Starting EventSource" caller="logr.go:278" context=kubernetes controller=httproute-controller source="kind source: *v1beta1.HTTPRoute"
time="2023-07-15T14:10:59Z" level=info msg="Starting Controller" caller="logr.go:278" context=kubernetes controller=httproute-controller
time="2023-07-15T14:10:59Z" level=info msg="Starting EventSource" caller="logr.go:278" context=kubernetes controller=grpcroute-controller source="kind source: *v1alpha2.GRPCRoute"
time="2023-07-15T14:10:59Z" level=info msg="Starting Controller" caller="logr.go:278" context=kubernetes controller=grpcroute-controller
time="2023-07-15T14:10:59Z" level=info msg="Starting EventSource" caller="logr.go:278" context=kubernetes controller=tlsroute-controller source="kind source: *v1alpha2.TLSRoute"
time="2023-07-15T14:10:59Z" level=info msg="Starting Controller" caller="logr.go:278" context=kubernetes controller=tlsroute-controller
time="2023-07-15T14:10:59Z" level=info msg="started HTTP server" address="0.0.0.0:8000" context=metricsvc
time="2023-07-15T14:10:59Z" level=info msg="started event handler" context=contourEventHandler
time="2023-07-15T14:10:59Z" level=info msg="started HTTP server" address="127.0.0.1:6060" context=debugsvc
time="2023-07-15T14:10:59Z" level=info msg="started HTTP server" address="[::]:8000" context=healthsvc
time="2023-07-15T14:10:59Z" level=error msg="terminated HTTP server with error" context=healthsvc error="listen tcp 0.0.0.0:8000: bind: address already in use"
time="2023-07-15T14:10:59Z" level=info msg="waiting for informer caches to sync" context=xds
time="2023-07-15T14:10:59Z" level=info msg="Stopping and waiting for non leader election runnables" caller="internal.go:581" context=kubernetes
time="2023-07-15T14:10:59Z" level=info msg="stopped event handler" context=contourEventHandler
time="2023-07-15T14:10:59Z" level=info msg="attempting to acquire leader lease projectcontour/leader-elect-ceh-im...\n" caller="leaderelection.go:248" context=kubernetes
time="2023-07-15T14:10:59Z" level=error msg="error received after stop sequence was engaged" caller="internal.go:555" context=kubernetes error="informer cache failed to sync"
time="2023-07-15T14:10:59Z" level=error msg="terminated HTTP server with error" context=debugsvc error="http: Server closed"
time="2023-07-15T14:10:59Z" level=error msg="error received after stop sequence was engaged" caller="internal.go:555" context=kubernetes error="http: Server closed"
time="2023-07-15T14:10:59Z" level=error msg="terminated HTTP server with error" context=metricsvc error="http: Server closed"
time="2023-07-15T14:10:59Z" level=error msg="error received after stop sequence was engaged" caller="internal.go:555" context=kubernetes error="http: Server closed"
time="2023-07-15T14:10:59Z" level=error msg="failed to get informer from cache" caller="source.go:148" context=kubernetes error="Timeout: failed waiting for *v1beta1.HTTPRoute Informer to sync" logger=controller-runtime.source
time="2023-07-15T14:10:59Z" level=info msg="Starting workers" caller="logr.go:278" context=kubernetes controller=httproute-controller worker count=1
time="2023-07-15T14:10:59Z" level=info msg="Shutdown signal received, waiting for all workers to finish" caller="logr.go:278" context=kubernetes controller=httproute-controller
time="2023-07-15T14:10:59Z" level=info msg="All workers finished" caller="logr.go:278" context=kubernetes controller=httproute-controller
time="2023-07-15T14:10:59Z" level=error msg="failed to get informer from cache" caller="source.go:148" context=kubernetes error="Timeout: failed waiting for *v1alpha2.GRPCRoute Informer to sync" logger=controller-runtime.source
time="2023-07-15T14:10:59Z" level=info msg="Starting workers" caller="logr.go:278" context=kubernetes controller=grpcroute-controller worker count=1
time="2023-07-15T14:10:59Z" level=info msg="Shutdown signal received, waiting for all workers to finish" caller="logr.go:278" context=kubernetes controller=grpcroute-controller
time="2023-07-15T14:10:59Z" level=info msg="All workers finished" caller="logr.go:278" context=kubernetes controller=grpcroute-controller
time="2023-07-15T14:10:59Z" level=error msg="failed to get informer from cache" caller="source.go:148" context=kubernetes error="Timeout: failed waiting for *v1alpha2.TLSRoute Informer to sync" logger=controller-runtime.source
time="2023-07-15T14:10:59Z" level=info msg="Starting workers" caller="logr.go:278" context=kubernetes controller=tlsroute-controller worker count=1
time="2023-07-15T14:10:59Z" level=info msg="Shutdown signal received, waiting for all workers to finish" caller="logr.go:278" context=kubernetes controller=tlsroute-controller
time="2023-07-15T14:10:59Z" level=info msg="All workers finished" caller="logr.go:278" context=kubernetes controller=tlsroute-controller
time="2023-07-15T14:10:59Z" level=info msg="Stopping and waiting for leader election runnables" caller="internal.go:585" context=kubernetes
time="2023-07-15T14:10:59Z" level=info msg="started status update handler" context=StatusUpdateHandler
time="2023-07-15T14:10:59Z" level=info msg="received a new address for status.loadBalancer" context=loadBalancerStatusWriter loadbalancer-address="2600:1700:8e41:4bc3::3"
time="2023-07-15T14:10:59Z" level=info msg="stopped status update handler" context=StatusUpdateHandler
time="2023-07-15T14:10:59Z" level=info msg="Stopping and waiting for caches" caller="internal.go:591" context=kubernetes
time="2023-07-15T14:10:59Z" level=info msg="Stopping and waiting for webhooks" caller="internal.go:595" context=kubernetes
time="2023-07-15T14:10:59Z" level=info msg="Wait completed, proceeding to shutdown the manager" caller="internal.go:599" context=kubernetes
time="2023-07-15T14:10:59Z" level=error msg="error retrieving resource lock projectcontour/leader-elect-ceh-im: Get \"https://[fd00:beef::1]:443/apis/coordination.k8s.io/v1/namespaces/projectcontour/leases/leader-elect-ceh-im\": context canceled\n" caller="leaderelection.go:330" context=kubernetes error=""
time="2023-07-15T14:10:59Z" level=fatal msg="Contour server failed" error="listen tcp 0.0.0.0:8000: bind: address already in use"
```

These are the logs from envoy when the contour pods were not in a crash loop, i.e. when I had no ContourDeployment set up or when the addresses were set to [::].

k logs -n projectcontour envoy-ceh-im-72hzp -c envoy ``` [2023-07-15 14:23:16.028][1][info][main] [source/server/server.cc:404] initializing epoch 0 (base id=0, hot restart version=11.104) [2023-07-15 14:23:16.028][1][info][main] [source/server/server.cc:406] statically linked extensions: [2023-07-15 14:23:16.028][1][info][main] [source/server/server.cc:408] envoy.udp_packet_writer: envoy.udp_packet_writer.default, envoy.udp_packet_writer.gso [2023-07-15 14:23:16.028][1][info][main] [source/server/server.cc:408] envoy.path.rewrite: envoy.path.rewrite.uri_template.uri_template_rewriter [2023-07-15 14:23:16.028][1][info][main] [source/server/server.cc:408] envoy.http.cache: envoy.extensions.http.cache.file_system_http_cache, envoy.extensions.http.cache.simple [2023-07-15 14:23:16.028][1][info][main] [source/server/server.cc:408] envoy.matching.network.input: envoy.matching.inputs.application_protocol, envoy.matching.inputs.destination_ip, envoy.matching.inputs.destination_port, envoy.matching.inputs.direct_source_ip, envoy.matching.inputs.dns_san, envoy.matching.inputs.filter_state, envoy.matching.inputs.server_name, envoy.matching.inputs.source_ip, envoy.matching.inputs.source_port, envoy.matching.inputs.source_type, envoy.matching.inputs.subject, envoy.matching.inputs.transport_protocol, envoy.matching.inputs.uri_san [2023-07-15 14:23:16.028][1][info][main] [source/server/server.cc:408] envoy.rbac.matchers: envoy.rbac.matchers.upstream_ip_port [2023-07-15 14:23:16.028][1][info][main] [source/server/server.cc:408] envoy.upstreams: envoy.filters.connection_pools.tcp.generic [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.http.stateful_session: envoy.http.stateful_session.cookie, envoy.http.stateful_session.header [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.dubbo_proxy.protocols: dubbo [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.http.original_ip_detection: envoy.http.original_ip_detection.custom_header, envoy.http.original_ip_detection.xff [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.quic.connection_id_generator: envoy.quic.deterministic_connection_id_generator [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.path.match: envoy.path.match.uri_template.uri_template_matcher [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.http_11_proxy, envoy.transport_sockets.internal_upstream, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tcp_stats, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, starttls, tls [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.upstream_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions, envoy.extensions.upstreams.tcp.v3.TcpProtocolOptions, envoy.upstreams.http.http_protocol_options, envoy.upstreams.tcp.tcp_protocol_options [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.dubbo_proxy.filters: envoy.filters.dubbo.router [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.regex_engines: envoy.regex_engines.google_re2 [2023-07-15 
14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.matching.http.input: envoy.matching.inputs.destination_ip, envoy.matching.inputs.destination_port, envoy.matching.inputs.direct_source_ip, envoy.matching.inputs.dns_san, envoy.matching.inputs.request_headers, envoy.matching.inputs.request_trailers, envoy.matching.inputs.response_headers, envoy.matching.inputs.response_trailers, envoy.matching.inputs.server_name, envoy.matching.inputs.source_ip, envoy.matching.inputs.source_port, envoy.matching.inputs.source_type, envoy.matching.inputs.status_code_class_input, envoy.matching.inputs.status_code_input, envoy.matching.inputs.subject, envoy.matching.inputs.uri_san, query_params [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.thrift_proxy.filters: envoy.filters.thrift.header_to_metadata, envoy.filters.thrift.payload_to_metadata, envoy.filters.thrift.rate_limit, envoy.filters.thrift.router [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.wasm.runtime: envoy.wasm.runtime.null, envoy.wasm.runtime.v8 [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.compression.decompressor: envoy.compression.brotli.decompressor, envoy.compression.gzip.decompressor, envoy.compression.zstd.decompressor [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.matching.action: envoy.matching.actions.format_string, filter-chain-name [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.quic.proof_source: envoy.quic.proof_source.filter_chain [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.compression.compressor: envoy.compression.brotli.compressor, envoy.compression.gzip.compressor, envoy.compression.zstd.compressor [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.request_id: envoy.request_id.uuid [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.route.early_data_policy: envoy.route.early_data_policy.default [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.retry_priorities: envoy.retry_priorities.previous_priorities [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.thrift_proxy.transports: auto, framed, header, unframed [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.tls.cert_validator: envoy.tls.cert_validator.default, envoy.tls.cert_validator.spiffe [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.load_balancing_policies: envoy.load_balancing_policies.least_request, envoy.load_balancing_policies.maglev, envoy.load_balancing_policies.random, envoy.load_balancing_policies.ring_hash, envoy.load_balancing_policies.round_robin [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.http.stateful_header_formatters: envoy.http.stateful_header_formatters.preserve_case, preserve_case [2023-07-15 
14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.http.header_validators: envoy.http.header_validators.envoy_default [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.filters.network: envoy.echo, envoy.ext_authz, envoy.filters.network.connection_limit, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.wasm, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.tracers: envoy.dynamic.ot, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.opencensus, envoy.tracers.opentelemetry, envoy.tracers.skywalking, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.network.dns_resolver: envoy.network.dns_resolver.cares, envoy.network.dns_resolver.getaddrinfo [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.common.key_value: envoy.key_value.file_based [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.matching.network.custom_matchers: envoy.matching.custom_matchers.trie_matcher [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.http.custom_response: envoy.extensions.http.custom_response.local_response_policy, envoy.extensions.http.custom_response.redirect_policy [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.local_ratelimit, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.matching.http.custom_matchers: envoy.matching.custom_matchers.trie_matcher [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.dubbo_proxy.serializers: dubbo.hessian2 [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tcp_stats, envoy.transport_sockets.tls, raw_buffer, starttls, tls [2023-07-15 14:23:16.029][1][info][main] 
[source/server/server.cc:408] envoy.health_checkers: envoy.health_checkers.grpc, envoy.health_checkers.http, envoy.health_checkers.redis, envoy.health_checkers.tcp, envoy.health_checkers.thrift [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] network.connection.client: default, envoy_internal [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.connection_handler: envoy.connection_handler.default [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.http.early_header_mutation: envoy.http.early_header_mutation.header_mutation [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.bootstrap: envoy.bootstrap.internal_listener, envoy.bootstrap.wasm, envoy.extensions.network.socket_interface.default_socket_interface [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.access_loggers.extension_filters: envoy.access_loggers.extension_filters.cel [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.filters.http: envoy.bandwidth_limit, envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.ext_proc, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.alternate_protocols_cache, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.bandwidth_limit, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.composite, envoy.filters.http.compressor, envoy.filters.http.connect_grpc_bridge, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.custom_response, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.ext_authz, envoy.filters.http.ext_proc, envoy.filters.http.fault, envoy.filters.http.file_system_buffer, envoy.filters.http.gcp_authn, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.header_mutation, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.match_delegate, envoy.filters.http.oauth2, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.rate_limit_quota, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.set_metadata, envoy.filters.http.stateful_session, envoy.filters.http.tap, envoy.filters.http.wasm, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.health_check, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.resolvers: envoy.ip [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.listener_manager_impl: envoy.listener_manager_impl.default, envoy.listener_manager_impl.validation [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.filters.http.upstream: envoy.buffer, envoy.filters.http.admission_control, envoy.filters.http.buffer, envoy.filters.http.header_mutation, envoy.filters.http.upstream_codec [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.config.validators: envoy.config.validators.minimum_clusters, envoy.config.validators.minimum_clusters_validator 
[2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.formatter: envoy.formatter.metadata, envoy.formatter.req_without_query [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] quic.http_server_connection: quic.http_server_connection.default [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.matching.input_matchers: envoy.matching.matchers.consistent_hashing, envoy.matching.matchers.ip [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.config_subscription: envoy.config_subscription.filesystem, envoy.config_subscription.filesystem_collection, envoy.config_subscription.rest [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.stats_sinks: envoy.dog_statsd, envoy.graphite_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.graphite_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.stat_sinks.wasm, envoy.statsd [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.open_telemetry, envoy.access_loggers.stderr, envoy.access_loggers.stdout, envoy.access_loggers.tcp_grpc, envoy.access_loggers.wasm, envoy.file_access_log, envoy.http_grpc_access_log, envoy.open_telemetry_access_log, envoy.stderr_access_log, envoy.stdout_access_log, envoy.tcp_grpc_access_log, envoy.wasm_access_log [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.matching.common_inputs: envoy.matching.common_inputs.environment_variable [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.quic.server_preferred_address: quic.server_preferred_address.fixed [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.quic.server.crypto_stream: envoy.quic.crypto_stream.server.quiche [2023-07-15 14:23:16.029][1][info][main] [source/server/server.cc:408] envoy.rate_limit_descriptors: envoy.rate_limit_descriptors.expr [2023-07-15 14:23:16.030][1][info][main] [source/server/server.cc:456] HTTP header map info: [2023-07-15 14:23:16.031][1][info][main] [source/server/server.cc:459] request header map: 672 bytes: 
:authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-headers,access-control-request-method,access-control-request-private-network,authentication,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,proxy-status,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-is-timeout-retry,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-envoy-upstream-stream-duration-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-host,x-forwarded-port,x-forwarded-proto,x-ot-span-context,x-request-id [2023-07-15 14:23:16.031][1][info][main] [source/server/server.cc:459] request trailer map: 120 bytes: [2023-07-15 14:23:16.031][1][info][main] [source/server/server.cc:459] response header map: 432 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-allow-private-network,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,proxy-status,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id [2023-07-15 14:23:16.031][1][info][main] [source/server/server.cc:459] response trailer map: 144 bytes: grpc-message,grpc-status [2023-07-15 14:23:16.067][1][info][main] [source/server/server.cc:827] runtime: layers: - name: base static_layer: re2.max_program_size.error_level: 1048576 re2.max_program_size.warn_level: 1000 - name: dynamic rtds_layer: name: dynamic rtds_config: api_config_source: api_type: GRPC grpc_services: - envoy_grpc: cluster_name: contour authority: contour transport_api_version: V3 resource_api_version: V3 - name: admin admin_layer: {} [2023-07-15 14:23:16.068][1][info][admin] [source/server/admin/admin.cc:66] admin address: /admin/admin.sock [2023-07-15 14:23:16.068][1][info][config] [source/server/configuration_impl.cc:131] loading tracing configuration [2023-07-15 14:23:16.068][1][info][config] [source/server/configuration_impl.cc:91] loading 0 static secret(s) [2023-07-15 14:23:16.068][1][info][config] [source/server/configuration_impl.cc:97] loading 2 cluster(s) [2023-07-15 14:23:16.072][1][info][config] [source/server/configuration_impl.cc:101] loading 0 listener(s) [2023-07-15 14:23:16.072][1][info][config] [source/server/configuration_impl.cc:113] loading stats configuration [2023-07-15 14:23:16.074][1][info][main] [source/server/server.cc:923] starting main dispatch loop [2023-07-15 14:23:31.075][1][warning][config] 
[source/common/config/grpc_subscription_impl.cc:120] gRPC config: initial fetch timed out for type.googleapis.com/envoy.service.runtime.v3.Runtime [2023-07-15 14:23:31.075][1][info][runtime] [source/common/runtime/runtime_impl.cc:463] RTDS has finished initialization [2023-07-15 14:23:31.075][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:221] cm init: initializing cds [2023-07-15 14:23:31.076][1][warning][main] [source/server/server.cc:802] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections [2023-07-15 14:23:33.105][1][info][upstream] [source/common/upstream/cds_api_helper.cc:32] cds: add 0 cluster(s), remove 2 cluster(s) [2023-07-15 14:23:33.105][1][info][upstream] [source/common/upstream/cds_api_helper.cc:69] cds: added/updated 0 cluster(s), skipped 0 unmodified cluster(s) [2023-07-15 14:23:33.105][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:225] cm init: all clusters initialized [2023-07-15 14:23:33.105][1][info][main] [source/server/server.cc:904] all clusters initialized. initializing init manager [2023-07-15 14:23:33.138][1][info][upstream] [source/extensions/listener_managers/listener_manager/lds_api.cc:79] lds: add/update listener 'envoy-admin' [2023-07-15 14:23:33.141][1][info][upstream] [source/extensions/listener_managers/listener_manager/lds_api.cc:79] lds: add/update listener 'stats-health' [2023-07-15 14:23:33.141][1][info][config] [source/extensions/listener_managers/listener_manager/listener_manager_impl.cc:858] all dependencies initialized. starting workers ```
netstat -tlnp in envoy pod

```
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp6       0      0 :::8090                 :::*                    LISTEN      -
```
GatewayClass

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: contour
spec:
  controllerName: projectcontour.io/gateway-controller
  parametersRef:
    group: projectcontour.io
    kind: ContourDeployment
    name: contour-config
    namespace: projectcontour
```
ContourDeployment

```yaml
apiVersion: projectcontour.io/v1alpha1
kind: ContourDeployment
metadata:
  name: contour-config
  namespace: projectcontour
spec:
  contour:
    deployment:
      replicas: 2
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ''
    resources:
      limits:
        cpu: 500m
        memory: 256Mi
      requests:
        cpu: 500m
        memory: 256Mi
  envoy:
    deployment:
      replicas: 2
    networkPublishing:
      externalTrafficPolicy: Local
      type: LoadBalancerService
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ''
    resources:
      limits:
        cpu: 500m
        memory: 256Mi
      requests:
        cpu: 500m
        memory: 256Mi
    workloadType: Deployment
  runtimeSettings:
    envoy:
      health:
        address: '[::]'
      http:
        address: '[::]'
      https:
        address: '[::]'
      metrics:
        address: '[::]'
    health:
      address: '[::]'
    xdsServer:
      address: '[::]'
```
Gateway

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: ceh-im
  namespace: projectcontour
spec:
  gatewayClassName: contour
  listeners:
    - allowedRoutes:
        namespaces:
          from: All
      hostname: '*.ceh.im'
      name: http
      port: 80
      protocol: HTTP
    - allowedRoutes:
        namespaces:
          from: All
      hostname: '*.ceh.im'
      name: https
      port: 443
      protocol: HTTPS
      tls:
        certificateRefs:
          - name: wildcard-ceh-im
        mode: Terminate
```

What did you expect to happen:

I expected the envoy pods to become ready and that I could start creating HTTPRoute resources for the gateway.
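
For example, something along these lines is what I planned to apply once the gateway was ready. This is only a sketch: the route name, namespace, and the "echo" backend Service are placeholders, not resources that exist in my cluster.

```yaml
# Example HTTPRoute attached to the Gateway above. The "echo" Service and
# hostname are placeholders; the parentRef points at the Gateway created earlier.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: echo
  namespace: default
spec:
  parentRefs:
    - name: ceh-im
      namespace: projectcontour
  hostnames:
    - echo.ceh.im
  rules:
    - backendRefs:
        - name: echo
          port: 80
```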

Anything else you would like to add:

I have IPv6 listed first in the pod CIDRs and IPv4 second. This makes the default address for all pods IPv6, and the default address is the one the kubelet uses for health checks.
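
For illustration, this is roughly how that ordering looks in a kubeadm-style dual-stack cluster. The CIDR values below are placeholders, not my actual ranges.

```yaml
# Hypothetical kubeadm ClusterConfiguration excerpt: listing the IPv6 range
# first makes IPv6 the primary pod/service address family, which is the
# address the kubelet then uses for readiness and liveness probes.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: fd00:10:244::/56,10.244.0.0/16
  serviceSubnet: fd00:10:96::/112,10.96.0.0/16
```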

Environment:

github-actions[bot] commented 1 year ago

Hey @cehoffman! Thanks for opening your first issue. We appreciate your contribution and welcome you to our community! We are glad to have you here and to have your input on Contour. You can also join us on our mailing list and in our channel in the Kubernetes Slack Workspace

cehoffman commented 1 year ago

After looking through a few IPv6 issues, I tried explicitly setting some flags on the contour pod, i.e. the serve container (a sketch of where these land in the Deployment follows the list).

        - '--xds-address=::'
        - '--envoy-service-https-address=::'
        - '--envoy-service-http-address=::'
        - '--stats-address=::'
        - '--debug-http-address=::'
        - '--http-address=::'
        - '--health-address=::'
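
For context, these go on the contour serve container in the Deployment that the gateway provisioner generated for my Gateway. A minimal sketch of that container spec, with only the args relevant here shown (the Deployment name is taken from the pod name in the logs above; everything else in the generated Deployment is omitted):

```yaml
# Illustrative excerpt only: the surrounding Deployment is generated by the
# gateway provisioner; the :: flags are the ones I added by hand.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contour-ceh-im
  namespace: projectcontour
spec:
  template:
    spec:
      containers:
        - name: contour
          args:
            - serve
            - --incluster
            - '--xds-address=::'
            - '--envoy-service-https-address=::'
            - '--envoy-service-http-address=::'
            - '--stats-address=::'
            - '--debug-http-address=::'
            - '--http-address=::'
            - '--health-address=::'
```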

I expect the ContourDeployment should allow me to set these as well, but I wanted to be sure. It made no difference to the envoy pods, even after I restarted them.

I then went back to using the ContourDeployment and dug into why contour was crash looping. When I set all the addresses to ::, I had missed the metrics block address: contour was trying to bind health on :: and metrics on 0.0.0.0, and since the two services share port 8000, it crashed with the port already bound. After adding the metrics block with address :: as well, the envoy pods still didn't become ready, but the crash loop stopped, even after restarting the pods.

I then deleted the Gateway instance and re-applied it after all the pods had drained. On this iteration everything came up and the envoy pods became ready.

In summary, I updated the runtimeSettings for the ContourDeployment to:

  runtimeSettings:
    envoy:
      health:
        address: '::'
      http:
        address: '::'
      https:
        address: '::'
      metrics:
        address: '::'
    health:
      address: '::'
    metrics:
      address: '::'
    xdsServer:
      address: '::'

This does seem like a lot to have to supply, but at least things are in a ready state now. Having these configuration points is still much better than Istio, where I'm stuck with a similar problem for envoy readiness checks. Next I'll try to actually serve something through some route definitions.

It seems like it would be nice to have a top-level configuration on the gateway/contour side to set an IP family, e.g. IPv4, IPv6, or DualStack, something like the sketch below.
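
Purely as a hypothetical illustration (this field does not exist in the ContourDeployment API today), it could look something like:

```yaml
# Hypothetical, not a real field: a single switch that would set all of the
# contour and envoy bind addresses (::, 0.0.0.0, or both) in one place.
apiVersion: projectcontour.io/v1alpha1
kind: ContourDeployment
metadata:
  name: contour-config
  namespace: projectcontour
spec:
  runtimeSettings:
    ipFamily: DualStack   # IPv4 | IPv6 | DualStack (hypothetical)
```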

github-actions[bot] commented 1 year ago

The Contour project currently lacks enough contributors to adequately respond to all Issues.

This bot triages Issues according to the following rules:

You can:

Please send feedback to the #contour channel in the Kubernetes Slack
