kumahq / kuma-website

🐻 The official website for Kuma, the control plane for modern service connectivity.
https://kuma.io
Apache License 2.0

MeshGateway TLS listener doesn't validate #1119

Closed: conman2305 closed this issue 1 year ago

conman2305 commented 1 year ago

What happened?

Trying to create a MeshGateway with a TLS listener using the following configuration:

---
apiVersion: kuma.io/v1alpha1
kind: MeshGateway
mesh: dev
metadata:
  name: dsr-gateway
spec:
  conf:
    listeners:
    - port: 8181
      protocol: HTTPS
      tls:
        mode: TERMINATE
        certificate:
          secret: dsr-cert
  selectors:
  - match:
      kuma.io/service: dsr-gateway
---
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayRoute
mesh: dev
metadata:
  name: dsr-gateway-route
spec:
  selectors:
    - match:
        kuma.io/service: dsr-gateway
  conf:
    http:
      rules:
        - matches:
            - path:
                match: PREFIX
                value: /
          backends:
            - destination:
                kuma.io/service: demo-service-rust_test_svc_8181

and I'm getting the following error during apply:

The MeshGateway "dsr-gateway" is invalid: spec.conf.listeners[0].tls.certificates: cannot be empty in TLS termination mode

I'm following the documentation here: https://kuma.io/docs/1.8.x/policies/mesh-gateway/#tls-termination

Kuma version: 1.8.1

This is a multi-zone and multi-mesh deployment, if that changes anything.

Thank you!

michaelbeaumont commented 1 year ago

The docs are incorrect. Try:

        certificates:
          - secret: dsr-cert
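
Applied to the manifest from the original report, the listener block (a sketch, unchanged apart from the `certificates` fix) becomes:

```yaml
spec:
  conf:
    listeners:
    - port: 8181
      protocol: HTTPS
      tls:
        mode: TERMINATE
        certificates:        # plural: a list of secret references
        - secret: dsr-cert
```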

conman2305 commented 1 year ago

Awesome, that validated!

Now, part two of the problem appears to be the MeshGatewayInstance:

---
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayInstance
metadata:
  name: dsr-gateway
  namespace: dev-mesh
spec:
  replicas: 3
  resources:
    limits:
      cpu: 1000m
    requests:
      cpu: 100m
  serviceType: ClusterIP
  tags:
    kuma.io/service: dsr-gateway

It doesn't look like the Envoy instance is getting any listener configuration:

[2022-11-02 17:56:49.970][21][info][main] [source/server/server.cc:786] runtime: layers:
  - name: kuma
    static_layer:
      envoy.restart_features.use_apple_api_for_dns_lookups: false
      re2.max_program_size.warn_level: 1000
      re2.max_program_size.error_level: 4294967295
  - name: gateway
    static_layer:
      overload.global_downstream_max_connections: 50000
  - name: gateway.listeners
    rtds_layer:
      name: gateway.listeners
      rtds_config:
        ads:
          {}
        resource_api_version: V3

michaelbeaumont commented 1 year ago

Can you check the status of your MeshGatewayInstance? There may be an error condition.

I think you need to add the kuma.io/mesh: dev annotation.
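
For reference, a sketch of the MeshGatewayInstance metadata from above with that annotation added:

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayInstance
metadata:
  name: dsr-gateway
  namespace: dev-mesh
  annotations:
    kuma.io/mesh: dev  # assigns the gateway to the dev mesh instead of default
```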

conman2305 commented 1 year ago

Seems to be reporting as ready. I tossed the annotation on as well:

kubectl describe meshgatewayinstance dsr-gateway -n dev-mesh
Name:         dsr-gateway
Namespace:    dev-mesh
Labels:       <none>
Annotations:  kuma.io/mesh: dev
API Version:  kuma.io/v1alpha1
Kind:         MeshGatewayInstance
Metadata:
  Creation Timestamp:  2022-11-02T17:53:51Z
  Generation:          1
  Managed Fields:
    API Version:  kuma.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
          .:
          k:{"type":"Ready"}:
            .:
            f:lastTransitionTime:
            f:message:
            f:observedGeneration:
            f:reason:
            f:status:
            f:type:
        f:loadBalancer:
    Manager:      kuma-cp
    Operation:    Update
    Subresource:  status
    Time:         2022-11-02T17:53:51Z
    API Version:  kuma.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
          f:kuma.io/mesh:
      f:spec:
        .:
        f:replicas:
        f:resources:
          .:
          f:limits:
            .:
            f:cpu:
          f:requests:
            .:
            f:cpu:
        f:serviceType:
        f:tags:
          .:
          f:kuma.io/service:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2022-11-02T18:04:13Z
  Resource Version:  8873538
  UID:               a49d9e0f-7103-4945-a945-b8e4f5edc2fa
Spec:
  Replicas:  3
  Resources:
    Limits:
      Cpu:  1000m
    Requests:
      Cpu:       100m
  Service Type:  ClusterIP
  Tags:
    kuma.io/service:  dsr-gateway
Status:
  Conditions:
    Last Transition Time:  2022-11-02T17:56:46Z
    Message:
    Observed Generation:   1
    Reason:                DeploymentNotAvailable
    Status:                False
    Type:                  Ready
  Load Balancer:
Events:  <none>
michaelbeaumont commented 1 year ago

It has

Status:
  Conditions:
    Last Transition Time:  2022-11-02T17:56:46Z
    Message:
    Observed Generation:   1
    Reason:                DeploymentNotAvailable
    Status:                False
    Type:                  Ready
  Load Balancer:

What's the status of the Deployment?

conman2305 commented 1 year ago
$ kubectl describe deployment dsr-gateway -n dev-mesh
Name:                   dsr-gateway
Namespace:              dev-mesh
CreationTimestamp:      Wed, 02 Nov 2022 13:53:51 -0400
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=dsr-gateway
Replicas:               3 desired | 3 updated | 3 total | 0 available | 3 unavailable

All 3 replicas are in a restart loop due to a failed health check:

Events:
  Type     Reason                Age                    From                             Message
  ----     ------                ----                   ----                             -------
  Normal   Scheduled             5m56s                  default-scheduler                Successfully assigned dev-mesh/dsr-gateway-6d7d684c95-cb5rq to ip-10-69-241-5.us-east-2.compute.internal
  Normal   Pulled                5m55s                  kubelet                          Container image "docker.io/kumahq/kuma-dp:1.8.1" already present on machine
  Normal   Created               5m55s                  kubelet                          Created container kuma-gateway
  Normal   Started               5m55s                  kubelet                          Started container kuma-gateway
  Normal   CreatedKumaDataplane  5m55s                  k8s.kuma.io/dataplane-generator  Created Kuma Dataplane: dsr-gateway-6d7d684c95-cb5rq
  Warning  Unhealthy             4m36s (x4 over 4m51s)  kubelet                          Liveness probe failed: Get "http://100.80.226.160:9901/ready": dial tcp 100.80.226.160:9901: connect: connection refused
  Warning  Unhealthy             51s (x69 over 5m54s)   kubelet                          Readiness probe failed: Get "http://100.80.226.160:9901/ready": dial tcp 100.80.226.160:9901: connect: connection refused
conman2305 commented 1 year ago

For the heck of it, I disabled the TLS termination on the MeshGateway and all 3 replicas do come up and correctly answer for traffic.

michaelbeaumont commented 1 year ago

Can you post the logs of the Pods?

conman2305 commented 1 year ago
2022-11-02T18:11:42.916Z    INFO    kuma-dp.run generated configurations will be stored in a temporary directory    {"dir": "/tmp/kuma-dp-4162614990"}
2022-11-02T18:11:42.969Z    INFO    kuma-dp.run fetched Envoy version   {"version": {"Build":"ae27fb5280d30e1400b7e9c9cbd448bfcd4ad9f5/1.22.1/Modified/RELEASE/BoringSSL","Version":"1.22.1","KumaDpCompatible":true}}
2022-11-02T18:11:42.969Z    INFO    kuma-dp.run generating bootstrap configuration
2022-11-02T18:11:42.969Z    INFO    dataplane   trying to fetch bootstrap configuration from the Control Plane
2022-11-02T18:11:42.992Z    INFO    kuma-dp.run received bootstrap configuration    {"adminPort": 9901}
2022-11-02T18:11:42.994Z    INFO    kuma-dp.run starting Kuma DP    {"version": "1.8.1"}
2022-11-02T18:11:42.994Z    INFO    kuma-dp.run.access-log-streamer starting resilient component ...
2022-11-02T18:11:42.994Z    INFO    access-log-streamer cleaning existing access log pipe   {"file": "/tmp/kuma-al-dsr-gateway-6d7d684c95-btw89.dev-mesh-dev.sock"}
2022-11-02T18:11:42.994Z    INFO    access-log-streamer creating access log pipe    {"file": "/tmp/kuma-al-dsr-gateway-6d7d684c95-btw89.dev-mesh-dev.sock"}
2022-11-02T18:11:42.994Z    INFO    kuma-dp.run.envoy   bootstrap configuration saved to a file {"file": "/tmp/kuma-dp-4162614990/bootstrap.yaml"}
2022-11-02T18:11:42.994Z    INFO    kuma-dp.run.envoy   starting Envoy  {"path": "/usr/bin/envoy", "arguments": ["--config-path", "/tmp/kuma-dp-4162614990/bootstrap.yaml", "--drain-time-s", "30", "--disable-hot-restart", "--log-level", "info", "--concurrency", "2"]}
2022-11-02T18:11:42.995Z    INFO    kuma-dp.run.dns-server  configuration saved to a file   {"file": "/tmp/kuma-dp-4162614990/Corefile"}
2022-11-02T18:11:42.995Z    INFO    kuma-dp.run.dns-server  starting DNS Server (coredns)   {"args": ["-conf", "/tmp/kuma-dp-4162614990/Corefile", "-quiet"]}
2022-11-02T18:11:42.995Z    INFO    metrics-hijacker    starting Metrics Hijacker Server    {"socketPath": "unix:///tmp/kuma-mh-dsr-gateway-6d7d684c95-btw89.dev-mesh-dev.sock"}
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:390] initializing epoch 0 (base id=0, hot restart version=disabled)
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:392] statically linked extensions:
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.access_logger.extension_filters: envoy.access_loggers.extension_filters.cel
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.matching.network.input: envoy.matching.inputs.application_protocol, envoy.matching.inputs.destination_ip, envoy.matching.inputs.destination_port, envoy.matching.inputs.direct_source_ip, envoy.matching.inputs.server_name, envoy.matching.inputs.source_ip, envoy.matching.inputs.source_port, envoy.matching.inputs.source_type, envoy.matching.inputs.transport_protocol
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.upstream_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions, envoy.upstreams.http.http_protocol_options
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.resolvers: envoy.ip
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tcp_stats, envoy.transport_sockets.tls, raw_buffer, starttls, tls
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.common.key_value: envoy.key_value.file_based
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.wasm.runtime: envoy.wasm.runtime.null, envoy.wasm.runtime.v8
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.formatter: envoy.formatter.metadata, envoy.formatter.req_without_query
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.quic.server.crypto_stream: envoy.quic.crypto_stream.server.quiche
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.compression.decompressor: envoy.compression.brotli.decompressor, envoy.compression.gzip.decompressor, envoy.compression.zstd.decompressor
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.health_checkers: envoy.health_checkers.redis
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.compression.compressor: envoy.compression.brotli.compressor, envoy.compression.gzip.compressor, envoy.compression.zstd.compressor
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.upstreams: envoy.filters.connection_pools.tcp.generic
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.network.dns_resolver: envoy.network.dns_resolver.cares
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.config.validators: envoy.config.validators.minimum_clusters, envoy.config.validators.minimum_clusters_validator
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.matching.action: composite-action, skip
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.connection_limit, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.wasm, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.http.stateful_header_formatters: preserve_case
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.http.stateful_session: envoy.http.stateful_session.cookie
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.matching.http.input: envoy.matching.inputs.request_headers, envoy.matching.inputs.request_trailers, envoy.matching.inputs.response_headers, envoy.matching.inputs.response_trailers
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.matching.network.custom_matchers: envoy.matching.custom_matchers.trie_matcher
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tcp_stats, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, starttls, tls
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.matching.common_inputs: envoy.matching.common_inputs.environment_variable
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.dubbo_proxy.serializers: dubbo.hessian2
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.thrift_proxy.transports: auto, framed, header, unframed
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.rate_limit_descriptors: envoy.rate_limit_descriptors.expr
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.rbac.matchers: envoy.rbac.matchers.upstream_ip_port
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.request_id: envoy.request_id.uuid
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   network.connection.client: default, envoy_internal
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.tls.cert_validator: envoy.tls.cert_validator.default, envoy.tls.cert_validator.spiffe
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.matching.input_matchers: envoy.matching.matchers.consistent_hashing, envoy.matching.matchers.ip
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.open_telemetry, envoy.access_loggers.stderr, envoy.access_loggers.stdout, envoy.access_loggers.tcp_grpc, envoy.access_loggers.wasm, envoy.file_access_log, envoy.http_grpc_access_log, envoy.open_telemetry_access_log, envoy.stderr_access_log, envoy.stdout_access_log, envoy.tcp_grpc_access_log, envoy.wasm_access_log
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.stats_sinks: envoy.dog_statsd, envoy.graphite_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.graphite_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.stat_sinks.wasm, envoy.statsd
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.skywalking, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.thrift_proxy.filters: envoy.filters.thrift.header_to_metadata, envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.http.original_ip_detection: envoy.http.original_ip_detection.custom_header, envoy.http.original_ip_detection.xff
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.quic.proof_source: envoy.quic.proof_source.filter_chain
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.http.cache: envoy.extensions.http.cache.simple
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.dubbo_proxy.protocols: dubbo
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.filters.http: envoy.bandwidth_limit, envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.ext_proc, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.alternate_protocols_cache, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.bandwidth_limit, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.composite, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.ext_proc, envoy.filters.http.fault, envoy.filters.http.gcp_authn, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth2, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.set_metadata, envoy.filters.http.stateful_session, envoy.filters.http.tap, envoy.filters.http.wasm, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, match-wrapper
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.bootstrap: envoy.bootstrap.internal_listener, envoy.bootstrap.wasm, envoy.extensions.network.socket_interface.default_socket_interface
[2022-11-02 18:11:43.045][21][info][main] [source/server/server.cc:394]   envoy.dubbo_proxy.route_matchers: default
[2022-11-02 18:11:43.052][21][warning][misc] [source/common/protobuf/message_validator_impl.cc:21] Deprecated field: type envoy.extensions.transport_sockets.tls.v3.CertificateValidationContext Using deprecated option 'envoy.extensions.transport_sockets.tls.v3.CertificateValidationContext.match_subject_alt_names' from file common.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/version_history/version_history for details. If continued use of this field is absolutely necessary, see https://www.envoyproxy.io/docs/envoy/latest/configuration/operations/runtime#using-runtime-overrides-for-deprecated-features for how to apply a temporary and highly discouraged override.
[2022-11-02 18:11:43.052][21][info][main] [source/server/server.cc:442] HTTP header map info:
[2022-11-02 18:11:43.053][21][info][main] [source/server/server.cc:445]   request header map: 656 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-headers,access-control-request-method,authentication,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,proxy-status,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-envoy-upstream-stream-duration-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-host,x-forwarded-proto,x-ot-span-context,x-request-id
[2022-11-02 18:11:43.053][21][info][main] [source/server/server.cc:445]   request trailer map: 128 bytes: 
[2022-11-02 18:11:43.053][21][info][main] [source/server/server.cc:445]   response header map: 432 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,proxy-status,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2022-11-02 18:11:43.053][21][info][main] [source/server/server.cc:445]   response trailer map: 152 bytes: grpc-message,grpc-status
[2022-11-02 18:11:43.061][21][info][main] [source/server/server.cc:786] runtime: layers:
  - name: kuma
    static_layer:
      re2.max_program_size.error_level: 4294967295
      re2.max_program_size.warn_level: 1000
      envoy.restart_features.use_apple_api_for_dns_lookups: false
  - name: gateway
    static_layer:
      overload.global_downstream_max_connections: 50000
  - name: gateway.listeners
    rtds_layer:
      name: gateway.listeners
      rtds_config:
        ads:
          {}
        resource_api_version: V3
[2022-11-02 18:11:43.063][21][info][admin] [source/server/admin/admin.cc:134] admin address: 127.0.0.1:9901
[2022-11-02 18:11:43.064][21][info][config] [source/server/configuration_impl.cc:127] loading tracing configuration
[2022-11-02 18:11:43.064][21][info][config] [source/server/configuration_impl.cc:87] loading 1 static secret(s)
[2022-11-02 18:11:43.064][21][info][config] [source/server/configuration_impl.cc:93] loading 2 cluster(s)
[2022-11-02 18:11:43.085][21][info][config] [source/server/configuration_impl.cc:97] loading 0 listener(s)
[2022-11-02 18:11:43.085][21][info][config] [source/server/configuration_impl.cc:109] loading stats configuration
[2022-11-02 18:11:43.086][21][info][main] [source/server/server.cc:882] starting main dispatch loop
[2022-11-02 18:11:58.087][21][warning][config] [source/common/config/grpc_subscription_impl.cc:118] gRPC config: initial fetch timed out for type.googleapis.com/envoy.service.runtime.v3.Runtime
[2022-11-02 18:11:58.087][21][info][runtime] [source/common/runtime/runtime_impl.cc:463] RTDS has finished initialization
[2022-11-02 18:11:58.087][21][info][upstream] [source/common/upstream/cluster_manager_impl.cc:205] cm init: initializing cds
[2022-11-02 18:12:13.088][21][warning][config] [source/common/config/grpc_subscription_impl.cc:118] gRPC config: initial fetch timed out for type.googleapis.com/envoy.config.cluster.v3.Cluster
[2022-11-02 18:12:13.088][21][info][upstream] [source/common/upstream/cluster_manager_impl.cc:209] cm init: all clusters initialized
[2022-11-02 18:12:13.088][21][info][main] [source/server/server.cc:863] all clusters initialized. initializing init manager
michaelbeaumont commented 1 year ago

Is there anything in the kuma-cp logs? How exactly did you create the certificate secret you want to use?

conman2305 commented 1 year ago

Ooh, kuma-cp is getting spammed with:

gateway.Generator failed: failed to generate TLS certificate: missing server certificate

The cert data was created with this command: kumactl generate tls-certificate --type=server --hostname=demo-service-rust --key-file=- --cert-file=- | base64, and then put into a Kuma secret:

apiVersion: v1
data:
  value: <the cert>
kind: Secret
metadata:
  labels:
    kuma.io/mesh: dev
  name: dsr-cert
  namespace: kuma-system
type: system.kuma.io/secret
conman2305 commented 1 year ago

Ah, I just noticed I had put kuma.io/mesh: dev as a label on the cert secret rather than as an annotation, but that doesn't seem to have changed anything. I'm still getting that same error in the kuma-cp logs, and the Envoy containers are still failing their health checks.

michaelbeaumont commented 1 year ago

It looks like the secret is invalid; can you try recreating it, or ensure the data has the right format? On the secret, kuma.io/mesh: dev should in fact be a label. Something like:

echo "
apiVersion: v1
kind: Secret
metadata:
  name: dsr-cert
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
data:
  value: '$(kumactl generate tls-certificate --type=server --hostname=demo-service-rust --key-file=- --cert-file=- | base64 -w0)'
type: system.kuma.io/secret
" > secret.yaml

With the above secret I'm able to configure a TLS listener successfully.

conman2305 commented 1 year ago

Well, I don't have a reasoned explanation for it, but generating the cert data on my Ubuntu VM works, while doing the same on my macOS laptop doesn't.
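
One possible explanation (not confirmed in this thread) is base64 portability: GNU coreutils base64 wraps output at 76 columns unless -w0 is given, while the BSD base64 shipped with macOS may not accept the -w flag at all. A platform-independent alternative, sketched here assuming a POSIX shell, is to strip newlines with tr instead:

```shell
#!/bin/sh
# Portable single-line base64 encoding that works with both GNU and
# BSD base64: tr -d '\n' removes any line wrapping the encoder adds,
# so -w0 (GNU-only) is not needed.
b64_oneline() {
  base64 | tr -d '\n'
}

# Example: encode a string without line wrapping.
printf 'hello' | b64_oneline   # prints aGVsbG8=
```

The same pipeline could replace `base64 -w0` in the secret-generation command above.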

conman2305 commented 1 year ago

Thanks for all the help!