solo-io / gloo

The Feature-rich, Kubernetes-native, Next-Generation API Gateway Built on Envoy
https://docs.solo.io/
Apache License 2.0

Gloo does not work in k3s (gateway does not start) #3949

Closed · aku closed this issue 3 years ago

aku commented 3 years ago

**Describe the bug**
Gloo does not work correctly in k3s (https://k3s.io/)

**To Reproduce**
Steps to reproduce the behavior:

  1. install k3s
  2. install gloo using glooctl or helm chart
  3. notice that gateway does not start properly. UI reports "Could not find gloo proxy resource for gateway-proxy gateway-proxy"
    
    kubectl get po -n gloo-system -l "gloo=gateway"

NAME                       READY   STATUS    RESTARTS   AGE
gateway-7bc858f4dc-czcfg   0/1     Running   0          12m
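For reference, the repro steps above can be sketched end to end as a shell session (a minimal sketch: it assumes glooctl is already on the PATH and uses the standard single-node k3s install script):

```shell
# 1. Install k3s (single-node) via the standard install script
curl -sfL https://get.k3s.io | sh -

# Point kubectl/glooctl at the k3s cluster
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# 2. Install Gloo Edge with the admin console (same command used later in this thread)
glooctl install gateway --with-admin-console

# 3. Watch the gateway pod -- it stays 0/1 Ready
kubectl get po -n gloo-system -l "gloo=gateway" --watch
```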

kubectl logs -n gloo-system gateway-7bc858f4dc-czcfg

{"level":"info","ts":1607021497.2892132,"logger":"gateway.v1.event_loop","caller":"v1/setup_event_loop.sk.go:57","msg":"event loop started","version":"1.5.12"}
{"level":"info","ts":1607021497.4214463,"logger":"gateway.v1.event_loop.gateway","caller":"validation/robust_client.go:34","msg":"starting proxy validation client... this may take a moment","version":"1.5.12","validation_server":"gloo:9988"}


**Expected behavior**
Gloo proxy starts correctly

**Additional context**
- Gloo Edge version v1.5.12 
- Kubernetes version v1.19.4+k3s1
Sodman commented 3 years ago

Hi @aku, I've run other Gloo Edge 1.5.x versions with k3s before and didn't encounter any issues, so I would expect this to work.

Can you run glooctl check and let us know whether it reports any errors?

If that doesn't provide any insights, can you run a kubectl describe on the pod and let us know if there's anything interesting in the Events?
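The checks suggested above can be run in one pass; a sketch (the deployment and label names are taken from the output earlier in this thread):

```shell
# Overall health check of the Gloo Edge installation
glooctl check

# Inspect the failing pod by label; the Events section at the bottom
# is the interesting part
kubectl describe pod -n gloo-system -l "gloo=gateway"

# Tail the gateway logs for validation/startup errors
kubectl logs -n gloo-system deploy/gateway --tail=50
```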

aku commented 3 years ago
glooctl install gateway --with-admin-console
Creating namespace gloo-system... Done.
Starting Gloo Edge installation...

Gloo Edge Enterprise was successfully installed!

glooctl check
Checking deployments... Deployment gateway in namespace gloo-system is not available! Message: Deployment does not have minimum availability.
Problems detected!
kubectl describe pod gateway-7bc858f4dc-v6zh9 -n gloo-system
Name:         gateway-7bc858f4dc-v6zh9
Namespace:    gloo-system
Priority:     0
Node:         n2.home.cloud/192.168.1.41
Start Time:   Thu, 03 Dec 2020 22:23:15 +0300
Labels:       gloo=gateway
              pod-template-hash=7bc858f4dc
Annotations:  prometheus.io/path: /metrics
              prometheus.io/port: 9091
              prometheus.io/scrape: true
Status:       Running
IP:           10.42.2.33
IPs:
  IP:           10.42.2.33
Controlled By:  ReplicaSet/gateway-7bc858f4dc
Containers:
  gateway:
    Container ID:   containerd://1c9006854114f3dbfda12831a90706fce8651aa6941c1ec8da83b1581c01c3c0
    Image:          quay.io/solo-io/gateway:1.5.12
    Image ID:       quay.io/solo-io/gateway@sha256:ee14aa7adfc92de08f840a62c8cc38a5595072825335c8d3a80d9813d51b67cd
    Port:           8443/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 03 Dec 2020 22:23:17 +0300
    Ready:          False
    Restart Count:  0
    Readiness:      tcp-socket :8443 delay=1s timeout=1s period=2s #success=1 #failure=10
    Environment:
      POD_NAMESPACE:          gloo-system (v1:metadata.namespace)
      START_STATS_SERVER:     true
      VALIDATION_MUST_START:  true
    Mounts:
      /etc/gateway/validation-certs from validation-certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from gateway-token-f5rsq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  validation-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gateway-validation-certs
    Optional:    false
  gateway-token-f5rsq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gateway-token-f5rsq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    43s                default-scheduler  Successfully assigned gloo-system/gateway-7bc858f4dc-v6zh9 to n2.home.cloud
  Warning  FailedMount  43s                kubelet            MountVolume.SetUp failed for volume "validation-certs" : failed to sync secret cache: timed out waiting for the condition
  Warning  FailedMount  43s                kubelet            MountVolume.SetUp failed for volume "gateway-token-f5rsq" : failed to sync secret cache: timed out waiting for the condition
  Normal   Pulled       42s                kubelet            Container image "quay.io/solo-io/gateway:1.5.12" already present on machine
  Normal   Created      42s                kubelet            Created container gateway
  Normal   Started      42s                kubelet            Started container gateway
  Warning  Unhealthy    2s (x20 over 40s)  kubelet            Readiness probe failed: dial tcp 10.42.2.33:8443: connect: connection refused
Sodman commented 3 years ago

Are any of the other pods starting up successfully, or is it just the gateway pod?

kubectl get pods -n gloo-system

Can you post logs/describe from the gateway-proxy pod?

aku commented 3 years ago

Other pods are fine

kubectl get po -n gloo-system
NAME                             READY   STATUS    RESTARTS   AGE
svclb-gateway-proxy-vln2h        2/2     Running   0          8m56s
svclb-gateway-proxy-tx2mf        2/2     Running   0          8m56s
discovery-b7cbd656-ndqf7         1/1     Running   0          8m56s
gateway-proxy-6c8775bb4c-f2dlw   1/1     Running   0          8m56s
gateway-7bc858f4dc-v6zh9         0/1     Running   0          8m56s
svclb-gateway-proxy-mrwmv        2/2     Running   0          8m56s
gloo-69dbd5bc7d-w8hl9            1/1     Running   0          8m56s
api-server-75bb986cc-b6bwz       3/3     Running   0          8m56s
kubectl logs -n gloo-system gateway-7bc858f4dc-v6zh9
{"level":"info","ts":1607023397.5895777,"logger":"gateway.v1.event_loop","caller":"v1/setup_event_loop.sk.go:57","msg":"event loop started","version":"1.5.12"}
{"level":"info","ts":1607023397.7344706,"logger":"gateway.v1.event_loop.gateway","caller":"validation/robust_client.go:34","msg":"starting proxy validation client... this may take a moment","version":"1.5.12","validation_server":"gloo:9988"}
aku commented 3 years ago
kubectl logs -n gloo-system gateway-proxy-6c8775bb4c-f2dlw
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:303] initializing epoch 0 (base id=0, hot restart version=disabled)
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:305] statically linked extensions:
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.thrift_proxy.transports: auto, framed, header, unframed
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.dubbo_proxy.protocols: dubbo
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.udp_listeners: raw_udp_listener
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.upstreams: envoy.filters.connection_pools.http.generic, envoy.filters.connection_pools.http.http, envoy.filters.connection_pools.http.tcp
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.resolvers: envoy.ip
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.dubbo_proxy.route_matchers: default
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.bootstrap: envoy.extensions.network.socket_interface.default_socket_interface
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.lua, envoy.filters.http.oauth, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash, io.solo.aws_lambda, io.solo.nats_streaming, io.solo.transformation
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.http.cache: envoy.extensions.http.cache.simple
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.tcp_grpc, envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.statsd
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.guarddog_actions: envoy.watchdog.profile_action
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.compression.decompressor: envoy.compression.gzip.decompressor
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.health_checkers: envoy.health_checkers.redis
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.compression.compressor: envoy.compression.gzip.compressor
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.udp_packet_writers: udp_default_writer
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2020-12-03 19:23:17.309][7][info][main] [external/envoy/source/server/server.cc:307]   envoy.dubbo_proxy.serializers: dubbo.hessian2
[2020-12-03 19:23:17.314][7][info][main] [external/envoy/source/server/server.cc:323] HTTP header map info:
[2020-12-03 19:23:17.316][7][info][main] [external/envoy/source/server/server.cc:326]   request header map: 544 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authorization,cache-control,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
[2020-12-03 19:23:17.316][7][info][main] [external/envoy/source/server/server.cc:326]   request trailer map: 72 bytes:
[2020-12-03 19:23:17.316][7][info][main] [external/envoy/source/server/server.cc:326]   response header map: 368 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2020-12-03 19:23:17.316][7][info][main] [external/envoy/source/server/server.cc:326]   response trailer map: 96 bytes: grpc-message,grpc-status
[2020-12-03 19:23:17.317][7][info][main] [external/envoy/source/server/server.cc:446] admin address: 127.0.0.1:19000
[2020-12-03 19:23:17.317][7][info][main] [external/envoy/source/server/server.cc:581] runtime: layers:
  - name: static_layer
    static_layer:
      overload:
        global_downstream_max_connections: 250000
  - name: admin_layer
    admin_layer:
      {}
[2020-12-03 19:23:17.317][7][info][config] [external/envoy/source/server/configuration_impl.cc:95] loading tracing configuration
[2020-12-03 19:23:17.317][7][info][config] [external/envoy/source/server/configuration_impl.cc:70] loading 0 static secret(s)
[2020-12-03 19:23:17.317][7][info][config] [external/envoy/source/server/configuration_impl.cc:76] loading 4 cluster(s)
[2020-12-03 19:23:17.320][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, no healthy upstream
[2020-12-03 19:23:17.320][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:63] Unable to establish new stream
[2020-12-03 19:23:17.320][7][info][config] [external/envoy/source/server/configuration_impl.cc:80] loading 1 listener(s)
[2020-12-03 19:23:17.322][7][info][config] [external/envoy/source/server/configuration_impl.cc:121] loading stats sink configuration
[2020-12-03 19:23:17.323][7][info][main] [external/envoy/source/server/server.cc:677] starting main dispatch loop
[2020-12-03 19:23:17.752][7][info][runtime] [external/envoy/source/common/runtime/runtime_impl.cc:417] RTDS has finished initialization
[2020-12-03 19:23:17.752][7][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:174] cm init: initializing cds
[2020-12-03 19:23:17.875][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure
[2020-12-03 19:23:18.803][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure
[2020-12-03 19:23:26.561][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:23:32.753][7][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:178] cm init: all clusters initialized
[2020-12-03 19:23:32.753][7][info][main] [external/envoy/source/server/server.cc:658] all clusters initialized. initializing init manager
[2020-12-03 19:23:34.727][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:23:46.652][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:23:47.754][7][info][config] [external/envoy/source/server/listener_manager_impl.cc:888] all dependencies initialized. starting workers
[2020-12-03 19:24:08.160][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:24:23.273][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:24:28.535][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:24:59.194][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:25:12.637][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:25:25.034][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:25:57.608][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:26:22.345][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:26:28.980][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:26:39.602][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:26:48.386][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:27:01.996][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:27:22.457][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:27:41.931][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:28:13.991][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:28:43.783][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:29:13.024][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:29:27.467][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:29:57.535][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:30:16.225][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:30:28.287][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:30:57.338][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:31:31.724][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:31:59.038][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
[2020-12-03 19:32:26.490][7][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: local reset
Sodman commented 3 years ago

Looking again at your original describe, it looks like the mounts failed/timed out in the gateway pod for some reason:

  Warning  FailedMount  43s                kubelet            MountVolume.SetUp failed for volume "validation-certs" : failed to sync secret cache: timed out waiting for the condition
  Warning  FailedMount  43s                kubelet            MountVolume.SetUp failed for volume "gateway-token-f5rsq" : failed to sync secret cache: timed out waiting for the condition

Can you confirm that the secrets were created?

kubectl get secrets -n gloo-system | grep gateway

You should have a gateway-validation-certs secret and a gateway-token-f5rsq secret.

Or, if it really just timed out, can you try bouncing the gateway pod?

kubectl delete pod -n gloo-system gateway-7bc858f4dc-v6zh9

aku commented 3 years ago

@Sodman Looks like there is an issue with volumes on this pod too:

kubectl describe po -n gloo-system gateway-proxy-6c8775bb4c-f2dlw
Name:         gateway-proxy-6c8775bb4c-f2dlw
Namespace:    gloo-system
Priority:     0
Node:         n2.home.cloud/192.168.1.41
Start Time:   Thu, 03 Dec 2020 22:23:15 +0300
Labels:       gateway-proxy=live
              gateway-proxy-id=gateway-proxy
              gloo=gateway-proxy
              pod-template-hash=6c8775bb4c
Annotations:  prometheus.io/path: /metrics
              prometheus.io/port: 8081
              prometheus.io/scrape: true
Status:       Running
IP:           10.42.2.32
IPs:
  IP:           10.42.2.32
Controlled By:  ReplicaSet/gateway-proxy-6c8775bb4c
Containers:
  gateway-proxy:
    Container ID:  containerd://953f2168f08fdcb6bbc1933e6d638576279becd610ebfa409de9f241e2e1c37b
    Image:         quay.io/solo-io/gloo-envoy-wrapper:1.5.12
    Image ID:      quay.io/solo-io/gloo-envoy-wrapper@sha256:ff1a53f6dcd9cc8b530b7cba123a1f1e6ff3f1d254127958d7235774ce95a72e
    Ports:         8080/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      --disable-hot-restart
    State:          Running
      Started:      Thu, 03 Dec 2020 22:23:17 +0300
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  gloo-system (v1:metadata.namespace)
      POD_NAME:       gateway-proxy-6c8775bb4c-f2dlw (v1:metadata.name)
    Mounts:
      /etc/envoy from envoy-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from gateway-proxy-token-2xpsq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  envoy-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      gateway-proxy-envoy-config
    Optional:  false
  gateway-proxy-token-2xpsq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gateway-proxy-token-2xpsq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    18m   default-scheduler  Successfully assigned gloo-system/gateway-proxy-6c8775bb4c-f2dlw to n2.home.cloud
  Warning  FailedMount  18m   kubelet            MountVolume.SetUp failed for volume "envoy-config" : failed to sync configmap cache: timed out waiting for the condition
  Warning  FailedMount  18m   kubelet            MountVolume.SetUp failed for volume "gateway-proxy-token-2xpsq" : failed to sync secret cache: timed out waiting for the condition
  Normal   Pulled       18m   kubelet            Container image "quay.io/solo-io/gloo-envoy-wrapper:1.5.12" already present on machine
  Normal   Created      18m   kubelet            Created container gateway-proxy
  Normal   Started      18m   kubelet            Started container gateway-proxy
aku commented 3 years ago

@Sodman secrets are there

kubectl get secrets -n gloo-system | grep gateway
gateway-validation-certs     kubernetes.io/tls                     3      23m
gateway-token-f5rsq          kubernetes.io/service-account-token   3      23m
gateway-proxy-token-2xpsq    kubernetes.io/service-account-token   3      23m

I've killed the gateway and gateway-proxy pods. Now Gloo reports that everything is fine. However, Envoy's status is still red in the UI dashboard, with the message "Could not find gloo proxy resource for gateway-proxy gateway-proxy".

I've tried to deploy the hello-world example. I can see the upstream and virtual service, but when I try to connect, the request times out.

curl $(glooctl proxy url)/all-pets
curl: (7) Failed to connect to 192.168.1.40 port 80: Operation timed out

It seems like Envoy is not working correctly.

glooctl check
Checking deployments... OK
Checking pods... OK
Checking upstreams... OK
Checking upstream groups... OK
Checking auth configs... OK
Checking rate limit configs... OK
Checking secrets... OK
Checking virtual services... OK
Checking gateways... OK
Checking proxies... OK
No problems detected.
k get po -n gloo-system
NAME                             READY   STATUS    RESTARTS   AGE
svclb-gateway-proxy-vln2h        2/2     Running   0          32m
svclb-gateway-proxy-tx2mf        2/2     Running   0          32m
discovery-b7cbd656-ndqf7         1/1     Running   0          32m
svclb-gateway-proxy-mrwmv        2/2     Running   0          32m
api-server-75bb986cc-b6bwz       3/3     Running   0          32m
gateway-proxy-6c8775bb4c-n77rf   1/1     Running   0          6m57s
gloo-69dbd5bc7d-fljw5            1/1     Running   0          6m57s
gateway-7bc858f4dc-4j597         1/1     Running   0          7m56s
k logs -n gloo-system gateway-7bc858f4dc-4j597
{"level":"info","ts":1607024889.0534337,"logger":"gateway.v1.event_loop","caller":"v1/setup_event_loop.sk.go:57","msg":"event loop started","version":"1.5.12"}
{"level":"info","ts":1607024889.1699393,"logger":"gateway.v1.event_loop.gateway","caller":"validation/robust_client.go:34","msg":"starting proxy validation client... this may take a moment","version":"1.5.12","validation_server":"gloo:9988"}
{"level":"info","ts":1607024954.5061285,"logger":"gateway.v1.event_loop.gateway.validation-resync-notifications","caller":"validation/notification_channel.go:23","msg":"starting notification channel","version":"1.5.12"}
{"level":"info","ts":1607024954.5060897,"logger":"gateway.v1.event_loop.gateway.v1.event_loop","caller":"v1/api_event_loop.sk.go:57","msg":"event loop started","version":"1.5.12"}
{"level":"info","ts":1607024954.609073,"logger":"gateway.v1.event_loop.gateway.v1.event_loop.translatorSyncer","caller":"syncer/translator_syncer.go:67","msg":"begin sync 8575555701832387767 (0 virtual services, 2 gateways, 0 route tables)","version":"1.5.12"}
{"level":"info","ts":1607024954.6091943,"logger":"gateway.v1.event_loop.gateway.v1.event_loop.translatorSyncer","caller":"syncer/translator_syncer.go:79","msg":"end sync 8575555701832387767","version":"1.5.12"}
{"level":"info","ts":1607024954.6098301,"logger":"gateway.v1.event_loop.gateway","caller":"syncer/setup_syncer.go:314","msg":"starting gateway validation server","version":"1.5.12","port":8443,"cert":"/etc/gateway/validation-certs/tls.crt","key":"/etc/gateway/validation-certs/tls.key"}
k logs -n gloo-system gateway-proxy-6c8775bb4c-n77rf
[2020-12-03 19:49:07.810][8][info][main] [external/envoy/source/server/server.cc:303] initializing epoch 0 (base id=0, hot restart version=disabled)
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:305] statically linked extensions:
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.dubbo_proxy.route_matchers: default
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.resolvers: envoy.ip
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.lua, envoy.filters.http.oauth, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash, io.solo.aws_lambda, io.solo.nats_streaming, io.solo.transformation
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.compression.decompressor: envoy.compression.gzip.decompressor
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.thrift_proxy.transports: auto, framed, header, unframed
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.http.cache: envoy.extensions.http.cache.simple
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.tcp_grpc, envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.compression.compressor: envoy.compression.gzip.compressor
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.statsd
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.upstreams: envoy.filters.connection_pools.http.generic, envoy.filters.connection_pools.http.http, envoy.filters.connection_pools.http.tcp
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.dubbo_proxy.serializers: dubbo.hessian2
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.bootstrap: envoy.extensions.network.socket_interface.default_socket_interface
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.dubbo_proxy.protocols: dubbo
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.udp_packet_writers: udp_default_writer
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.health_checkers: envoy.health_checkers.redis
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.udp_listeners: raw_udp_listener
[2020-12-03 19:49:07.811][8][info][main] [external/envoy/source/server/server.cc:307]   envoy.guarddog_actions: envoy.watchdog.profile_action
[2020-12-03 19:49:07.816][8][info][main] [external/envoy/source/server/server.cc:323] HTTP header map info:
[2020-12-03 19:49:07.817][8][info][main] [external/envoy/source/server/server.cc:326]   request header map: 544 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authorization,cache-control,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
[2020-12-03 19:49:07.817][8][info][main] [external/envoy/source/server/server.cc:326]   request trailer map: 72 bytes:
[2020-12-03 19:49:07.817][8][info][main] [external/envoy/source/server/server.cc:326]   response header map: 368 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2020-12-03 19:49:07.817][8][info][main] [external/envoy/source/server/server.cc:326]   response trailer map: 96 bytes: grpc-message,grpc-status
[2020-12-03 19:49:07.818][8][info][main] [external/envoy/source/server/server.cc:446] admin address: 127.0.0.1:19000
[2020-12-03 19:49:07.818][8][info][main] [external/envoy/source/server/server.cc:581] runtime: layers:
  - name: static_layer
    static_layer:
      overload:
        global_downstream_max_connections: 250000
  - name: admin_layer
    admin_layer:
      {}
[2020-12-03 19:49:07.819][8][info][config] [external/envoy/source/server/configuration_impl.cc:95] loading tracing configuration
[2020-12-03 19:49:07.819][8][info][config] [external/envoy/source/server/configuration_impl.cc:70] loading 0 static secret(s)
[2020-12-03 19:49:07.819][8][info][config] [external/envoy/source/server/configuration_impl.cc:76] loading 4 cluster(s)
[2020-12-03 19:49:07.821][8][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, no healthy upstream
[2020-12-03 19:49:07.821][8][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:63] Unable to establish new stream
[2020-12-03 19:49:07.821][8][info][config] [external/envoy/source/server/configuration_impl.cc:80] loading 1 listener(s)
[2020-12-03 19:49:07.824][8][info][config] [external/envoy/source/server/configuration_impl.cc:121] loading stats sink configuration
[2020-12-03 19:49:07.824][8][info][main] [external/envoy/source/server/server.cc:677] starting main dispatch loop
[2020-12-03 19:49:07.835][8][info][runtime] [external/envoy/source/common/runtime/runtime_impl.cc:417] RTDS has finished initialization
[2020-12-03 19:49:07.835][8][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:174] cm init: initializing cds
[2020-12-03 19:49:07.855][8][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure
[2020-12-03 19:49:09.168][8][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:101] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure
[2020-12-03 19:49:16.335][8][info][upstream] [external/envoy/source/common/upstream/cds_api_impl.cc:64] cds: add 0 cluster(s), remove 4 cluster(s)
[2020-12-03 19:49:16.335][8][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:178] cm init: all clusters initialized
[2020-12-03 19:49:16.335][8][info][main] [external/envoy/source/server/server.cc:658] all clusters initialized. initializing init manager
[2020-12-03 19:49:16.337][8][info][config] [external/envoy/source/server/listener_manager_impl.cc:888] all dependencies initialized. starting workers
Sodman commented 3 years ago

With regards to the UI, it's controlled by the api-server pod, so bouncing that pod may help reset it if it's somehow in a bad state from earlier.
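A minimal way to bounce it (a sketch; the Deployment name `api-server` is inferred from the pod listing above and may differ in your install):

```shell
# Restart the UI's api-server Deployment; Kubernetes recreates its pods.
# Deployment name inferred from the pod "api-server-75bb986cc-b6bwz" above.
kubectl rollout restart -n gloo-system deployment/api-server
```

`kubectl rollout restart` needs kubectl v1.15+; on older clusters, deleting the pod directly has the same effect.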

In terms of why the curl isn't working, my best guess is a networking issue. What platform are you running on (AWS/GCP/local, etc.)?

To quickly check if it's a networking thing, you can port-forward to the gateway-proxy pod directly and then curl that:

# Port forward
kubectl port-forward -n gloo-system gateway-proxy-6c8775bb4c-n77rf 8080:8080

# Curl locally forwarded port
curl localhost:8080/all-pets

I'm also happy to help you debug this in our slack, might be quicker turnaround - https://slack.solo.io/
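If port-forwarding works but the LoadBalancer path doesn't, one more place to look (a sketch, not an official Gloo procedure): the Envoy logs above show the admin interface listening on 127.0.0.1:19000, so you can port-forward to it and check whether the proxy believes it is connected to the control plane:

```shell
# Forward the Envoy admin port from the gateway-proxy Deployment.
kubectl port-forward -n gloo-system deploy/gateway-proxy 19000:19000 &
sleep 2
# control_plane.connected_state should be 1; non-zero update_failure
# counters would mean Envoy is not receiving config from gloo.
curl -s localhost:19000/stats | grep -E 'control_plane|update_failure'
kill %1
```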

aku commented 3 years ago

@Sodman it's quite late here; I will take another look at the issue tomorrow. Thanks for your assistance. Could you try installing Gloo into a fresh k3s cluster yourself? I've tried to install Gloo multiple times, and each time I hit this issue with volumes. I'm running k3s on top of ESXi VMs with Fedora CoreOS. I also tried a cluster created with k3d on my MacBook.

Sodman commented 3 years ago

I just tried with these steps on my Mac and had no issues:

# Install k3d:
brew install k3d

# Create a cluster in docker:
k3d cluster create

# Install Gloo with read-only UI:
glooctl install gateway --with-admin-console

# Add hello-world pets example service + route:
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo/master/example/petstore/petstore.yaml

glooctl add route --path-exact /all-pets --dest-name default-petstore-8080  --prefix-rewrite /api/pets

# Convenience method to expose dashboard locally. This should also pop a browser at http://localhost:8080/overview/
glooctl dashboard

# Port forward requests to localhost:8081 to gateway-proxy pod on 8080 (http port):
kubectl port-forward -n gloo-system gateway-proxy-6c8775bb4c-lm6gk 8081:8080

# Check that the routing to our pets service is working:
curl localhost:8081/all-pets
# Should have a response of: 
# [{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

For this exercise I also made sure to use glooctl version 1.5.12. The only notable thing that went "wrong" for me in this setup is that it didn't have permission to expose the default ports 80/443 for the gateway-proxy LoadBalancer service. These can be changed to ports above 1024 (or the service can be run with elevated permissions) and it should work fine, although you'll also need to expose the ports on the Docker container k3d runs in; see https://k3d.io/usage/guides/exposing_services/ for more details.

As an aside, if you're just looking to play around with Gloo Edge locally without setting up all of k3s and deploying a live system, I'd recommend checking out kind - we use it for quickly spinning up and down local k8s clusters all running in docker.
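A minimal kind equivalent of the k3d flow above (kind's standard CLI; the cluster name is arbitrary):

```shell
# Spin up a throwaway local cluster in Docker, install Gloo, then tear it down.
kind create cluster --name gloo-test
glooctl install gateway --with-admin-console
# ...test your routes...
kind delete cluster --name gloo-test
```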

aku commented 3 years ago

@Sodman

I've followed your steps and managed to run Gloo in a fresh cluster created via k3d on my dev machine. However, I still have the same problem on my main k3s cluster. I will try reinstalling everything from scratch. We can probably close this issue as not reproducible.

Thank you for your help!

Sodman commented 3 years ago

Glad we could help! I'll close out this issue.

murphye commented 3 years ago

Here is my solution, which disables Traefik while also leveraging port forwarding in k3d itself. This also works on Mac.

k3d cluster create --k3s-server-arg '--no-deploy=traefik' -p "8081:80@loadbalancer"
glooctl install gateway --with-admin-console
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo/master/example/petstore/petstore.yaml
glooctl add route --path-exact /all-pets --dest-name default-petstore-8080  --prefix-rewrite /api/pets
curl localhost:8081/all-pets
oskapt commented 3 years ago

@Sodman

However, I have the same problem with my main k3s cluster. I will try to reinstall everything from scratch.

I'm running Gloo in K3s, and I also work for Rancher. If you're still having issues, ping me on the Rancher Users Slack - I'm @adrian.