Converted the ccloudexporter Kubernetes files into a Helm chart and am running into a timeout issue. The deployment has these env vars set:
Seeing:
which tells me the env vars (api_key|secret) are valid but the request is timing out.
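Roughly what that env section looks like, as a sketch (assuming the standard ccloudexporter variable names and a Secret called ccloud-credentials; the names here are illustrative, not the exact values from the chart):

env:
  - name: CCLOUD_API_KEY
    valueFrom:
      secretKeyRef:
        name: ccloud-credentials   # hypothetical Secret holding the Cloud API key
        key: api-key
  - name: CCLOUD_API_SECRET
    valueFrom:
      secretKeyRef:
        name: ccloud-credentials
        key: api-secret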
Did a little test with a test pod:
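A sketch of what such a test pod could look like, inferred from the output below (the credentials are placeholders and the image choice is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-name
  namespace: grafana
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:7.85.0   # any image with curl works
      command:
        - curl
        - "-u"
        - "<api_key>:<api_secret>"
        - "https://api.telemetry.confluent.cloud/v2/metrics/cloud/descriptors/resources"

Appending -w '%{time_namelookup} %{time_connect} %{time_total}\n' to that curl command would also show whether the delay is in DNS resolution or in the TCP connect itself.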
and I see:

› kubectl logs -n grafana test-pod-name -f
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   590  100   590    0     0      2      0  0:04:55  0:03:35  0:01:20   131
{"data":[{"type":"kafka","description":"A Kafka cluster","labels":[{"description":"ID of the Kafka cluster","key":"kafka.id"}]},{"type":"connector","description":"A Kafka Connector","labels":[{"description":"ID of the connector","key":"connector.id"}]},{"type":"ksql","description":"A ksqlDB application","labels":[{"description":"ID of the ksqlDB application","key":"ksql.id"}]},{"type":"schema_registry","description":"A schema registry","labels":[{"description":"ID of the schema registry","key":"schema_registry.id"}]}],"meta":{"pagination":{"page_size":100,"total_size":4}},"links":{}}
and sometimes:

› kubectl logs -n grafana test-pod-name -f
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:04:17 --:--:--     0
curl: (28) Failed to connect to api.telemetry.confluent.cloud port 443: Connection timed out
Looks like it's taking too long and ends up timing out at times.
Any idea what could cause this in EKS?