thatsmydoing opened this issue 1 year ago
This issue is currently awaiting triage.
SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the `triage/accepted` label. The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
@thatsmydoing Can you provide more information about what is happening? For example, can you run your command with `-v9` to see what's going on with the retries?
/triage needs-information
Here's the output:

```
% kubectl -v9 --request-timeout=1s get pods
I0202 13:25:58.246099 4034 loader.go:373] Config loaded from file: /home/thomas/.kube/config
I0202 13:25:58.247150 4034 round_trippers.go:466] curl -v -XGET -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.26.1 (linux/amd64) kubernetes/8f94681" 'https://privatecluster/api?timeout=1s'
I0202 13:25:58.486471 4034 round_trippers.go:495] HTTP Trace: DNS Lookup for privatecluster resolved to [{10.0.0.100 }]
I0202 13:25:59.247588 4034 round_trippers.go:553] GET https://privatecluster/api?timeout=1s in 1000 milliseconds
I0202 13:25:59.247697 4034 round_trippers.go:570] HTTP Statistics: DNSLookup 195 ms Dial 0 ms TLSHandshake 0 ms Duration 1000 ms
I0202 13:25:59.247750 4034 round_trippers.go:577] Response Headers:
I0202 13:25:59.247730 4034 round_trippers.go:508] HTTP Trace: Dial to tcp:10.0.0.100:443 failed: dial tcp 10.0.0.100:443: i/o timeout
E0202 13:25:59.247897 4034 memcache.go:238] couldn't get current server API group list: Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:25:59.247940 4034 cached_discovery.go:120] skipped caching discovery info due to Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:25:59.248216 4034 round_trippers.go:466] curl -v -XGET -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.26.1 (linux/amd64) kubernetes/8f94681" 'https://privatecluster/api?timeout=1s'
I0202 13:25:59.294454 4034 round_trippers.go:495] HTTP Trace: DNS Lookup for privatecluster resolved to [{10.0.0.100 }]
I0202 13:26:00.248814 4034 round_trippers.go:553] GET https://privatecluster/api?timeout=1s in 1000 milliseconds
I0202 13:26:00.248821 4034 round_trippers.go:508] HTTP Trace: Dial to tcp:10.0.0.100:443 failed: dial tcp 10.0.0.100:443: i/o timeout
I0202 13:26:00.248887 4034 round_trippers.go:570] HTTP Statistics: DNSLookup 46 ms Dial 954 ms TLSHandshake 0 ms Duration 1000 ms
I0202 13:26:00.248941 4034 round_trippers.go:577] Response Headers:
E0202 13:26:00.249071 4034 memcache.go:238] couldn't get current server API group list: Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:00.249112 4034 cached_discovery.go:120] skipped caching discovery info due to Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:00.249172 4034 shortcut.go:100] Error loading discovery information: Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:00.249493 4034 round_trippers.go:466] curl -v -XGET -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.26.1 (linux/amd64) kubernetes/8f94681" 'https://privatecluster/api?timeout=1s'
I0202 13:26:00.267282 4034 round_trippers.go:495] HTTP Trace: DNS Lookup for privatecluster resolved to [{10.0.0.100 }]
I0202 13:26:01.249705 4034 round_trippers.go:553] GET https://privatecluster/api?timeout=1s in 1000 milliseconds
I0202 13:26:01.249810 4034 round_trippers.go:570] HTTP Statistics: DNSLookup 17 ms Dial 0 ms TLSHandshake 0 ms Duration 1000 ms
I0202 13:26:01.249824 4034 round_trippers.go:508] HTTP Trace: Dial to tcp:10.0.0.100:443 failed: dial tcp 10.0.0.100:443: i/o timeout
I0202 13:26:01.249843 4034 round_trippers.go:577] Response Headers:
E0202 13:26:01.249988 4034 memcache.go:238] couldn't get current server API group list: Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:01.250021 4034 cached_discovery.go:120] skipped caching discovery info due to Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:01.250316 4034 round_trippers.go:466] curl -v -XGET -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" -H "User-Agent: kubectl/v1.26.1 (linux/amd64) kubernetes/8f94681" 'https://privatecluster/api?timeout=1s'
I0202 13:26:01.293927 4034 round_trippers.go:495] HTTP Trace: DNS Lookup for privatecluster resolved to [{10.0.0.100 }]
I0202 13:26:02.251233 4034 round_trippers.go:553] GET https://privatecluster/api?timeout=1s in 1000 milliseconds
I0202 13:26:02.251382 4034 round_trippers.go:508] HTTP Trace: Dial to tcp:10.0.0.100:443 failed: dial tcp 10.0.0.100:443: i/o timeout
I0202 13:26:02.251410 4034 round_trippers.go:570] HTTP Statistics: DNSLookup 43 ms Dial 0 ms TLSHandshake 0 ms Duration 1000 ms
I0202 13:26:02.251487 4034 round_trippers.go:577] Response Headers:
E0202 13:26:02.251634 4034 memcache.go:238] couldn't get current server API group list: Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:02.251680 4034 cached_discovery.go:120] skipped caching discovery info due to Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:02.252100 4034 round_trippers.go:466] curl -v -XGET -H "User-Agent: kubectl/v1.26.1 (linux/amd64) kubernetes/8f94681" -H "Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json" 'https://privatecluster/api?timeout=1s'
I0202 13:26:02.302131 4034 round_trippers.go:495] HTTP Trace: DNS Lookup for privatecluster resolved to [{10.0.0.100 }]
I0202 13:26:03.253063 4034 round_trippers.go:553] GET https://privatecluster/api?timeout=1s in 1000 milliseconds
I0202 13:26:03.253155 4034 round_trippers.go:508] HTTP Trace: Dial to tcp:10.0.0.100:443 failed: dial tcp 10.0.0.100:443: i/o timeout
I0202 13:26:03.253216 4034 round_trippers.go:570] HTTP Statistics: DNSLookup 49 ms Dial 950 ms TLSHandshake 0 ms Duration 1000 ms
I0202 13:26:03.253260 4034 round_trippers.go:577] Response Headers:
E0202 13:26:03.253355 4034 memcache.go:238] couldn't get current server API group list: Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:03.253381 4034 cached_discovery.go:120] skipped caching discovery info due to Get "https://privatecluster/api?timeout=1s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0202 13:26:03.253466 4034 helpers.go:264] Connection error: Get https://privatecluster/api?timeout=1s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
kubectl -v9 --request-timeout=… took 5.061s
```
For context, our cluster is on a private network, so you have to be connected to a VPN to access it. The DNS record for it is public, though. This happens if you try to use kubectl outside the VPN: DNS resolves, the TCP dial times out, and kubectl retries the discovery request, so a 1s `--request-timeout` ends up taking about 5 seconds overall (five attempts in the log above).
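To reproduce the per-request behavior outside of kubectl, here's a minimal client-go sketch (the kubeconfig path and the 1s timeout are assumptions mirroring the report above; this illustrates the behavior and is not kubectl's exact code path):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig, the same file kubectl reads.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// Equivalent to --request-timeout=1s: this bounds each HTTP request,
	// not the operation as a whole, so retried requests each get a fresh 1s.
	config.Timeout = 1 * time.Second

	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	_, _, err = dc.ServerGroupsAndResources()
	// Against an unreachable server, the elapsed time can be a multiple of
	// the 1s timeout because the discovery path retries failed requests.
	fmt.Printf("discovery took %v, err: %v\n", time.Since(start), err)
}
```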
/assign @seans3
We will investigate further and discuss at the biweekly sig-cli meeting.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
I checked this again, and I only seem to get the issue on Linux, not on macOS.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hey @seans3, did you create a PR for this issue? If not, I'd like to work on it :)
What would you like to be added:
Allow disabling request retries.
Why is this needed:
When using `--request-timeout`, the timeout applies to each individual request and not to the command as a whole. If a context is unreachable, commands like `kubectl exec` (which don't retry) time out after `--request-timeout`. However, commands like `kubectl get` retry the request, so they time out much later: each retry has to hit the timeout before the command finishes.
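For illustration, client-go already exposes a per-request knob of the kind this asks for: `rest.Request.MaxRetries(0)` disables retries for a single request, so the call stays bounded by one request timeout. A minimal sketch below (the namespace and kubeconfig path are assumptions; kubectl's discovery retries happen at a different layer, the discovery client, so this does not by itself fix `kubectl get`):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Per-request timeout, like --request-timeout=1s.
	config.Timeout = 1 * time.Second

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// MaxRetries(0) tells client-go not to retry this particular request,
	// so the total wall time stays within the single request timeout.
	result := clientset.CoreV1().RESTClient().
		Get().
		Resource("pods").
		Namespace("default").
		MaxRetries(0).
		Do(context.TODO())
	fmt.Println("error:", result.Error())
}
```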