pjediny opened 1 year ago
The `docker stats` command shows numbers for me. Maybe the command is slow on your machine and needs some time to change the numbers? If you still don't see non-zero values after waiting for some time, can you please share the full logs from your machine?
And the reason `kubectl top pod` doesn't work by default is that the metrics-server pod this command depends on is not in the `default` namespace but in the `kube-system` namespace. So, running the command with the `-A` flag works for me: `kubectl top pod -A`.
Here's the result of these two commands on my machine.
>docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
df75a52826b2 k8s_traefik_traefik-64f55bb67d-xbw9p_kube-system_2ae889e5-1ed6-4160-b6f2-4c5c3c642794_0 0.00% 47.82MiB / 24.83GiB 0.19% 0B / 0B 0B / 0B 22
47796445fc54 k8s_lb-tcp-443_svclb-traefik-33fc7298-rrk6m_kube-system_55bf3ff7-6571-45e6-a99e-e503b166fd97_0 0.00% 324KiB / 24.83GiB 0.00% 0B / 0B 0B / 0B 1
e60d581f1f88 k8s_POD_traefik-64f55bb67d-xbw9p_kube-system_2ae889e5-1ed6-4160-b6f2-4c5c3c642794_0 0.00% 300KiB / 24.83GiB 0.00% 0B / 0B 0B / 0B 1
04b332a38cab k8s_lb-tcp-80_svclb-traefik-33fc7298-rrk6m_kube-system_55bf3ff7-6571-45e6-a99e-e503b166fd97_0 0.00% 324KiB / 24.83GiB 0.00% 0B / 0B 0B / 0B 1
8e4e6a9916ea k8s_POD_svclb-traefik-33fc7298-rrk6m_kube-system_55bf3ff7-6571-45e6-a99e-e503b166fd97_0 0.00% 296KiB / 24.83GiB 0.00% 0B / 0B 0B / 0B 1
46ede831d143 k8s_metrics-server_metrics-server-648b5df564-7c96h_kube-system_58504364-988d-4abd-9260-2e5c799fce01_0 1.46% 36.68MiB / 24.83GiB 0.14% 0B / 0B 0B / 0B 22
32ee1f3df71e k8s_coredns_coredns-77ccd57875-cwtsk_kube-system_332d23dc-7311-40be-b520-0797c57e670c_0 0.42% 25.6MiB / 170MiB 15.06% 0B / 0B 0B / 0B 19
e520a4a3ec17 k8s_local-path-provisioner_local-path-provisioner-957fdf8bc-jjpqr_kube-system_0d43e11e-1cf1-436b-9fe3-948926dbf949_0 0.07% 17.29MiB / 24.83GiB 0.07% 0B / 0B 0B / 0B 17
9904992f2a78 k8s_POD_coredns-77ccd57875-cwtsk_kube-system_332d23dc-7311-40be-b520-0797c57e670c_0 0.00% 288KiB / 24.83GiB 0.00% 0B / 0B 0B / 0B 1
880824d63c84 k8s_POD_metrics-server-648b5df564-7c96h_kube-system_58504364-988d-4abd-9260-2e5c799fce01_0 0.00% 396KiB / 24.83GiB 0.00% 0B / 0B 0B / 0B 1
b094a907096f k8s_POD_local-path-provisioner-957fdf8bc-jjpqr_kube-system_0d43e11e-1cf1-436b-9fe3-948926dbf949_0 0.00% 288KiB / 24.83GiB 0.00% 0B / 0B 0B / 0B 1
>kubectl top pod -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-77ccd57875-cwtsk 4m 25Mi
kube-system local-path-provisioner-957fdf8bc-jjpqr 2m 17Mi
kube-system metrics-server-648b5df564-7c96h 8m 37Mi
kube-system svclb-traefik-33fc7298-rrk6m 0m 0Mi
kube-system traefik-64f55bb67d-xbw9p 1m 47Mi
@gunamata Just to be sure - are you running rd/k8s with moby on Windows? How would I know if `docker stats` is slow on my machine? The machine has 20 cores and 32 GB RAM, so it should handle it fine, and the `docker stats` output is responsive (~2 s to get the output; not great, but OK). WSL2 allocates 16 GB to the rancher-desktop VM.
$ kubectl top pod -A
error: Metrics not available for pod default/minio-696dc7797b-q82pl, age: 563h44m45.96479546s
$ docker stats --no-stream --no-trunc
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
459fc21967b12ed5c8d40c6aeb82c5497a46bdea50c6217b4f4cada4b6093b32 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
fd8681f1ad4ef2e8f268476e0c84f26355a720e5793f764f3d33edc1e5378abc k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
bde96aa1799cfdd52098b42d20af9c26e6c3dc9c65320e85fe22c6cdcce9f2c0 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
5138820fa54ce4745da0ece622c074a71cf550a55c6847851c54b32bab67f0f4 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
dd9c8dd221a20345890aa514a820cf4f83c5c0edd27eddd612af6e35cb98d1c6 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
99d2b66ab96b76a861022ae063ea137e84d07510ba1872460c5131ae01d04feb k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
3b827d07979e8c3e9a492daf86a1ee52bd200bc97e6812a742315e3c2f122b4d k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
49ca34e2a1fc363086c3c5bf6a4c3cc4737aec2adb3b6c65a3bc711c97dca24a k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
de38b990c1ced7a5ed4c1232155093039ceef931fa6eaedc32e70fd26975bfcd k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
344d20538d2c49b30f3555ab01d5a7361bc2fc8f076a254ab30a577974cd0e2c k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
0e9e951590cbc10da76d95aec3784899e7cba1c7611ccd31c218ef074a890dd8 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
e9a806f2e8021b8e59f60f3054c07c1e340f4432e4a3e1ebc7a04c66e191d93f k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
08d85af33d96178d29d9cc86a4a0cc461395b7831f40f8dcfeadc3922a0f56ed k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
d5ed229af257d98abe5478368ba180b578e8c656cd202426341259ecac7e3dc5 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
024ab7c00e5f09e2c7137e8aebe5f731078a2c062b8711258e6e36cf02fb83ad k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
bda13a09a1b0c4ba27ce2d5c16f507bc4a735d5bab0582123ed7500ecc4214c1 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
e26f066ba82856f182142668374bc0ad75852550517d0d7e576b24008698f5fb k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
d7eb0fc9546fe0333153493f3afa146d7180fc3369e48750895fc8eefd4ad97b k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
3b454c09f7b0ed00a59dff3907c1a15aa97d5b1f8aa682cc992a4cb31aaa8532 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
9d8887d0ff2323347fc171a8a7264b829126e868708e04180c6b21e68aa8af4d k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
14203fcc9d1aa8b7512a2e83dd493d556731ebbcd860d4eafc3e64f64956635a k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
ad98921ebce3a02f67948616c86c424903bfb1ffbcd8e98330dfe392ea5c488d k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
db8ddbdfc747b6e407c8a9a4c590397175a821da8a6be2ec495244cc7d4dfbf2 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
5eba3c523212b28d8957c77fa47aabe86eca2796365b3b94ca98ccf8b94b6b42 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
11903c8e7c6f2360c62e0ebd5646a2eabfa40c9bfddfc29104fa5d3fb382e6bd k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
d310c771534046495aababe78774ec30a552e8677658aa18c33c0e0b47ab0050 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
c353adca8c7f22173e9afe5a0ecf3c2b5a3f1c08efe675d713c06980eaa76c2e k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
6872d2a5b3abc88a7fa24dcbc617a9c34e29c5e8bfaf07e5c4c1ee0ebffcf5ee k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
c9f255efe4de8dccce77f9447bbed9892d9b3c776f513c7ca9b893433bf12d80 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
e2473f05988ac79e81650d8008354e942b9e12149f065f45bde059d1fe50d37a k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
be9931259924b3744bfee89318f917f5cfbaf5cb6261f45f24657e8fde71d06b k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
9eadcf5137b35acd1f5d98cf51c84de2ff1c65279f462ef9d8ccc2b65398fb31 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
d5ca91ff43c7aa50242e81104dec7ddcf545bcfdb52991cf31a74751f67a777c k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
5578578627f69cb89727ae01026a01353edee36e28a5cd4b257e751d0eacb49d k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
425058ab2eac0f8502d72b02fd224bfcf7790bad89428917e3efae7f0277791d k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
60d71fb5ada21e42c1518b6598f08d44155628b3528d793481f2483452b9b494 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
a89971795d8947d884621081da5f8b4aad83ed053e2dfbf16534ba31f87b7c2d k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
a66d48ad12ebcc5148afda9c6b7f1863e08d32bbbb24e167ae02601c022b668f k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
6bd2d77504c664af7f2c528ad23343840f28d144417bbf6e48382975a4bb9ddc k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
3c6f16ce4efa31da8ac2ac8b2350f28eb88d72163bca7b0022526e733083021b k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
e0471e413d6fd577118cc3c71eb8ea9c2a76eb2e7ae151e3cb9eb0ea67d1ac1a k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
9de53c8355e38d5b0edbf91182fcc406a3ac067665e3f947c573688720eb5b54 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
46116eccc00670cb17b184cdbb2d4d4164adfffb712e650a45a7507a80fc94ed k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
6b545af047c1281efee586ce745c3293392f670d3fc52c2e77f5b46a783cfb7f k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
e749cb5be34d767876413b308c3aceb360919ef602c2d411870d7f148f8f2508 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
72367cbae68ccec207aec9f2356b8707f716191a2228c639fb95e6ac83c52065 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
639bb6f9007687255b5812b01d4f3aabb9cb1d22a10c912844552f84c0f14071 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
71120ccbb343b6adef0401e3cdfebc180ddde9f905d250d6b7daaa504d482199 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
6c5696c0283f688149d4da3b36c7a8d99f23870bf0989068a2a9cb0f9125b68f k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
822486fcd5eace99bcd7bcdd7c59fe575dd5479aa69d5e23787d9fb6976b67a4 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
bf2c6c2799a6bf2253a017a2c7faf1582dce9eb357927a5acf59c5a1bec29469 k8s_redacted_name 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
Btw I have tried to work around the problem by running this inside the rancher-desktop VM:
mkdir /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
and it kind of starts working, but it only accounts for some containers, not sure why. Of course I need to redo this after every rancher-desktop restart.
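Since the two commands have to be re-applied after every restart, here is a hedged sketch (assuming a POSIX shell and the `mountpoint` utility are available inside the rancher-desktop distro, and that you run it as root; the function name and messages are illustrative) that makes the workaround idempotent so it can be dropped into whatever startup hook you use:

```shell
#!/bin/sh
# Sketch only: re-apply the cgroup v1 "name=systemd" hierarchy if it is
# missing. Safe to run repeatedly; does nothing when already mounted.
ensure_systemd_cgroup() {
    dir=/sys/fs/cgroup/systemd
    if mountpoint -q "$dir" 2>/dev/null; then
        echo "already mounted"
        return 0
    fi
    mkdir -p "$dir" 2>/dev/null || { echo "cannot create $dir"; return 1; }
    if mount -t cgroup -o none,name=systemd cgroup "$dir" 2>/dev/null; then
        echo "mounted"
    else
        echo "mount failed (need root?)"
        return 1
    fi
}

ensure_systemd_cgroup || true
```

The two commands from the comment above do the actual work; the guard only avoids double-mounting on repeated runs.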
@pjediny , yes, I am using Rancher Desktop with `moby` as the container engine. Looking at your machine specs, it doesn't look like this can be a performance problem.
Would you be able to share the full application logs from your session? On Windows, you can find the logs at %USERPROFILE%/AppData/Local/rancher-desktop/logs
Also, as you indicated you are on a VPN, it might be worth trying the experimental networking tunnel to see if it helps. You can enable this via Preferences > WSL > Network.
I am having this same issue, except I am not running Kubernetes. I am using the `moby` engine. We are transitioning from Docker Desktop to Rancher Desktop. So far only 8 developers have switched, but of those 8, 3 are experiencing this issue. I opened a similar issue #5246, but it was marked as a duplicate of this one and closed.
@xiphoid24 , would you be able to share your configuration (output of the command `rdctl list-settings`) and the Rancher Desktop application logs, please? Please enable debug mode by checking Enable debug mode on the Troubleshooting page. You can open the folder containing the logs by clicking Show Logs on the Troubleshooting page.
@gunamata sure thing.
`rdctl list-settings` output:
{
  "version": 8,
  "application": {
    "adminAccess": false,
    "debug": true,
    "extensions": {
      "allowed": {
        "enabled": false,
        "list": []
      }
    },
    "pathManagementStrategy": "manual",
    "telemetry": {
      "enabled": false
    },
    "updater": {
      "enabled": false
    },
    "autoStart": false,
    "startInBackground": false,
    "hideNotificationIcon": false,
    "window": {
      "quitOnClose": false
    }
  },
  "containerEngine": {
    "allowedImages": {
      "enabled": false,
      "patterns": []
    },
    "name": "moby"
  },
  "virtualMachine": {
    "memoryInGB": 0,
    "numberCPUs": 2,
    "hostResolver": true
  },
  "WSL": {
    "integrations": {
      "Ubuntu": true
    }
  },
  "kubernetes": {
    "version": "1.27.3",
    "port": 6443,
    "enabled": false,
    "options": {
      "traefik": false,
      "flannel": true
    },
    "ingress": {
      "localhostOnly": false
    }
  },
  "portForwarding": {
    "includeKubernetesServices": false
  },
  "images": {
    "showAll": true,
    "namespace": "k8s.io"
  },
  "diagnostics": {
    "showMuted": false,
    "mutedChecks": {}
  },
  "extensions": {
    "docker/resource-usage-extension": "1.0.3",
    "docker/disk-usage-extension": "0.2.7"
  },
  "experimental": {
    "virtualMachine": {
      "type": "qemu",
      "useRosetta": false,
      "socketVMNet": false,
      "mount": {
        "type": "reverse-sshfs",
        "9p": {
          "securityModel": "none",
          "protocolVersion": "9p2000.L",
          "msizeInKib": 128,
          "cacheMode": "mmap"
        }
      },
      "networkingTunnel": false,
      "proxy": {
        "enabled": false,
        "address": "",
        "password": "",
        "port": 3128,
        "username": ""
      }
    }
  }
}
Here are my log files. I assume you wanted all of them:
extensions.log host-resolver-host.log host-resolver-peer.log images.log background.log dashboardServer.log deploymentProfile.log diagnostics.log docker.log mock.log nerdctl.log networking.log protocol-handler.log integrations.log k8s.log kube.log lima.log moby.log steve.log update.log vtunnel-host.log vtunnel-peer.log rancher-desktop-guestagent.log server.log settings.log shortcuts.log wsl-exec.log wsl-helper.log wsl-helper.Ubuntu.log wsl-init.log window_browser.log window_renderer.log wsl.log
Any update on this issue?
Any update?
I also just ran into this issue. Has anyone found a solution yet? I am already on the latest Rancher Desktop version (1.11.1), but I'm not 100% sure all my other tools are updated as well.
I'm on version 1.12.3 (latest at this moment) and I also have the problem with `docker stats` being zeroed out. Setting the network tunnel on or off made no difference. `docker stats` is zeroed whether you call it from Windows or from a WSL distro.
Btw I have tried to workaround the problem by using this inside of rancher-desktop vm:
mkdir /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
and it kind of starts working, but it only accounts some containers, not sure why. Of course I need to redo this every rancher-desktop restart.
~I just tried that, and got the same result. I'm running 4 containers and I only see stats for one.~
After restarting the containers I'm now seeing stats for all of them. So this is an effective workaround.
Another update regarding mounting /sys/fs/cgroup/systemd:
You don't get BLOCK I/O stats (you do get NET I/O).
You have to recreate it all after a reboot, which is really annoying.
So we have a partial workaround. Does anyone know how to get BLOCK I/O stats?
Same issue here, and I would like to avoid the workaround (and hence altering my environment).
Hello, I am also experiencing the same issue. I am on a team of several people with the exact same hardware/Windows. Half of us have the issue, while the other half don't. I investigated several leads, but with no success. Here are my observations:
`docker stats` and `kubectl top pods` are impacted
/sys/fs/cgroup/memory/kubepods/[burstable|besteffort]/pod[id]/memory.usage_in_bytes
That is it. I hope it helps!
Actual Behavior
`kubectl top pod` shows `error: Metrics not available`
`docker stats` shows all stats zero for every container
Steps to Reproduce
just run:
`kubectl top pod` or `docker stats`
Result
`kubectl top pod` shows `error: Metrics not available`
`docker stats` shows all stats zero for every container
Expected Behavior
Some nice values
Additional Information
Minikube on Ubuntu works. On the containerd engine it returns statistics for `kubectl top pod`, but I cannot use containerd for other reasons. HorizontalPodAutoscaler is not working because of this. docker.log shows error-level messages:
collecting stats for <id>: no metrics received
loading cgroup for <number>
error=cgroups: cannot find cgroup mount destination
It looks like it might work on cgroup v2 (minikube on Ubuntu works), but the rancher-desktop VM is on cgroup v1, and there might be some incompatibility between a v1 hierarchy and recent containerd/moby, but this is just my speculation.
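The cgroup-version part of this speculation is easy to confirm; assuming a Linux environment (e.g. opened via `wsl -d rancher-desktop` on Windows), the filesystem type of /sys/fs/cgroup tells you which hierarchy is active:

```shell
# cgroup2fs => unified cgroup v2 hierarchy; tmpfs (with per-controller cgroup
# mounts beneath it) => the classic cgroup v1 layout described in this issue.
stat -fc %T /sys/fs/cgroup/
```

On a v2 host this prints `cgroup2fs`; a v1 host like the rancher-desktop VM described here should print `tmpfs`.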
Rancher Desktop Version
1.9.1
Rancher Desktop K8s Version
1.27.3
Which container engine are you using?
moby (docker cli)
What operating system are you using?
Windows
Operating System / Build Version
Windows 10 Enterprise 21H2
What CPU architecture are you using?
x64
Linux only: what package format did you use to install Rancher Desktop?
None
Windows User Only
vpn: Check Point Endpoint Security