vmware / photon

Minimal Linux container host
https://vmware.github.io/photon

Photon 3.0 cgroup exhaustion. VM shuts down with CPU disabled #1535

Open jkevinezz opened 10 months ago

jkevinezz commented 10 months ago

Describe the bug

We use Photon 3.0 as the OS for our Tanzu K8s nodes. Randomly we see a cgroup exhaustion error, then the VM reboots, and the vCenter events show that the CPU has been disabled.

Reproduction steps

Haven't been able to reproduce manually; it just happens randomly.

Expected behavior

How do we find out what's causing the cgroup exhaustion, which in turn causes the Photon kernel to disable the CPU and reboot?

Additional context

No response

dcasota commented 10 months ago

Hi, as a volunteer here, have you tried to open an SR at https://www.vmware.com/go/customerconnect ? See the SAM offering. For TKG, you could collect all log files in reference to kb90319. Also, have a look at the VMware Tanzu Compliance Product Documentation. It could be a subcomponent bug and/or a resource limitation without burst possibility, but without logs and compliance status that's only a guess. Hope this helps.

jkevinezz commented 10 months ago

Yes, we opened multiple cases with VMware support with the Tanzu team, and they have stated that the cgroup memory exhaustions are from the Photon kernel and that we should open a bug with the Photon OS team, which is why we opened this bug report. We have had over 8 cases.

Please let us know which logs you need from Photon OS and how to get them, and we can provide those logs from the Photon 3 based VM which we use as a Tanzu node.

Thx Julius

dcasota commented 10 months ago

orchestrating 8 cases++ /cc @Vasavisirnapalli

jaankit commented 10 months ago

@jkevinezz ,

Which kernel version are you using? Do you see cgroup.memory=nokmem in cat /proc/cmdline? Could you please share the kernel logs?

Thanks.

jkevinezz commented 10 months ago

Could you please tell me how to gather kernel logs from Photon 3.0?


jkevinezz commented 10 months ago

Here is a log snippet we saved from one of the Photon 3.0 Tanzu node random reboots.

When you say kernel logs, you just want the VM logs, right?

Thx Julius


prashant1221 commented 10 months ago

I cannot see any log snippet. Check the kernel version via uname -a; the kernel logs are the output of the dmesg command. Also run cat /proc/cmdline to check whether the cgroup.memory=nokmem parameter is present. We suspect it may be an older kernel issue which was fixed by https://github.com/vmware/photon/commit/1c4e9360cc516c9e9a086b441c9b4df63df3449a
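The cmdline check above can be scripted; a minimal sketch (the helper name and the sample cmdline string are illustrative; on a live node you would pass "$(cat /proc/cmdline)"):

```shell
# has_nokmem: report whether a kernel command line contains the
# cgroup.memory=nokmem boot parameter.
has_nokmem() {
  printf '%s\n' "$1" | grep -qw 'cgroup.memory=nokmem'
}

# Illustrative cmdline (modeled on typical Photon 3.0 node boot options):
cmdline='BOOT_IMAGE=/boot/vmlinuz-4.19.189-5.ph3 ro quiet cgroup.memory=nokmem net.ifnames=0'
if has_nokmem "$cmdline"; then
  echo "cgroup.memory=nokmem is set"
else
  echo "cgroup.memory=nokmem is NOT set"
fi
```

On a real node, replace the sample string with has_nokmem "$(cat /proc/cmdline)".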

jkevinezz commented 10 months ago

Thank you

We run the same Photon 3.0 in 3 different datacenters and it only seems to be happening in 1 datacenter, but we will check what you have posted and get back.

Thank you all



jkevinezz commented 10 months ago

root [ /home/capv ]# uname -a
Linux ts-sharedplatform-ash-prod-md1-7cd78d79b9-w2gmm 4.19.189-5.ph3 #1-photon SMP Thu May 13 16:00:29 UTC 2021 x86_64 GNU/Linux

root [ /home/capv ]# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.19.189-5.ph3 root=PARTUUID=aac1ba00-26c4-414e-9662-611408371055 init=/lib/systemd/systemd ro loglevel=3 quiet no-vmw-sta cgroup.memory=nokmem net.ifnames=0 plymouth.enable=0 systemd.legacy_systemd_cgroup_controller=yes



dcasota commented 10 months ago

@prashant1221 what about the 'kernel panic' fix https://github.com/vmware/photon/commit/f029de15b4453daa80fb4edd4b81b0a9eb021f96, in correlation to the 'node random reboot', and which fixes are eligible for the 3 datacenters? Here is a patch-filtering attempt using keywords.
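Such a keyword-based patch filter could be sketched like this (the function name and the sample subject lines are illustrative, not real commits; in practice you would pipe in git log --oneline output from a vmware/photon clone):

```shell
# filter_keywords: keep only lines mentioning panic/cgroup/oom,
# case-insensitively. Intended for commit subjects, e.g.:
#   git log --oneline -- SPECS/linux/ | filter_keywords
filter_keywords() {
  grep -Ei 'panic|cgroup|oom'
}

# Illustrative subject lines (hypothetical, for demonstration only):
printf '%s\n' \
  'aaaaaaa linux: fix kernel panic in mem cgroup OOM path' \
  'bbbbbbb curl: version bump' |
filter_keywords
```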

vbrahmajosyula1 commented 10 months ago

Also, can you please share the output of slabtop -sc --once on the nodes which experience this issue often?
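To make such a dump easier to compare across nodes, the rows can be ranked by cache size; a sketch (the function name is illustrative, and the sample rows stand in for real slabtop -sc --once output):

```shell
# top_caches: given slabtop rows (OBJS ACTIVE USE OBJ-SIZE SLABS
# OBJ/SLAB CACHE-SIZE NAME), print the N largest caches as "KiB name".
top_caches() {
  awk '{ sub(/K$/, "", $7); print $7, $8 }' | sort -rn | head -n "$1"
}

# Illustrative rows:
printf '%s\n' \
  '670320 670192 99% 1.05K 223440 3 893760K ext4_inode_cache' \
  '2282826 2279920 99% 0.19K 108706 21 434824K dentry' \
  '10 1 10% 2048.00K 10 1 20480K kmalloc-2097152' |
top_caches 2
```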

dcasota commented 10 months ago

@jkevinezz fyi

According to the Photon OS – Planned End of Support Schedule, an upgrade from Photon OS 3 is recommended.

Patching and updating as a continuous action has been addressed over the last years by introducing a number of improvements.

The current docs provide a short description of the upgrade process, which is very easy btw.

In-place migrations including FIPS mode, BIOS->UEFI, kernel, docker, kubernetes, etc. afaik were never drilled down systematically for the docs of the open-source version of Photon OS. My bad, the doc attempt here was somewhat insufficient and no other attempts have been made since then. As every software is a continuous pulse of additions and deletions, yes, there were a few issues as well, e.g. 1244, 1226, 1234, 1420.

Having said that, populating a migration path solution together with your VMware Support Account Manager should be considered.

As soon as the content of your Tanzu K8S nodes is sort of SBOM'ified and populated for the migration, planning it into the maintenance schedule gets easy.

jkevinezz commented 10 months ago

Yes, we are in the process, but it takes time; we are a huge environment, so we need to understand what's happening in Photon 3.0.



jkevinezz commented 10 months ago

root [ /home/capv ]# slabtop -sc --once
Active / Total Objects (% used) : 17171204 / 17313766 (99.2%)
Active / Total Slabs (% used) : 914338 / 914629 (100.0%)
Active / Total Caches (% used) : 107 / 135 (79.3%)
Active / Total Size (% used) : 3345421.32K / 3394139.04K (98.6%)
Minimum / Average / Maximum Object : 0.02K / 0.20K / 4096.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
670320 670192 99% 1.05K 223440 3 893760K ext4_inode_cache
8664357 8578099 99% 0.10K 222163 39 888652K buffer_head
2282826 2279920 99% 0.19K 108706 21 434824K dentry
322629 322627 99% 1.05K 107543 3 430172K nfs_inode_cache
264138 262238 99% 0.56K 37734 7 150936K radix_tree_node
126452 125853 99% 1.00K 31613 4 126452K kmalloc-1024
166170 166059 99% 0.58K 27695 6 110780K inode_cache
137094 137016 99% 0.66K 22849 6 91396K ovl_inode
164224 163864 99% 0.50K 20528 8 82112K kmalloc-512
283360 281753 99% 0.25K 17710 16 70840K skbuff_head_cache
370923 370789 99% 0.19K 17663 21 70652K kmalloc-192
879616 872108 99% 0.06K 13744 64 54976K kmalloc-64
1319967 1311072 99% 0.04K 13333 99 53332K ext4_extent_status
2436 2370 97% 10.94K 2436 1 38976K task_struct
283840 282911 99% 0.12K 8870 32 35480K kmalloc-128
540351 538860 99% 0.06K 8577 63 34308K dmaengine-unmap-2
182144 181804 99% 0.12K 5692 32 22768K kernfs_node_cache
164064 157068 95% 0.12K 5127 32 20508K kmalloc-96
10 1 10% 2048.00K 10 1 20480K kmalloc-2097152
16734 16529 98% 0.65K 2789 6 11156K proc_inode_cache
272304 269634 99% 0.03K 2196 124 8784K kmalloc-32
2077 2020 97% 4.00K 2077 1 8308K kmalloc-4096
28368 28137 99% 0.25K 1773 16 7092K kmalloc-256
6 2 33% 1024.00K 6 1 6144K kmalloc-1048576
3036 2906 95% 2.00K 1518 2 6072K kmalloc-2048
29421 29299 99% 0.19K 1401 21 5604K cred_jar
6534 6322 96% 0.68K 594 11 4752K shmem_inode_cache
7 1 14% 512.00K 7 1 3584K kmalloc-524288
12736 8444 66% 0.25K 796 16 3184K filp
6180 5891 95% 0.38K 618 10 2472K mnt_cache
3264 3181 97% 0.62K 544 6 2176K sock_inode_cache
233 222 95% 8.00K 233 1 1864K kmalloc-8192
57 24 42% 32.00K 57 1 1824K kmalloc-32768
648 622 95% 2.25K 216 3 1728K TCPv6
6 2 33% 256.00K 6 1 1536K kmalloc-262144
6032 5061 83% 0.25K 377 16 1508K nf_conntrack
22848 22448 98% 0.06K 357 64 1428K anon_vma_chain
17700 17449 98% 0.08K 354 50 1416K anon_vma
531 523 98% 2.06K 177 3 1416K sighand_cache
10 1 10% 128.00K 10 1 1280K kmalloc-131072
158 115 72% 8.00K 158 1 1264K biovec-max
992 960 96% 0.94K 248 4 992K RAW
7584 5319 70% 0.12K 237 32 948K pid
351 337 96% 2.12K 117 3 936K TCP
805 745 92% 1.06K 115 7 920K signal_cache
56 41 73% 16.00K 56 1 896K kmalloc-16384
3472 1400 40% 0.25K 217 16 868K pool_workqueue
11 9 81% 64.00K 11 1 704K kmalloc-65536
609 601 98% 1.12K 87 7 696K RAWv6
168 168 100% 4.00K 168 1 672K names_cache
3465 3365 97% 0.19K 165 21 660K proc_dir_entry
536 457 85% 1.00K 134 4 536K UNIX
627 576 91% 0.69K 57 11 456K files_cache
2268 1821 80% 0.19K 108 21 432K dmaengine-unmap-16
5432 4842 89% 0.07K 97 56 388K Acpi-Operand
380 380 100% 1.00K 95 4 380K mm_struct
1547 1233 79% 0.23K 91 17 364K tw_sock_TCPv6
2408 2400 99% 0.14K 86 28 344K ext4_groupinfo_4k
632 540 85% 0.50K 79 8 316K skbuff_fclone_cache
2048 1746 85% 0.12K 64 32 256K secpath_cache
29 29 100% 5.75K 29 1 232K net_namespace
4482 4282 95% 0.05K 54 83 216K ftrace_event_field
2856 2431 85% 0.07K 51 56 204K eventpoll_pwq
867 764 88% 0.23K 51 17 204K tw_sock_TCP
1472 766 52% 0.12K 46 32 184K scsi_sense_cache
2070 1924 92% 0.09K 45 46 180K trace_event_file
4257 3858 90% 0.04K 43 99 172K Acpi-Namespace
216 141 65% 0.62K 36 6 144K task_group
850 590 69% 0.16K 34 25 136K sigqueue
276 146 52% 0.32K 23 12 92K taskstats
1173 1078 91% 0.08K 23 51 92K inotify_inode_mark
33 26 78% 2.40K 11 3 88K request_queue
40 34 85% 2.00K 20 2 80K biovec-128
57 30 52% 1.25K 19 3 76K UDPv6
663 406 61% 0.10K 17 39 68K blkdev_ioc
1008 775 76% 0.06K 16 63 64K fs_cache
60 30 50% 0.81K 15 4 60K bdev_cache
195 163 83% 0.29K 15 13 60K request_sock_TCP
224 135 60% 0.25K 14 16 56K dquot
192 134 69% 0.25K 12 16 48K kmem_cache
781 496 63% 0.05K 11 71 44K Acpi-Parse
396 174 43% 0.11K 11 36 44K jbd2_journal_head
40 37 92% 0.94K 10 4 40K mqueue_inode_cache
10 4 40% 4.00K 10 1 40K sgpool-128
1467 1147 78% 0.02K 9 163 36K fsnotify_mark_connector
152 72 47% 0.20K 8 19 32K ip4-frags
32 32 100% 0.88K 8 4 32K nfs_read_data
693 449 64% 0.04K 7 99 28K pde_opener
140 9 6% 0.20K 7 20 28K file_lock_cache
448 132 29% 0.06K 7 64 28K ext4_io_end
45 29 64% 0.43K 5 9 20K uts_namespace
20 13 65% 1.00K 5 4 20K biovec-64
415 269 64% 0.05K 5 83 20K jbd2_journal_handle
48 32 66% 0.31K 4 12 16K xfrm_dst_cache
96 71 73% 0.12K 3 32 12K ext4_allocation_context
30 15 50% 0.26K 2 15 8K numa_policy
7 1 14% 1.06K 1 7 8K dmaengine-unmap-128
3 1 33% 2.06K 1 3 8K dmaengine-unmap-256
12 3 25% 0.60K 2 6 8K hugetlbfs_inode_cache
142 59 41% 0.05K 2 71 8K mbcache
248 2 0% 0.03K 2 124 8K xfs_ifork
7 1 14% 1.12K 1 7 8K PINGv6
11 4 36% 0.69K 1 11 8K nfs_commit_data
51 27 52% 0.08K 1 51 4K Acpi-State
5 1 20% 0.75K 1 5 4K dax_cache
240 2 0% 0.02K 1 240 4K jbd2_revoke_table_s
5 3 60% 0.71K 1 5 4K fat_inode_cache
0 0 0% 4096.00K 0 1 0K kmalloc-4194304
0 0 0% 0.11K 0 36 0K iint_cache
0 0 0% 0.45K 0 8 0K user_namespace
0 0 0% 0.30K 0 13 0K blkdev_requests
0 0 0% 0.94K 0 4 0K PING
0 0 0% 0.75K 0 5 0K xfrm_state
0 0 0% 0.23K 0 17 0K posix_timers_cache
0 0 0% 0.03K 0 124 0K dnotify_struct
0 0 0% 0.80K 0 5 0K ext2_inode_cache
0 0 0% 0.03K 0 124 0K jbd2_revoke_record_s
0 0 0% 0.62K 0 6 0K isofs_inode_cache
0 0 0% 0.73K 0 5 0K udf_inode_cache
0 0 0% 0.18K 0 22 0K xfs_log_ticket
0 0 0% 0.22K 0 18 0K xfs_btree_cur
0 0 0% 0.47K 0 8 0K xfs_da_state
0 0 0% 0.27K 0 15 0K xfs_buf_item
0 0 0% 0.43K 0 9 0K xfs_efd_item
0 0 0% 0.94K 0 4 0K xfs_inode
0 0 0% 0.17K 0 23 0K xfs_rud_item
0 0 0% 0.68K 0 11 0K xfs_rui_item
0 0 0% 0.21K 0 18 0K xfs_bui_item
0 0 0% 0.49K 0 8 0K xfs_dquot
0 0 0% 0.52K 0 7 0K xfs_dqtrx
0 0 0% 0.12K 0 34 0K cfq_io_cq
0 0 0% 0.29K 0 13 0K request_sock_TCPv6
0 0 0% 0.03K 0 124 0K fat_cache
0 0 0% 0.62K 0 6 0K rpc_inode_cache
0 0 0% 0.35K 0 11 0K nfs_direct_cache
root [ /home/capv ]#



jkevinezz commented 10 months ago

vmware.log excerpt (2023-12-04T10:49:30Z, vcpu-5, guest kernel messages):

<4>[39261021.091603] Call Trace:
<4>[39261021.091611] dump_stack+0x6d/0x8b
<4>[39261021.091614] dump_header+0x6c/0x282
<4>[39261021.091619] oom_kill_process+0x243/0x270
<4>[39261021.091620] out_of_memory+0x100/0x4e0
<4>[39261021.091624] mem_cgroup_out_of_memory+0xa4/0xc0
<4>[39261021.091626] try_charge+0x700/0x740
<4>[39261021.091628] ? __alloc_pages_nodemask+0xdc/0x250
<4>[39261021.091630] mem_cgroup_try_charge+0x86/0x190
<4>[39261021.091632] mem_cgroup_try_charge_delay+0x1d/0x40
<4>[39261021.091636] __handle_mm_fault+0x823/0xee0
<4>[39261021.091639] ? __switch_to_asm+0x35/0x70
<4>[39261021.091640] handle_mm_fault+0xde/0x240
<4>[39261021.091643] __do_page_fault+0x226/0x4b0
<4>[39261021.091644] do_page_fault+0x2d/0xf0
<4>[39261021.091646] ? page_fault+0x8/0x30
<4>[39261021.091646] page_fault+0x1e/0x30
<4>[39261021.091648] RIP: 0033:0x1321050
<4>[39261021.091650] Code: 31 f6 41 ba 7f 00 00 00 41 bb fd ff ff ff 0f b6 10 44 0f b6 65 df 48 83 c0 01 84 d2 78 3f 41 80 fc 0c 75 39 66 0f 1f 44 00 00 <66> 89 13 48 83 c3 02 49 39 c1 77 d8 0f 1f 40 00 48 8d 7d df e8 c7
<4>[39261021.091650] RSP: 002b:00007fffffa40470 EFLAGS: 00010246
<4>[39261021.091651] RAX: 00007f13068e5787 RBX: 00001e4cc8552000 RCX: 0000000000000001
<4>[39261021.091652] RDX: 0000000000000022 RSI: 0000000000000000 RDI: 000000000000003f
<4>[39261021.091652] RBP: 00007fffffa404a0 R08: 000000000000000c R09: 00007f1306e15e1d
<4>[39261021.091653] R10: 000000000000007f R11: 00000000fffffffd R12: 000000000000000c
<4>[39261021.091653] R13: 00007f130686be5a R14: 0000000000c78e0d R15: 00007f130619d010
<6>[39261021.091654] Task in /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/9cc2d291785856a8b81704d96ca165405e2722f0d18e1a43a8154f457b0cbe18 killed as a result of limit of /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f
<6>[39261021.091660] memory: usage 614400kB, limit 614400kB, failcnt 81016
<6>[39261021.091660] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0
<6>[39261021.091661] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
<6>[39261021.091661] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39261021.091666] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/2a6484f8f136d41bf17b6925821421216e8b9aa524aff498a5efa1e7ac037f9c: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39261021.091670] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/9cc2d291785856a8b81704d96ca165405e2722f0d18e1a43a8154f457b0cbe18: cache:0KB rss:613540KB rss_huge:108544KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614184KB inactive_file:4KB active_file:0KB unevictable:0KB
<6>[39261021.091673] Tasks state (memory values in pages):
<6>[39261021.091673] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
<6>[39261021.091875] [ 23437] 0 23437 242 1 28672 0 -998 pause
<6>[39261021.091920] [ 29498] 1000 29498 226154 29849 1282048 0 999 npm start
<6>[39261021.091921] [ 29859] 1000 29859 620 17 45056 0 999 sh
<6>[39261021.091922] [ 29860] 1000 29860 5590533 137614 7958528 0 999 node
<3>[39261021.091938] Memory cgroup out of memory: Kill process 29860 (node) score 1903 or sacrifice child
<3>[39261021.092001] Killed process 29860 (node) total-vm:22362132kB, anon-rss:517572kB, file-rss:32884kB, shmem-rss:0kB
<6>[39261021.113862] oom_reaper: reaped process 29860 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
<4>[39261111.880124] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
<6>[39261111.880126] node cpuset=64ecfc841af390d50da31f6be0a840d979b1a0178f2a9b1ddb9676162b3654ad mems_allowed=0-1
<4>[39261111.880132] CPU: 10 PID: 30411 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon
<4>[39261111.880133] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
<4>[39261111.880134] Call Trace:
<4>[39261111.880143] dump_stack+0x6d/0x8b
<4>[39261111.880150] dump_header+0x6c/0x282
<4>[39261111.880156] oom_kill_process+0x243/0x270
<4>[39261111.880160] out_of_memory+0x100/0x4e0
<4>[39261111.880165] mem_cgroup_out_of_memory+0xa4/0xc0
<4>[39261111.880167] try_charge+0x700/0x740
<4>[39261111.880170] ? __alloc_pages_nodemask+0xdc/0x250
<4>[39261111.880173] mem_cgroup_try_charge+0x86/0x190
<4>[39261111.880175] mem_cgroup_try_charge_delay+0x1d/0x40
<4>[39261111.880179] __handle_mm_fault+0x823/0xee0
<4>[39261111.880183] ? __switch_to_asm+0x35/0x70
<4>[39261111.880186] handle_mm_fault+0xde/0x240
<4>[39261111.880190] __do_page_fault+0x226/0x4b0
<4>[39261111.880191] do_page_fault+0x2d/0xf0
<4>[39261111.880194] ? page_fault+0x8/0x30
<4>[39261111.880195] page_fault+0x1e/0x30
<4>[39261111.880197] RIP: 0033:0x7f078f501b97
<4>[39261111.880199] Code: 48 39 f7 72 17 74 25 4c 8d 0c 16 4c 39 cf 0f 82 2a 02 00 00 48 89 f9 48 29 f1 eb 06 48 89 f1 48 29 f9 83 f9 3f 76 7b 48 89 d1 <f3> a4 c3 80 fa 10 73 17 80 fa 08 73 2f 80 fa 04 73 3b 80 fa 01 77
<4>[39261111.880200] RSP: 002b:00007fffb88bba68 EFLAGS: 00010292
<4>[39261111.880201] RAX: 00003ffda93690f8 RBX: 0000000000004000 RCX: 0000000000008000
<4>[39261111.880202] RDX: 0000000000008000 RSI: 0000058cc3501140 RDI: 00003ffda93690f8
<4>[39261111.880202] RBP: 00007fffb88bbad0 R08: 00007fffb88bbb50 R09: 0000058cc3509140
<4>[39261111.880204] R10: 0000058cc3501131 R11: 00003ffda93690f8 R12: 0000000000000000
<4>[39261111.880205] R13: 00003ffda93690f8 R14: 00007fffb88bbb50 R15: 0000000000004000
<6>[39261111.880340] Task in /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147/64ecfc841af390d50da31f6be0a840d979b1a0178f2a9b1ddb9676162b3654ad killed as a result of limit of /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147
<6>[39261111.880347] memory: usage 614400kB, limit 614400kB, failcnt 58907
<6>[39261111.880348] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0
<6>[39261111.880349] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
<6>[39261111.880350] Memory cgroup stats for /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39261111.880357] Memory cgroup stats for /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147/568ee417d5ffa86f21097fe3884f68c86423420004e79a573d2f93595cfcb2b3: cache:0KB rss:24KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:36KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39261111.880362] Memory cgroup stats for /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147/64ecfc841af390d50da31f6be0a840d979b1a0178f2a9b1ddb9676162b3654ad: cache:0KB rss:613564KB rss_huge:184320KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614292KB inactive_file:8KB active_file:4KB unevictable:0KB
<6>[39261111.880367] Tasks state (memory values in pages):
<6>[39261111.880367] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
<6>[39261111.880767] [ 22759] 0 22759 242 1 28672 0 -998 pause
<6>[39261111.880871] [ 30332] 1000 30332 233831 37207 1380352 0 999 npm start
<6>[39261111.880873] [ 30410] 1000 30410 620 41 45056 0 999 sh
<6>[39261111.880875] [ 30411] 1000 30411 5585492 130593 7983104 0 999 node
<3>[39261111.880892] Memory cgroup out of memory: Kill process 30411 (node) score 1858 or sacrifice child
<3>[39261111.880992] Killed process 30411 (node) total-vm:22341968kB, anon-rss:488444kB, file-rss:33928kB, shmem-rss:0kB
<6>[39261111.895994] oom_reaper: reaped process 30411 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
<4>[39261133.295332] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
<6>[39261133.295334] node cpuset=56d2af57b29f622846fe1efe76aae4d8c4b48c507c95851962d5c47b870c1981 mems_allowed=0-1
<4>[39261133.295339] CPU: 14 PID: 26868 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon
<4>[39261133.295340] Hardware name: VMware, Inc.
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018 2023-12-04T10:49:30.108Z In(05) vcpu-5 - Guest: <4>[39261133.295340] Call Trace: 2023-12-04T10:49:30.108Z In(05) vcpu-5 - Guest: <4>[39261133.295348] dump_stack+0x6d/0x8b 2023-12-04T10:49:30.108Z In(05) vcpu-5 - Guest: <4>[39261133.295352] dump_header+0x6c/0x282 2023-12-04T10:49:30.108Z In(05) vcpu-5 - Guest: <4>[39261133.295357] oom_kill_process+0x243/0x270 2023-12-04T10:49:30.108Z In(05) vcpu-5 - Guest: <4>[39261133.295358] out_of_memory+0x100/0x4e0 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295362] mem_cgroup_out_of_memory+0xa4/0xc0 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295364] try_charge+0x700/0x740 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295367] ? __alloc_pages_nodemask+0xdc/0x250 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295369] mem_cgroup_try_charge+0x86/0x190 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295371] mem_cgroup_try_charge_delay+0x1d/0x40 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295375] handle_mm_fault+0x823/0xee0 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295377] handle_mm_fault+0xde/0x240 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295380] do_page_fault+0x226/0x4b0 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295381] do_page_fault+0x2d/0xf0 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295385] ? 
page_fault+0x8/0x30 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295386] page_fault+0x1e/0x30 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295388] RIP: 0033:0x7f32f0462b97 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295390] Code: 48 39 f7 72 17 74 25 4c 8d 0c 16 4c 39 cf 0f 82 2a 02 00 00 48 89 f9 48 29 f1 eb 06 48 89 f1 48 29 f9 83 f9 3f 76 7b 48 89 d1 a4 c3 80 fa 10 73 17 80 fa 08 73 2f 80 fa 04 73 3b 80 fa 01 77 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295391] RSP: 002b:00007fff8657fad8 EFLAGS: 00010292 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295392] RAX: 0000072c592810f4 RBX: 0000000000004000 RCX: 00000000000010f4 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295393] RDX: 0000000000008000 RSI: 00002fa44662810c RDI: 0000072c59288000 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295394] RBP: 00007fff8657fb40 R08: 00007fff8657fbc0 R09: 00002fa4466211d1 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295395] R10: 00002fa4466211f1 R11: 0000072c592810f4 R12: 0000000000000000 2023-12-04T10:49:30.109Z In(05) vcpu-5 - Guest: <4>[39261133.295396] R13: 0000072c592810f4 R14: 00007fff8657fbc0 R15: 0000000000004000 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295398] Task in /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/56d2af57b29f622846fe1efe76aae4d8c4b48c507c95851962d5c47b870c1981 killed as a result of limit of /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295404] memory: usage 614400kB, limit 614400kB, failcnt 83151 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295404] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295405] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295405] 
Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295411] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/baedd8c922d5616ded87a51c41ceb2dd9085c020077d15a06d95b5d7a033a73d: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295417] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/56d2af57b29f622846fe1efe76aae4d8c4b48c507c95851962d5c47b870c1981: cache:0KB rss:613564KB rss_huge:165888KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614256KB inactive_file:4KB active_file:4KB unevictable:0KB 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295423] Tasks state (memory values in pages): 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295423] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295622] [ 22921] 0 22921 242 1 28672 0 -998 pause 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295675] [ 26841] 1000 26841 234087 37516 1355776 0 999 npm start 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295676] [ 26867] 1000 26867 620 27 45056 0 999 sh 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.295677] [ 26868] 1000 26868 5583298 130365 7737344 0 999 node 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <3>[39261133.295690] Memory cgroup out of memory: Kill process 26868 (node) score 1856 or sacrifice child 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: 
<3>[39261133.295756] Killed process 26868 (node) total-vm:22333192kB, anon-rss:487496kB, file-rss:33964kB, shmem-rss:0kB 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39261133.315821] oom_reaper: reaped process 26868 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <4>[39262699.851704] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999 2023-12-04T10:49:30.110Z In(05) vcpu-5 - Guest: <6>[39262699.851706] node cpuset=377a26f84172c63e53d4dae40296951ba7e622a6700f859bf092978be713be7d mems_allowed=0-1 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851712] CPU: 9 PID: 29920 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851713] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851713] Call Trace: 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851720] dump_stack+0x6d/0x8b 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851724] dump_header+0x6c/0x282 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851730] oom_kill_process+0x243/0x270 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851731] out_of_memory+0x100/0x4e0 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851735] mem_cgroup_out_of_memory+0xa4/0xc0 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851737] try_charge+0x700/0x740 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851740] ? 
__alloc_pages_nodemask+0xdc/0x250 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851745] mem_cgroup_try_charge+0x86/0x190 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851747] mem_cgroup_try_charge_delay+0x1d/0x40 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851752] handle_mm_fault+0x823/0xee0 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851756] ? switch_to_asm+0x35/0x70 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851758] handle_mm_fault+0xde/0x240 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851761] do_page_fault+0x226/0x4b0 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851762] do_page_fault+0x2d/0xf0 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851764] ? page_fault+0x8/0x30 2023-12-04T10:49:30.111Z In(05) vcpu-5 - Guest: <4>[39262699.851765] page_fault+0x1e/0x30 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <4>[39262699.851767] RIP: 0033:0xd709d1 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <4>[39262699.851769] Code: 39 c7 0f 82 41 01 00 00 48 89 d1 31 c0 66 0f ef c9 48 83 e1 f0 0f 1f 40 00 f3 0f 6f 04 06 66 0f 6f d0 66 0f 68 c1 66 0f 60 d1 <0f> 11 44 47 10 0f 11 14 47 48 83 c0 10 48 39 c8 75 dd 49 89 d0 48 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <4>[39262699.851769] RSP: 002b:00007ffcf183f418 EFLAGS: 00010287 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <4>[39262699.851770] RAX: 0000000000528760 RBX: 00000cd45f241140 RCX: 0000000000d006c0 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <4>[39262699.851771] RDX: 0000000000d006c8 RSI: 00007f37f0882010 RDI: 00000cd45f241140 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <4>[39262699.851772] RBP: 00007ffcf183f450 R08: 0000000001a790f0 R09: 0000000000000000 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <4>[39262699.851773] R10: 00007ffcf183f080 R11: 0000000000000001 R12: 00007ffcf183f470 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <4>[39262699.851773] R13: 
00007ffcf183f464 R14: 0000000000d1c967 R15: 00007f37f0882010 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851801] Task in /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/377a26f84172c63e53d4dae40296951ba7e622a6700f859bf092978be713be7d killed as a result of limit of /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851809] memory: usage 614400kB, limit 614400kB, failcnt 81892 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851810] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851810] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851811] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851816] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/2a6484f8f136d41bf17b6925821421216e8b9aa524aff498a5efa1e7ac037f9c: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851823] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/377a26f84172c63e53d4dae40296951ba7e622a6700f859bf092978be713be7d: cache:0KB rss:613352KB rss_huge:81920KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614244KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851829] Tasks state (memory values in pages): 2023-12-04T10:49:30.112Z 
In(05) vcpu-5 - Guest: <6>[39262699.851830] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.851971] [ 23437] 0 23437 242 1 28672 0 -998 pause 2023-12-04T10:49:30.112Z In(05) vcpu-5 - Guest: <6>[39262699.852008] [ 29868] 1000 29868 226474 30470 1294336 0 999 npm start 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <6>[39262699.852009] [ 29919] 1000 29919 620 38 45056 0 999 sh 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <6>[39262699.852011] [ 29920] 1000 29920 5592254 137574 7860224 0 999 node 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <3>[39262699.852023] Memory cgroup out of memory: Kill process 29920 (node) score 1903 or sacrifice child 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <3>[39262699.852074] Killed process 29920 (node) total-vm:22369016kB, anon-rss:515112kB, file-rss:35184kB, shmem-rss:0kB 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <6>[39262699.869418] oom_reaper: reaped process 29920 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269349.678199] NFS: Server wrote zero bytes, expected 65536. 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442151] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <6>[39269933.442152] node cpuset=98dddd0cac39d3a7178efa0e84beb75a6d0cf01c747dc00a3c8ef53de7628a50 mems_allowed=0-1 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442157] CPU: 12 PID: 4274 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442158] Hardware name: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442158] Call Trace: 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442163] dump_stack+0x6d/0x8b 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442165] dump_header+0x6c/0x282 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442169] oom_kill_process+0x243/0x270 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442170] out_of_memory+0x100/0x4e0 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442174] mem_cgroup_out_of_memory+0xa4/0xc0 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442176] try_charge+0x700/0x740 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442178] ? alloc_pages_nodemask+0xdc/0x250 2023-12-04T10:49:30.113Z In(05) vcpu-5 - Guest: <4>[39269933.442180] mem_cgroup_try_charge+0x86/0x190 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442181] mem_cgroup_try_charge_delay+0x1d/0x40 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442185] handle_mm_fault+0x823/0xee0 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442186] handle_mm_fault+0xde/0x240 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442188] do_page_fault+0x226/0x4b0 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442189] do_page_fault+0x2d/0xf0 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442192] ? 
page_fault+0x8/0x30 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442193] page_fault+0x1e/0x30 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442194] RIP: 0033:0x7f91b808ab97 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442196] Code: 48 39 f7 72 17 74 25 4c 8d 0c 16 4c 39 cf 0f 82 2a 02 00 00 48 89 f9 48 29 f1 eb 06 48 89 f1 48 29 f9 83 f9 3f 76 7b 48 89 d1 a4 c3 80 fa 10 73 17 80 fa 08 73 2f 80 fa 04 73 3b 80 fa 01 77 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442196] RSP: 002b:00007ffd56827458 EFLAGS: 00010216 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442197] RAX: 000002a3b1be9228 RBX: 0000000000008010 RCX: 0000000000006230 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442198] RDX: 0000000000008008 RSI: 00001333cce82f10 RDI: 000002a3b1beb000 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442198] RBP: 00007ffd568274f0 R08: 000002a3b1be9221 R09: 000002a3b1be9220 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442198] R10: 00007f91b424e7f8 R11: 000002a3b1be9200 R12: 00001333cce81131 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <4>[39269933.442199] R13: 00001333cce81130 R14: 00003d04678c2be1 R15: 00007f91ad0dd460 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <6>[39269933.442200] Task in /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/98dddd0cac39d3a7178efa0e84beb75a6d0cf01c747dc00a3c8ef53de7628a50 killed as a result of limit of /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <6>[39269933.442205] memory: usage 614400kB, limit 614400kB, failcnt 83509 2023-12-04T10:49:30.114Z In(05) vcpu-5 - Guest: <6>[39269933.442205] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442206] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442206] 
Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442210] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/baedd8c922d5616ded87a51c41ceb2dd9085c020077d15a06d95b5d7a033a73d: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442214] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/98dddd0cac39d3a7178efa0e84beb75a6d0cf01c747dc00a3c8ef53de7628a50: cache:324KB rss:613744KB rss_huge:133120KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614276KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442217] Tasks state (memory values in pages): 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442217] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442354] [ 22921] 0 22921 242 1 28672 0 -998 pause 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442389] [ 4216] 1000 4216 234279 37901 1372160 0 999 npm start 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442390] [ 4273] 1000 4273 620 17 45056 0 999 sh 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.442391] [ 4274] 1000 4274 5579718 129764 7491584 0 999 node 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <3>[39269933.442401] Memory cgroup out of memory: Kill process 4274 (node) score 1851 or sacrifice child 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <3>[39269933.442454] 
Killed process 4274 (node) total-vm:22318872kB, anon-rss:486144kB, file-rss:32912kB, shmem-rss:0kB 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39269933.459105] oom_reaper: reaped process 4274 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <4>[39270239.118511] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <6>[39270239.118512] node cpuset=d95d3d41b7c13b143c2ad30e4009282b7f13c4a61adb78c320b2426ccf15df5f mems_allowed=0-1 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <4>[39270239.118516] CPU: 0 PID: 3243 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <4>[39270239.118517] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <4>[39270239.118517] Call Trace: 2023-12-04T10:49:30.115Z In(05) vcpu-5 - Guest: <4>[39270239.118522] dump_stack+0x6d/0x8b 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118525] dump_header+0x6c/0x282 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118528] oom_kill_process+0x243/0x270 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118529] out_of_memory+0x100/0x4e0 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118532] mem_cgroup_out_of_memory+0xa4/0xc0 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118534] try_charge+0x700/0x740 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118536] ? 
alloc_pages_nodemask+0xdc/0x250 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118538] mem_cgroup_try_charge+0x86/0x190 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118539] mem_cgroup_try_charge_delay+0x1d/0x40 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118542] handle_mm_fault+0x823/0xee0 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118544] ? switch_to_asm+0x35/0x70 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118545] handle_mm_fault+0xde/0x240 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118548] do_page_fault+0x226/0x4b0 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118549] do_page_fault+0x2d/0xf0 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118550] ? page_fault+0x8/0x30 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118550] page_fault+0x1e/0x30 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118552] RIP: 0033:0xd709d1 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118553] Code: 39 c7 0f 82 41 01 00 00 48 89 d1 31 c0 66 0f ef c9 48 83 e1 f0 0f 1f 40 00 f3 0f 6f 04 06 66 0f 6f d0 66 0f 68 c1 66 0f 60 d1 <0f> 11 44 47 10 0f 11 14 47 48 83 c0 10 48 39 c8 75 dd 49 89 d0 48 2023-12-04T10:49:30.116Z In(05) vcpu-5 - Guest: <4>[39270239.118554] RSP: 002b:00007ffebd7d3358 EFLAGS: 00010287 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <4>[39270239.118555] RAX: 0000000000c13760 RBX: 00000ec4ae481140 RCX: 0000000000d006c0 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <4>[39270239.118555] RDX: 0000000000d006c8 RSI: 00007f5cf6264010 RDI: 00000ec4ae481140 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <4>[39270239.118556] RBP: 00007ffebd7d3390 R08: 0000000001a790f0 R09: 0000000000000000 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <4>[39270239.118556] R10: 00007ffebd7d2fc0 R11: 0000000000000001 R12: 00007ffebd7d33b0 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <4>[39270239.118557] R13: 00007ffebd7d33a4 
R14: 0000000000d1c967 R15: 00007f5cf6264010 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118558] Task in /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/d95d3d41b7c13b143c2ad30e4009282b7f13c4a61adb78c320b2426ccf15df5f killed as a result of limit of /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118562] memory: usage 614400kB, limit 614400kB, failcnt 84144 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118562] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118563] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118563] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118567] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/baedd8c922d5616ded87a51c41ceb2dd9085c020077d15a06d95b5d7a033a73d: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118571] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/d95d3d41b7c13b143c2ad30e4009282b7f13c4a61adb78c320b2426ccf15df5f: cache:0KB rss:613484KB rss_huge:126976KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614268KB inactive_file:4KB active_file:0KB unevictable:0KB 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118573] Tasks state (memory values in pages): 2023-12-04T10:49:30.117Z In(05) vcpu-5 - 
Guest: <6>[39270239.118574] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118713] [ 22921] 0 22921 242 1 28672 0 -998 pause 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118758] [ 3155] 1000 3155 217063 40370 1355776 0 999 npm start 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118759] [ 3242] 1000 3242 620 82 40960 0 999 sh 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <6>[39270239.118760] [ 3243] 1000 3243 5582182 134435 7630848 0 999 node 2023-12-04T10:49:30.117Z In(05) vcpu-5 - Guest: <3>[39270239.118762] Memory cgroup out of memory: Kill process 3243 (node) score 1882 or sacrifice child 2023-12-04T10:49:30.118Z In(05) vcpu-5 - Guest: <3>[39270239.118813] Killed process 3243 (node) total-vm:22328728kB, anon-rss:489052kB, file-rss:48888kB, shmem-rss:0kB 2023-12-04T10:49:30.118Z In(05) vcpu-5 - Guest: <6>[39270239.141018] oom_reaper: reaped process 3243 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB 2023-12-04T10:49:30.118Z In(05) vcpu-5 - Guest: <4>[39282370.415969] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999 2023-12-04T10:49:30.118Z In(05) vcpu-5 - Guest: <6>[39282370.415970] node cpuset=04b88cc3d8db35564ee4bc592a03cee0faa32f377097694336bb646439b5892f mems_allowed=0-1 2023-12-04T10:49:30.118Z In(05) vcpu-5 - Guest: <4>[39282370.415974] CPU: 10 PID: 15616 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon 2023-12-04T10:49:30.118Z In(05) vcpu-5 - Guest: <4>[39282370.415975] Hardware name: VMware, Inc. 
Excerpt from `vmware.log` (guest kernel messages forwarded by the VMX; every message below originally carried a prefix of the form `2023-12-04T10:49:30.118Z In(05) vcpu-5 - Guest:`, stripped here for readability; symbol prefixes such as `__handle_mm_fault` restored where markdown rendering had eaten the double underscores):

```
... Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
<4>[39282370.415976] Call Trace:
<4>[39282370.415981]  dump_stack+0x6d/0x8b
<4>[39282370.415984]  dump_header+0x6c/0x282
<4>[39282370.415987]  oom_kill_process+0x243/0x270
<4>[39282370.415988]  out_of_memory+0x100/0x4e0
<4>[39282370.415991]  mem_cgroup_out_of_memory+0xa4/0xc0
<4>[39282370.415993]  try_charge+0x700/0x740
<4>[39282370.415995]  ? __alloc_pages_nodemask+0xdc/0x250
<4>[39282370.415997]  mem_cgroup_try_charge+0x86/0x190
<4>[39282370.415999]  mem_cgroup_try_charge_delay+0x1d/0x40
<4>[39282370.416001]  __handle_mm_fault+0x823/0xee0
<4>[39282370.416003]  ? __switch_to_asm+0x35/0x70
<4>[39282370.416005]  handle_mm_fault+0xde/0x240
<4>[39282370.416007]  __do_page_fault+0x226/0x4b0
<4>[39282370.416008]  do_page_fault+0x2d/0xf0
<4>[39282370.416009]  ? page_fault+0x8/0x30
<4>[39282370.416010]  page_fault+0x1e/0x30
<4>[39282370.416011] RIP: 0033:0xd709d1
<4>[39282370.416013] Code: 39 c7 0f 82 41 01 00 00 48 89 d1 31 c0 66 0f ef c9 48 83 e1 f0 0f 1f 40 00 f3 0f 6f 04 06 66 0f 6f d0 66 0f 68 c1 66 0f 60 d1 <0f> 11 44 47 10 0f 11 14 47 48 83 c0 10 48 39 c8 75 dd 49 89 d0 48
<4>[39282370.416014] RSP: 002b:00007ffda6bc32a8 EFLAGS: 00010287
<4>[39282370.416014] RAX: 0000000000495760 RBX: 00001b25febc1140 RCX: 0000000000d006c0
<4>[39282370.416015] RDX: 0000000000d006c8 RSI: 00007f6775df7010 RDI: 00001b25febc1140
<4>[39282370.416015] RBP: 00007ffda6bc32e0 R08: 0000000001a790f0 R09: 0000000000000000
<4>[39282370.416016] R10: 00007ffda6bc2f10 R11: 0000000000000001 R12: 00007ffda6bc3300
<4>[39282370.416016] R13: 00007ffda6bc32f4 R14: 0000000000d1c967 R15: 00007f6775df7010
<6>[39282370.416017] Task in /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/04b88cc3d8db35564ee4bc592a03cee0faa32f377097694336bb646439b5892f killed as a result of limit of /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f
<6>[39282370.416021] memory: usage 614400kB, limit 614400kB, failcnt 82503
<6>[39282370.416022] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0
<6>[39282370.416022] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
<6>[39282370.416023] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39282370.416027] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/2a6484f8f136d41bf17b6925821421216e8b9aa524aff498a5efa1e7ac037f9c: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39282370.416031] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/04b88cc3d8db35564ee4bc592a03cee0faa32f377097694336bb646439b5892f: cache:0KB rss:613848KB rss_huge:251904KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614148KB inactive_file:4KB active_file:0KB unevictable:0KB
<6>[39282370.416033] Tasks state (memory values in pages):
<6>[39282370.416034] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
<6>[39282370.416168] [ 23437] 0 23437 242 1 28672 0 -998 pause
<6>[39282370.416203] [ 15585] 1000 15585 226026 30322 1318912 0 999 npm start
<6>[39282370.416204] [ 15615] 1000 15615 620 39 45056 0 999 sh
<6>[39282370.416205] [ 15616] 1000 15616 5591087 137754 7864320 0 999 node
<3>[39282370.416216] Memory cgroup out of memory: Kill process 15616 (node) score 1904 or sacrifice child
<3>[39282370.416256] Killed process 15616 (node) total-vm:22364348kB, anon-rss:516196kB, file-rss:34820kB, shmem-rss:0kB
<6>[39282370.435578] oom_reaper: reaped process 15616 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
<4>[39287271.318268] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
<6>[39287271.318269] node cpuset=43d479743f93e2349a7c13fd63ac9f562839791a875a5fb81e90568d53ae5dfd mems_allowed=0-1
<4>[39287271.318274] CPU: 11 PID: 9308 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon
<4>[39287271.318275] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
<4>[39287271.318276] Call Trace:
<4>[39287271.318280]  dump_stack+0x6d/0x8b
<4>[39287271.318283]  dump_header+0x6c/0x282
<4>[39287271.318286]  oom_kill_process+0x243/0x270
<4>[39287271.318287]  out_of_memory+0x100/0x4e0
<4>[39287271.318290]  mem_cgroup_out_of_memory+0xa4/0xc0
<4>[39287271.318291]  try_charge+0x700/0x740
<4>[39287271.318293]  ? __alloc_pages_nodemask+0xdc/0x250
<4>[39287271.318295]  mem_cgroup_try_charge+0x86/0x190
<4>[39287271.318296]  mem_cgroup_try_charge_delay+0x1d/0x40
<4>[39287271.318299]  __handle_mm_fault+0x823/0xee0
<4>[39287271.318301]  ? __switch_to_asm+0x35/0x70
<4>[39287271.318303]  handle_mm_fault+0xde/0x240
<4>[39287271.318305]  __do_page_fault+0x226/0x4b0
<4>[39287271.318306]  do_page_fault+0x2d/0xf0
<4>[39287271.318307]  ? page_fault+0x8/0x30
<4>[39287271.318308]  page_fault+0x1e/0x30
<4>[39287271.318309] RIP: 0033:0xd709d1
<4>[39287271.318310] Code: 39 c7 0f 82 41 01 00 00 48 89 d1 31 c0 66 0f ef c9 48 83 e1 f0 0f 1f 40 00 f3 0f 6f 04 06 66 0f 6f d0 66 0f 68 c1 66 0f 60 d1 <0f> 11 44 47 10 0f 11 14 47 48 83 c0 10 48 39 c8 75 dd 49 89 d0 48
<4>[39287271.318311] RSP: 002b:00007ffdf6603d68 EFLAGS: 00010287
<4>[39287271.318312] RAX: 0000000000c9cf60 RBX: 000009ad40b01140 RCX: 0000000000d006c0
<4>[39287271.318313] RDX: 0000000000d006c8 RSI: 00007f27f1e73010 RDI: 000009ad40b01140
<4>[39287271.318313] RBP: 00007ffdf6603da0 R08: 0000000001a790f0 R09: 0000000000000000
<4>[39287271.318314] R10: 00007ffdf66039d0 R11: 0000000000000001 R12: 00007ffdf6603dc0
<4>[39287271.318314] R13: 00007ffdf6603db4 R14: 0000000000d1c967 R15: 00007f27f1e73010
<6>[39287271.318315] Task in /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/43d479743f93e2349a7c13fd63ac9f562839791a875a5fb81e90568d53ae5dfd killed as a result of limit of /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f
<6>[39287271.318319] memory: usage 614400kB, limit 614400kB, failcnt 83387
<6>[39287271.318320] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0
<6>[39287271.318320] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
<6>[39287271.318320] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39287271.318324] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/2a6484f8f136d41bf17b6925821421216e8b9aa524aff498a5efa1e7ac037f9c: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39287271.318328] Memory cgroup stats for /kubepods/burstable/pod69a00a89-5b57-4b35-9456-34f1b9f1d46f/43d479743f93e2349a7c13fd63ac9f562839791a875a5fb81e90568d53ae5dfd: cache:0KB rss:614004KB rss_huge:116736KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614236KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39287271.318331] Tasks state (memory values in pages):
<6>[39287271.318331] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
<6>[39287271.318464] [ 23437] 0 23437 242 1 28672 0 -998 pause
<6>[39287271.318502] [ 9261] 1000 9261 227818 31503 1314816 0 999 npm start
<6>[39287271.318503] [ 9307] 1000 9307 620 66 40960 0 999 sh
<6>[39287271.318504] [ 9308] 1000 9308 5586299 136571 7811072 0 999 node
<3>[39287271.318511] Memory cgroup out of memory: Kill process 9308 (node) score 1896 or sacrifice child
<3>[39287271.318552] Killed process 9308 (node) total-vm:22345196kB, anon-rss:512480kB, file-rss:33804kB, shmem-rss:0kB
<6>[39287271.334792] oom_reaper: reaped process 9308 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
<4>[39290745.072205] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
<6>[39290745.072206] node cpuset=83ade1d94ee1b2e144310c7a803f82c231ea99b36e55776219b177b6ac8e35f6 mems_allowed=0-1
<4>[39290745.072210] CPU: 14 PID: 2625 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon
<4>[39290745.072211] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
<4>[39290745.072211] Call Trace:
<4>[39290745.072216]  dump_stack+0x6d/0x8b
<4>[39290745.072229]  dump_header+0x6c/0x282
<4>[39290745.072233]  oom_kill_process+0x243/0x270
<4>[39290745.072234]  out_of_memory+0x100/0x4e0
<4>[39290745.072237]  mem_cgroup_out_of_memory+0xa4/0xc0
<4>[39290745.072238]  try_charge+0x700/0x740
<4>[39290745.072240]  ? __alloc_pages_nodemask+0xdc/0x250
<4>[39290745.072242]  mem_cgroup_try_charge+0x86/0x190
<4>[39290745.072243]  mem_cgroup_try_charge_delay+0x1d/0x40
<4>[39290745.072246]  __handle_mm_fault+0x823/0xee0
<4>[39290745.072248]  ? __switch_to_asm+0x35/0x70
<4>[39290745.072249]  handle_mm_fault+0xde/0x240
<4>[39290745.072252]  __do_page_fault+0x226/0x4b0
<4>[39290745.072253]  do_page_fault+0x2d/0xf0
<4>[39290745.072254]  ? page_fault+0x8/0x30
<4>[39290745.072255]  page_fault+0x1e/0x30
<4>[39290745.072256] RIP: 0033:0xd709d1
<4>[39290745.072258] Code: 39 c7 0f 82 41 01 00 00 48 89 d1 31 c0 66 0f ef c9 48 83 e1 f0 0f 1f 40 00 f3 0f 6f 04 06 66 0f 6f d0 66 0f 68 c1 66 0f 60 d1 <0f> 11 44 47 10 0f 11 14 47 48 83 c0 10 48 39 c8 75 dd 49 89 d0 48
<4>[39290745.072258] RSP: 002b:00007ffd558883d8 EFLAGS: 00010287
<4>[39290745.072259] RAX: 0000000000c4f760 RBX: 00001da2501c1140 RCX: 0000000000d006c0
<4>[39290745.072260] RDX: 0000000000d006c8 RSI: 00007f50f0b3e010 RDI: 00001da2501c1140
<4>[39290745.072261] RBP: 00007ffd55888410 R08: 0000000001a790f0 R09: 0000000000000000
<4>[39290745.072261] R10: 00007ffd55888040 R11: 0000000000000001 R12: 00007ffd55888430
<4>[39290745.072262] R13: 00007ffd55888424 R14: 0000000000d1c967 R15: 00007f50f0b3e010
<6>[39290745.072262] Task in /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147/83ade1d94ee1b2e144310c7a803f82c231ea99b36e55776219b177b6ac8e35f6 killed as a result of limit of /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147
<6>[39290745.072268] memory: usage 614400kB, limit 614400kB, failcnt 59345
<6>[39290745.072268] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0
<6>[39290745.072269] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
<6>[39290745.072269] Memory cgroup stats for /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39290745.072273] Memory cgroup stats for /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147/568ee417d5ffa86f21097fe3884f68c86423420004e79a573d2f93595cfcb2b3: cache:0KB rss:24KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:36KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39290745.072277] Memory cgroup stats for /kubepods/burstable/pod9d3270ac-3765-4280-b195-ab3703f79147/83ade1d94ee1b2e144310c7a803f82c231ea99b36e55776219b177b6ac8e35f6: cache:0KB rss:613596KB rss_huge:260096KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614272KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39290745.072280] Tasks state (memory values in pages):
<6>[39290745.072280] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
<6>[39290745.072434] [ 22759] 0 22759 242 1 28672 0 -998 pause
<6>[39290745.072505] [ 2564] 1000 2564 210154 30064 1306624 0 999 npm start
<6>[39290745.072507] [ 2623] 1000 2623 620 25 45056 0 999 sh
<6>[39290745.072508] [ 2625] 1000 2625 5588903 137935 8011776 0 999 node
<3>[39290745.072552] Memory cgroup out of memory: Kill process 2625 (node) score 1905 or sacrifice child
<3>[39290745.072599] Killed process 2625 (node) total-vm:22355612kB, anon-rss:517580kB, file-rss:34160kB, shmem-rss:0kB
<6>[39290745.088498] oom_reaper: reaped process 2625 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
<4>[39291577.009300] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
<6>[39291577.009302] node cpuset=c7ebf4d756b284f6226cce820771afcf706a4e37afb77f052c61ece86d3b61f0 mems_allowed=0-1
<4>[39291577.009306] CPU: 15 PID: 19388 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon
<4>[39291577.009307] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018
<4>[39291577.009307] Call Trace:
<4>[39291577.009312]  dump_stack+0x6d/0x8b
<4>[39291577.009315]  dump_header+0x6c/0x282
<4>[39291577.009318]  oom_kill_process+0x243/0x270
<4>[39291577.009319]  out_of_memory+0x100/0x4e0
<4>[39291577.009322]  mem_cgroup_out_of_memory+0xa4/0xc0
<4>[39291577.009323]  try_charge+0x700/0x740
<4>[39291577.009326]  ? __alloc_pages_nodemask+0xdc/0x250
<4>[39291577.009327]  mem_cgroup_try_charge+0x86/0x190
<4>[39291577.009329]  mem_cgroup_try_charge_delay+0x1d/0x40
<4>[39291577.009331]  __handle_mm_fault+0x823/0xee0
<4>[39291577.009333]  ? __switch_to_asm+0x35/0x70
<4>[39291577.009335]  handle_mm_fault+0xde/0x240
<4>[39291577.009337]  __do_page_fault+0x226/0x4b0
<4>[39291577.009338]  do_page_fault+0x2d/0xf0
<4>[39291577.009339]  ? page_fault+0x8/0x30
<4>[39291577.009339]  page_fault+0x1e/0x30
<4>[39291577.009341] RIP: 0033:0x7ff83129db97
<4>[39291577.009342] Code: 48 39 f7 72 17 74 25 4c 8d 0c 16 4c 39 cf 0f 82 2a 02 00 00 48 89 f9 48 29 f1 eb 06 48 89 f1 48 29 f9 83 f9 3f 76 7b 48 89 d1 a4 c3 80 fa 10 73 17 80 fa 08 73 2f 80 fa 04 73 3b 80 fa 01 77
<4>[39291577.009343] RSP: 002b:00007ffec233f168 EFLAGS: 00010292
<4>[39291577.009344] RAX: 000008d3e6fc90f4 RBX: 0000000000004000 RCX: 00000000000020f4
<4>[39291577.009344] RDX: 0000000000008000 RSI: 000039c6ba26710c RDI: 000008d3e6fcf000
<4>[39291577.009345] RBP: 00007ffec233f1d0 R08: 00007ffec233f250 R09: 000039c6ba2611d1
<4>[39291577.009345] R10: 000039c6ba2611f1 R11: 000008d3e6fc90f4 R12: 0000000000000000
<4>[39291577.009346] R13: 000008d3e6fc90f4 R14: 00007ffec233f250 R15: 0000000000004000
<6>[39291577.009346] Task in /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/c7ebf4d756b284f6226cce820771afcf706a4e37afb77f052c61ece86d3b61f0 killed as a result of limit of /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741
<6>[39291577.009351] memory: usage 614400kB, limit 614400kB, failcnt 84646
<6>[39291577.009351] memory+swap: usage 614400kB, limit 9007199254740988kB, failcnt 0
<6>[39291577.009352] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
<6>[39291577.009352] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39291577.009356] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/baedd8c922d5616ded87a51c41ceb2dd9085c020077d15a06d95b5d7a033a73d: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB
<6>[39291577.009360] Memory cgroup stats for /kubepods/burstable/podfe3f7a38-b3f7-48aa-8997-a697ae3d0741/c7ebf4d756b284f6226cce820771afcf706a4e37afb77f052c61ece86d3b61f0: cache:0KB rss:613460KB rss_huge:256000KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:614340KB inactive_file:4KB active_file:0KB unevictable:0KB
<6>[39291577.009363] Tasks state (memory values in pages):
<6>[39291577.009363] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
<6>[39291577.009498] [ 22921] 0 22921 242 1 28672 0 -998 pause
<6>[39291577.009528] [ 19331] 1000 19331 226026 29603 1314816 0 999 npm start
<6>[39291577.009529] [ 19387] 1000 19387 620 36 40960 0 999 sh
<6>[39291577.009530] [ 19388] 1000 19388 5588434 138005 8069120 0 999 node
<3>[39291577.009543] Memory cgroup out of memory: Kill process 19388 (node) score 1906 or sacrifice child
<3>[39291577.009604] Killed process 19388 (node) total-vm:22353736kB, anon-rss:519368kB, file-rss:32960kB, shmem-rss:0kB
<6>[39291577.025633] oom_reaper: reaped process 19388 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
<4>[39292270.338520] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
<6>[39292270.338521] node cpuset=689c8576975e3719648ce3cc70214570ebdd91aa8f24c471fb740772a832a4df mems_allowed=0-1
<4>[39292270.338525] CPU: 6 PID: 13543 Comm: node Tainted: G W 4.19.189-5.ph3 #1-photon
```
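The repeating pattern in the excerpt (usage pinned at the limit, a climbing `failcnt`, then a kill) can be pulled out of such a log programmatically. A minimal sketch, not part of any VMware or Photon tooling, that extracts the `memory: usage … limit … failcnt …` records from dmesg-style text:

```python
import re

# Matches guest kernel lines such as:
#   <6>[39282370.416021] memory: usage 614400kB, limit 614400kB, failcnt 82503
MEM_RE = re.compile(
    r"\[(?P<ts>\d+\.\d+)\]\s+memory:\s+usage\s+(?P<usage>\d+)kB,"
    r"\s+limit\s+(?P<limit>\d+)kB,\s+failcnt\s+(?P<failcnt>\d+)"
)

def memory_events(log_text):
    """Yield (timestamp, usage_kb, limit_kb, failcnt) for each 'memory:' record."""
    for m in MEM_RE.finditer(log_text):
        yield (float(m.group("ts")), int(m.group("usage")),
               int(m.group("limit")), int(m.group("failcnt")))

# Two records taken verbatim from the log above.
sample = (
    "<6>[39282370.416021] memory: usage 614400kB, limit 614400kB, failcnt 82503\n"
    "<6>[39287271.318319] memory: usage 614400kB, limit 614400kB, failcnt 83387\n"
)

events = list(memory_events(sample))
for ts, usage, limit, failcnt in events:
    # usage == limit together with a growing failcnt indicates the cgroup
    # is permanently at its ceiling between the two OOM kills.
    print(f"t={ts}: usage={usage}kB limit={limit}kB at_limit={usage >= limit} failcnt={failcnt}")
```

Run against a full `vmware.log` or `journalctl -k` capture, a growing `failcnt` at constant `usage == limit` is a quick way to see how often a pod's cgroup hit its ceiling before each OOM kill.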

dcasota commented 10 months ago

Here are some thoughts.

You have a pod with its memory limit set to 614400 kB (600 MiB) while the node process reports a total-vm of 22318872 kB. Once the limit is reached (mem_cgroup_out_of_memory), any further allocation fails the charge and the oom-killer (out_of_memory) kicks in. The first process selected is 29860. What is problematic is that afterwards oom_reaper reports "now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB", and the same kill/reap cycle repeats for the cascade of processes 29860, 30411, 26868, 29920, 4274, 3243, 15616, 9308, 2625 and 19388. So why just pain and no gain? Unfortunately I'm not skilled enough to interpret the slabtop output.

A few related findings:

- The Kubernetes issue "Container Limit cgroup causing OOMKilled" is still open.
- The 'Tasks state (memory values in pages)' table does not list RssAnon (size of resident anonymous memory), RssFile (size of resident file mappings) or RssShmem (size of resident shared memory). This has been addressed lately in a kernel commit.
- Similar behavior also occurs on newer kernels in cgroup v1 setups, see kernel bugzilla bug 207273.
- By the way, cgroup v2 was introduced a while ago, see https://kubernetes.io/docs/concepts/architecture/cgroups/#using-cgroupv2. Photon OS 4 and 5 support cgroup v2.
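Whether an affected node is actually running cgroup v1 or v2 can be checked directly under `/sys/fs/cgroup`. A best-effort sketch (assumes a standard systemd-style mount at `/sys/fs/cgroup`; not Photon-specific tooling):

```python
import os

def cgroup_version(root="/sys/fs/cgroup"):
    """Best-effort detection of the mounted cgroup hierarchy version."""
    if os.path.exists(os.path.join(root, "cgroup.controllers")):
        return "v2"  # unified hierarchy (supported on Photon OS 4 and 5)
    if os.path.isdir(os.path.join(root, "memory")):
        return "v1"  # legacy per-controller hierarchy, as on the Photon 3 node above
    return "unknown"

print(cgroup_version())
```

On a cgroup v1 node the per-pod counters seen in the OOM dump live under `/sys/fs/cgroup/memory/kubepods/...` (e.g. `memory.failcnt`, `memory.usage_in_bytes`), so the same check tells you where to look for them.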