intel / helm-charts


Please extend device-plugin-operator template to specify container resources (requests & limits) #34

Closed MustDie95 closed 1 year ago

MustDie95 commented 1 year ago

The inteldeviceplugin-controller container is periodically OOM-killed for exceeding its memory limit:

(screenshot: intel-device-plugins-operator-oom)

This time it coincided with the launch of an enclave with a 20GB EPC. I don't know whether these events are related. We also have enclaves with 256GB EPC, but usually there were no such OOMKills.

Typical memory usage looks like this:

(screenshot: intel-device-plugins-operator-7day)

Currently, the container resources are hardcoded as:

        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 20Mi

Please extend the device-plugin-operator template so that container resources (requests & limits) can be specified in values.
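
Something like the sketch below would work for us. This is illustrative only, not the chart's actual layout: the values key name `operator.resources`, the template file name, and the indentation are assumptions; the defaults shown are the currently hardcoded values.

    # values.yaml (sketch) -- defaults mirror the current hardcoded values
    operator:
      resources:
        limits:
          cpu: 100m
          memory: 50Mi
        requests:
          cpu: 100m
          memory: 20Mi

    # deployment template (sketch) -- replaces the hardcoded block above
    # inside the operator container spec
        {{- with .Values.operator.resources }}
        resources:
          {{- toYaml . | nindent 10 }}
        {{- end }}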

mythi commented 1 year ago

Interesting data, thanks! The operator serves a webhook that is called when an SGX pod is created, so that might explain the sudden spike. It clearly exceeds the limit, but that alone should not trigger the OOM right away. Maybe the node comes under memory pressure when that huge enclave is launched. Anyway, the data clearly suggests that better/configurable limits make sense.

MustDie95 commented 1 year ago

@mythi, I don't see memory issues at the moment. Maybe the graph below will be helpful.

(screenshot: memory_3)

MustDie95 commented 1 year ago

May  5 11:18:03 <servername> conmon[1270216]: conmon c0bf1f72995b7ae015fd <ninfo>: OOM event received
May  5 11:18:03 <servername> conmon[1270216]: conmon c0bf1f72995b7ae015fd <ninfo>: OOM received
May  5 11:18:03 <servername> kernel: intel_deviceplu invoked oom-killer: gfp_mask=0xc40(GFP_NOFS), order=0, oom_score_adj=999
May  5 11:18:03 <servername> kernel: CPU: 127 PID: 1270271 Comm: intel_deviceplu Kdump: loaded Not tainted 5.4.17-2136.302.7.2.1.sgx.el8uek.x86_64 #2
May  5 11:18:03 <servername> kernel: Hardware name: Intel Corporation M50CYP2SB1U/M50CYP2SB1U, BIOS SE5C620.86B.01.01.0003.2104260124 04/26/2021
May  5 11:18:03 <servername> kernel: Call Trace:
May  5 11:18:03 <servername> kernel: dump_stack+0x6d/0x8b
May  5 11:18:03 <servername> kernel: dump_header+0x4f/0x1e1
May  5 11:18:03 <servername> kernel: oom_kill_process.cold.33+0xb/0x10
May  5 11:18:03 <servername> kernel: out_of_memory+0x1bf/0x551
May  5 11:18:03 <servername> kernel: mem_cgroup_out_of_memory+0xc4/0xce
May  5 11:18:03 <servername> kernel: try_charge+0x6ad/0x706
May  5 11:18:03 <servername> kernel: ? mem_cgroup_commit_charge+0x63/0x4a2
May  5 11:18:03 <servername> kernel: mem_cgroup_try_charge+0x71/0x149
May  5 11:18:03 <servername> kernel: __add_to_page_cache_locked+0x2f8/0x3e0
May  5 11:18:03 <servername> kernel: ? bio_add_page+0x67/0x8b
May  5 11:18:03 <servername> kernel: ? scan_shadow_nodes+0x30/0x2f
May  5 11:18:03 <servername> kernel: add_to_page_cache_lru+0x4f/0xc2
May  5 11:18:03 <servername> kernel: iomap_readpages_actor+0x107/0x228
May  5 11:18:03 <servername> kernel: ? iomap_page_mkwrite_actor+0x80/0x74
May  5 11:18:03 <servername> kernel: iomap_apply+0xb8/0x131
May  5 11:18:03 <servername> kernel: ? iomap_page_mkwrite_actor+0x80/0x74
May  5 11:18:03 <servername> kernel: iomap_readpages+0xa7/0x1b1
May  5 11:18:03 <servername> kernel: ? iomap_page_mkwrite_actor+0x80/0x74
May  5 11:18:03 <servername> kernel: xfs_vm_readpages+0x35/0x90 [xfs]
May  5 11:18:03 <servername> kernel: read_pages+0x6b/0x18c
May  5 11:18:03 <servername> kernel: __do_page_cache_readahead+0x16d/0x1d2
May  5 11:18:03 <servername> kernel: filemap_fault+0x795/0xa87
May  5 11:18:03 <servername> kernel: ? enqueue_task_fair+0x144/0x4f3
May  5 11:18:03 <servername> kernel: ? __mod_memcg_lruvec_state+0x27/0x102
May  5 11:18:03 <servername> kernel: ? unlock_page_memcg+0x12/0x14
May  5 11:18:03 <servername> kernel: ? _cond_resched+0x19/0x29
May  5 11:18:03 <servername> kernel: ? down_read+0x12/0x98
May  5 11:18:03 <servername> kernel: __xfs_filemap_fault+0x6f/0x200 [xfs]
May  5 11:18:03 <servername> kernel: ? filemap_map_pages+0x28d/0x3a7
May  5 11:18:03 <servername> kernel: xfs_filemap_fault+0x37/0x40 [xfs]
May  5 11:18:03 <servername> kernel: __do_fault+0x3c/0xd6
May  5 11:18:03 <servername> kernel: __handle_mm_fault+0xa71/0xd5d
May  5 11:18:03 <servername> kernel: handle_mm_fault+0xc9/0x1f0
May  5 11:18:03 <servername> kernel: __do_page_fault+0x1f7/0x4b7
May  5 11:18:03 <servername> kernel: do_page_fault+0x36/0x11a
May  5 11:18:03 <servername> kernel: page_fault+0x13d/0x142
May  5 11:18:03 <servername> kernel: RIP: 0033:0x4575dc
May  5 11:18:03 <servername> kernel: Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc 48 83 ec 18 48 89 6c 24 10 48 8d 6c 24 10 48 89 44 24 20 48 85 db 0f 86 5c 01 00 00 <0f> b6 10 90 85 d2 75 18 45 84 c0 75 13 31 c0 31 db 48 89 d9 31 ff
May  5 11:18:03 <servername> kernel: RSP: 002b:000000c0008957b8 EFLAGS: 00010202
May  5 11:18:03 <servername> kernel: RAX: 000000000201b939 RBX: 00000000002db4a7 RCX: 00000000002db4a7
May  5 11:18:03 <servername> kernel: RDX: 0000000000401000 RSI: 000000c00089580c RDI: 000000c000895820
May  5 11:18:03 <servername> kernel: RBP: 000000c0008957c8 R08: 0000000001fec901 R09: 0000000001fec980
May  5 11:18:03 <servername> kernel: R10: 000000000030a460 R11: 000000000002efb9 R12: 0000000000000000
May  5 11:18:03 <servername> kernel: R13: 0000000000000000 R14: 000000c000900340 R15: 0000000000000000
May  5 11:18:03 <servername> kernel: memory: usage 51200kB, limit 51200kB, failcnt 1
May  5 11:18:03 <servername> kernel: memory+swap: usage 51200kB, limit 51200kB, failcnt 669529
May  5 11:18:03 <servername> kernel: kmem: usage 1124kB, limit 9007199254740988kB, failcnt 0
May  5 11:18:03 <servername> kernel: Memory cgroup stats for /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4b0de3f_a367_40e0_a218_5c23f65cf843.slice/crio-c0bf1f72995b7ae015fd03b1f6d893495731147d0950c70944c4852f5f7297ca.scope:
May  5 11:18:03 <servername> kernel: anon 41463808#012file 7102464#012kernel_stack 184320#012slab 131784#012sock 0#012shmem 0#012file_mapped 0#012file_dirty 0#012file_writeback 0#012anon_thp 12582912#012inactive_anon 0#012active_anon 40894464#012inactive_file 3579904#012active_file 4038656#012unevictable 0#012slab_reclaimable 131784#012slab_unreclaimable 0#012pgfault 1083951#012pgmajfault 1716#012workingset_refault 1489884#012workingset_activate 387618#012workingset_nodereclaim 0#012pgrefill 430687#012pgscan 1798236#012pgsteal 1496321#012pgactivate 25542#012pgdeactivate 414448#012pglazyfree 0#012pglazyfreed 0#012thp_fault_alloc 0#012thp_collapse_alloc 0
May  5 11:18:03 <servername> kernel: Tasks state (memory values in pages):
May  5 11:18:03 <servername> kernel: [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
May  5 11:18:03 <servername> kernel: [1270227] 65532 1270227   188319    11434   266240        0           999 intel_deviceplu
May  5 11:18:03 <servername> kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=crio-c0bf1f72995b7ae015fd03b1f6d893495731147d0950c70944c4852f5f7297ca.scope,mems_allowed=0-1,oom_memcg=/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4b0de3f_a367_40e0_a218_5c23f65cf843.slice/crio-c0bf1f72995b7ae015fd03b1f6d893495731147d0950c70944c4852f5f7297ca.scope,task_memcg=/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4b0de3f_a367_40e0_a218_5c23f65cf843.slice/crio-c0bf1f72995b7ae015fd03b1f6d893495731147d0950c70944c4852f5f7297ca.scope,task=intel_deviceplu,pid=1270227,uid=65532
May  5 11:18:03 <servername> kernel: Memory cgroup out of memory: Killed process 1270227 (intel_deviceplu) total-vm:753276kB, anon-rss:43212kB, file-rss:2536kB, shmem-rss:0kB, UID:65532 pgtables:260kB oom_score_adj:999
May  5 11:18:03 <servername> kernel: oom_reaper: reaped process 1270227 (intel_deviceplu), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
May  5 11:18:03 <servername> conmon[1270216]: conmon c0bf1f72995b7ae015fd <ninfo>: container 1270227 exited with status 137

mythi commented 1 year ago

> @mythi, I don't see memory issues at the moment. Maybe the graph below will be helpful.

The fact that the operator clearly exceeds the configured memory limit should be addressed, but I wonder if the EPC page reclaimer kicks in at the same time and the operator just becomes a victim. The image you shared suggests that the system temporarily runs out of memory, which cannot be explained by the operator's +30MiB spike.

> 5.4.17-2136.302.7.2.1.sgx.el8uek.x86_64

Are you using the out-of-tree (OOT) DCAP driver? The in-tree driver has had a few EPC page reclaimer fixes which might help mitigate the issue.

MustDie95 commented 1 year ago

@mythi, yes, we had problems destroying huge enclaves - the system reported something like "watchdog: BUG: soft lockup - CPU#11 stuck for 22s!" - and I am aware of the kernel fixes for this situation. But Oracle still hasn't applied these fixes to their UEK kernel, so we had to patch the sources by hand and then rebuild and install the kernel package on the worker nodes.

mythi commented 1 year ago

We have updated the requests/limits but we did not make them configurable.
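
If different values are still needed, the rendered Deployment can also be patched outside the chart, for example with a strategic merge patch. A minimal sketch; the Deployment name `inteldeviceplugin-controller-manager`, the container name `manager`, and the resource values shown are assumptions and may differ in your install:

    # operator-resources-patch.yaml (sketch) -- apply e.g. with
    # kubectl patch deployment <name> --patch-file operator-resources-patch.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inteldeviceplugin-controller-manager
    spec:
      template:
        spec:
          containers:
            - name: manager
              resources:
                limits:
                  cpu: 100m
                  memory: 128Mi   # example value only
                requests:
                  cpu: 100m
                  memory: 20Mi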

MustDie95 commented 1 year ago

Ok, thanks