cc @topherbullock (the thing I mentioned yesterday after standup); cc @ddadlani @kcmannem (after investigating this, it might touch runtime at some point?)
Another interesting fact: despite having provisioned what should be the best combination of CPU count and disk size to max out the IOPS and consistent throughput we could get, it seems like we're still being throttled:
Which is interesting, given that in those moments, we've never gotten close to what should be our quota:
I've been wondering whether this is related to `overlay` (the driver we use there). In theory, we should be able to see that by taking a profile of the machine once Strabo kicks in, deriving some more data about which workload makes system CPU usage grow so much, and then, with that, creating a specialized workload that lets us better understand what in overlay causes it 🤔
I don't know the specifics of overlay, but we do know one situation where overlay struggles: privileged containers. In the case of big resources with Strabo, using privileged will cause overlay to apply a real write on the upper dir with the correct permissions instead of using the read-only version in the lower dir.
If I understand correctly, `overlay` ends up being a problem at the beginning of a step, when all of those `chown`s need to happen; but for what happens after that, would it be any worse than btrfs? 🤔
In the case of `strabo` there are no privileged containers 🤔
It turns out that `sys` CPU is pretty tricky 🤷
Running `chicken-gun`'s net stress, we can drive sys utilization to a constant ~32% with pretty much ~1% user CPU usage, which is quite interesting! Next question: does sys utilization count when cgroups throttle processes? 🤔
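One way to poke at that question (a rough sketch, assuming cgroup v1 with the cpu controller mounted at `/sys/fs/cgroup/cpu` and root access; paths and values are placeholders, not anything we actually ran): pin a syscall-heavy loop under a tiny `cpu.cfs_quota_us` and watch the cgroup's throttle counters next to the system-wide sys accounting.

```bash
#!/usr/bin/env bash
# Sketch (run as root): does sys time keep showing up while a cgroup is
# being CFS-throttled? Assumes cgroup v1 with the cpu controller mounted
# at /sys/fs/cgroup/cpu; paths and values are placeholders.
set -eu

cg=/sys/fs/cgroup/cpu/throttle-test
mkdir -p "$cg"

# Allow only 10ms of CPU per 100ms period -> a busy loop gets heavily throttled.
echo 100000 > "$cg/cpu.cfs_period_us"
echo 10000  > "$cg/cpu.cfs_quota_us"

# A syscall-heavy loop (mostly sys time): stat(2) the same path over and over.
sh -c "echo \$\$ > $cg/cgroup.procs; while :; do stat / > /dev/null; done" &
victim=$!

# Watch the throttle counters next to the system-wide CPU accounting.
for _ in 1 2 3 4 5; do
  grep -E 'nr_throttled|throttled_time' "$cg/cpu.stat"
  grep '^cpu ' /proc/stat   # fields after the label: user nice system ...
  sleep 2
done

kill "$victim"
sleep 1
rmdir "$cg" || true
```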
Hey,
I was taking a look at what goes on with the machine when it's in such a state, and it seems that the performance thrashing is coming from the memory subsystem.
# Children Self Samples Command Shared Object Symbol
# ........ ........ ............ ............... ........................ ...........................................................................................................................................................................................
#
24.91% 0.00% 0 exe [kernel.kallsyms] [k] try_charge
13.63% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count
1.15% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
0.98% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49
24.91% 0.00% 0 exe [kernel.kallsyms] [k] try_to_free_mem_cgroup_pages
13.63% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count
1.15% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
0.98% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49
24.91% 0.00% 0 exe [kernel.kallsyms] [k] do_try_to_free_pages
13.63% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count
1.15% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
0.98% do_try_to_free_pages;shrink_node;shrink_slab.part.49
24.77% 0.00% 2 exe [kernel.kallsyms] [k] shrink_node
13.63% shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% shrink_node;shrink_slab.part.49;super_cache_count
1.15% shrink_node;shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
0.98% shrink_node;shrink_slab.part.49
24.58% 0.98% 464 exe [kernel.kallsyms] [k] shrink_slab.part.49
13.63% shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% shrink_slab.part.49;super_cache_count
1.15% shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
24.06% 0.00% 0 exe [kernel.kallsyms] [k] handle_mm_fault
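(For reference, a children-inclusive call-graph profile like the one above can be captured along these lines; the exact flags here are an assumption, not necessarily what was used.)

```bash
# Sketch: capture a system-wide call-graph profile while the worker is in the
# bad state; the frequency/duration here are arbitrary.
perf record -F 99 -a -g -- sleep 60

# Children-inclusive textual report, similar to the output pasted above.
perf report --children --stdio
```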
I remember reading that per-cgroup memory accounting is not free, but that seems quite expensive. Maybe there's something we're missing here.
Here are some other interesting stats:
At the moment we're in such a state, we have a bunch of small IOPS with quite high IO completion times 🤔
Supporting the theory that it's indeed something related to paging and not the fact that our IOPS are being throttled:
We're pretty much never being throttled for reads 🤔 (the stats for throttling come from Stackdriver, i.e., from the IaaS itself).
Reading https://engineering.linkedin.com/blog/2016/08/don_t-let-linux-control-groups-uncontrolled, it feels like we're hitting case 2:
A cgroup's memory limit (e.g., 10GB) includes all memory usage of the processes running in it: both the anonymous memory and page cache of the cgroup are counted towards the memory limit. In particular, when the application running in a cgroup reads or writes files, the corresponding page cache allocated by OS is counted as part of the cgroup's memory limit.
For some applications, the starvation of page cache (and corresponding low page cache hit rate) has two effects: it degrades the performance of the application and increases workload on the root disk drive, and it could also severely degrade the performance of all applications on the machine that perform disk IO.
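A quick way to see that behavior locally (a minimal sketch, assuming cgroup v1 with the memory controller at `/sys/fs/cgroup/memory` and root access): cap a memory cgroup well below the size of a file and read the file from inside it; the cgroup's page cache stays pinned at the limit while the kernel keeps reclaiming, which is the same `try_charge` -> `shrink_*` path that shows up in the profile above.

```bash
#!/usr/bin/env bash
# Sketch (run as root): page cache is charged against the cgroup's memory limit.
# Assumes cgroup v1 with the memory controller at /sys/fs/cgroup/memory;
# the file path and sizes are placeholders.
set -eu

cg=/sys/fs/cgroup/memory/pagecache-test
mkdir -p "$cg"
echo $((64 * 1024 * 1024)) > "$cg/memory.limit_in_bytes"   # 64 MiB cap

# A 1 GiB test file, much larger than the cgroup's limit.
dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024
echo 3 > /proc/sys/vm/drop_caches   # make sure the read below hits the disk

# Read it from inside the capped cgroup: the kernel has to keep reclaiming the
# cgroup's own page cache to stay under the limit (try_charge -> shrink_*).
sh -c "echo \$\$ > $cg/cgroup.procs; cat /tmp/bigfile > /dev/null"

# 'cache' should sit near the 64 MiB limit rather than anywhere near 1 GiB.
grep -E '^(cache|total_cache) ' "$cg/memory.stat"

rmdir "$cg" || true
rm -f /tmp/bigfile
```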
It'd be nice to create a reproducible case for this. I'll see if I can come up with something using chicken-gun.
I was under the impression that `overlay` could potentially be part of the issue, but it seems like that's not the case: we just hit the same kind of CPU usage curve with Ubuntu-based machines (thus, not `overlay`-related):
Maybe some good news!

As a way of trying to get rid of some of the iptables memory allocation problems (see https://github.com/concourse/concourse/issues/3127), which could be just a symptom of memory pressure, I tried raising `/proc/sys/vm/min_free_kbytes` to ensure that we always leave some extra room for the kernel, and the behavior seems better: not only are those iptables problems gone, but overall we seem to have a better CPU usage distribution.
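For reference, the tuning boils down to something like the following; the value below is just a placeholder, not the one actually used here:

```bash
# Check what the kernel computed by default.
cat /proc/sys/vm/min_free_kbytes

# Raise the reserve so the kernel keeps more free memory around under
# pressure (the iptables allocation failures could be a symptom of that
# pressure). 1048576 (1 GiB) is only a placeholder value.
sysctl -w vm.min_free_kbytes=1048576

# Persist it across reboots.
echo 'vm.min_free_kbytes = 1048576' > /etc/sysctl.d/99-min-free-kbytes.conf
sysctl --system
```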
I'll perform some more runs tomorrow and see how it goes.
`sys` CPU continued consistently low compared to how it was before the tuning; the iptables mem allocation problem still happens, though.
Actually... this didn't really help much 🤷
It's quite unfortunate that when tracing `unzip` we get the following:
Looking at https://lore.kernel.org/lkml/153063036670.1818.16010062622751502.stgit@localhost.localdomain/, which suggests that 4.19+ ships with an improvement for the whole `list_lru_count_one` path that's causing such high sys CPU utilization, and knowing that we'd like to try the `iptables` problem out on 4.19+ anyway, I'll go ahead and have `hush-house` workers run on top of Ubuntu Disco to see how that compares.
That'd still not be a 100%-valid comparison as we'd not be running on top of a GKE-managed instance type (PKS wouldn't help us here either as the stemcells are 4.14-based just like the Ubuntu family from GKE, as well as the COS-family - see https://cloud.google.com/container-optimized-os/docs/release-notes#current_active_releases).
Also, this is quite interesting:
Every mount introduces a new shrinker to the system, so it's easy to see a system with 100 or even 1000 shrinkers.
ooooh yeah, `strabo` has "just" 100s of those, not counting all of those other 100s of containers that are usually already there in the machine
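A rough way to ballpark that on a worker (each mounted superblock registers its own shrinker, so the mount count is a decent lower bound):

```bash
# Every mounted superblock registers its own shrinker, so the mount count on a
# worker is a rough lower bound for how many shrinkers reclaim has to walk.
wc -l < /proc/mounts

# Just the overlay mounts, where the per-container/volume filesystems show up.
awk '$3 == "overlay"' /proc/mounts | wc -l
```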
Knowing that, we can also prepare something with `chicken-gun` 🤔
side note: if this really helps, the next step will be seeing if it's possible to get `strabo` to do its thing with configurable concurrency
(not completely related, but something else to keep an eye on: `cpu.cfs_quota`, https://lkml.org/lkml/2019/5/17/581)
I spent some time trying to see how a large number of cgroups affects the performance of reading files from disk, and it turns out the impact is quite significant:
no extra cgroups:
0.00user 9.89system 0:13.25elapsed 74%CPU (0avgtext+0avgdata 3336maxresident)k
10486016inputs+8outputs (1major+130minor)pagefaults 0swaps
100 cgroups:
0.02user 18.96system 0:22.10elapsed 85%CPU (0avgtext+0avgdata 3220maxresident)k
10486152inputs+8outputs (2major+131minor)pagefaults 0swaps
300 cgroups:
0.06user 80.90system 1:24.64elapsed 95%CPU (0avgtext+0avgdata 3360maxresident)k
10486024inputs+8outputs (1major+132minor)pagefaults 0swaps
which seems to be quite on point with the comments in the lkml articles regarding the quadratic behavior of the shrinkers 🤔
The reproducible consists of (a condensed sketch follows the list):

- having a bunch of large files to read (see https://github.com/cirocosta/chicken-gun/blob/3c5fe02045e784c7da47575a415bb98a6bbf9e71/scenarios/page-cache/setup.sh#L11-L16)
- setting up a number of cgroups (see https://github.com/cirocosta/chicken-gun/blob/3c5fe02045e784c7da47575a415bb98a6bbf9e71/scenarios/page-cache/containers.sh#L5-L10), then
- reading all of those files (see https://github.com/cirocosta/chicken-gun/blob/3c5fe02045e784c7da47575a415bb98a6bbf9e71/src/page_cache.rs)
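Condensed, it boils down to roughly the following (a sketch of what those linked scripts do, with placeholder counts, paths and sizes, not a verbatim copy):

```bash
#!/usr/bin/env bash
# Sketch (run as root) of what the linked scripts roughly do: N memory cgroups
# plus a timed, cold-cache read of a few large files. Counts, paths and sizes
# are placeholders.
set -eu

n_cgroups=${1:-300}
files_dir=/tmp/page-cache-test

# 1. a bunch of large files to read (more data than fits in RAM keeps global
#    reclaim, and therefore the shrinker walks, busy during the read)
mkdir -p "$files_dir"
for i in 1 2 3 4 5; do
  dd if=/dev/urandom of="$files_dir/file-$i" bs=1M count=1024 status=none
done

# 2. a number of cgroups, each one more thing for reclaim to iterate over
for i in $(seq "$n_cgroups"); do
  mkdir -p "/sys/fs/cgroup/memory/cg-$i"
done

# 3. drop caches and time reading all of those files back
echo 3 > /proc/sys/vm/drop_caches
/usr/bin/time cat "$files_dir"/file-* > /dev/null

# cleanup
for i in $(seq "$n_cgroups"); do rmdir "/sys/fs/cgroup/memory/cg-$i"; done
rm -rf "$files_dir"
```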
Hey,
Looking at the CPU utilization of a node that runs the worker that got assigned the final step of the `strabo` pipeline, we can see that the CPU utilization gets to a quite interesting place:

From the perspective of the container, we can see similar results, indicating that at least a great part of that usage indeed comes from processes within that worker cgroup:
Doing a `pidstat` on the pid of `concourse worker` inside the container:

To help troubleshoot that, it might be useful to generate flamegraphs from the perspective of the host to see from the traces where it's spending all of that time.
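A possible way to do that from the host (assuming `perf` is installed and Brendan Gregg's FlameGraph scripts are available on the node):

```bash
# Sketch: host-side CPU flamegraph of the worker node. Assumes perf is
# installed and Brendan Gregg's FlameGraph scripts are cloned on the node.
perf record -F 99 -a -g -- sleep 60
perf script > out.perf

git clone https://github.com/brendangregg/FlameGraph
./FlameGraph/stackcollapse-perf.pl out.perf > out.folded
./FlameGraph/flamegraph.pl out.folded > cpu-flamegraph.svg
```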