JuliaLang / julia

`jl_effective_threads` not aware of CFS cpu limit (inside Docker container) #46226

Open Moelf opened 1 year ago

Moelf commented 1 year ago

cf. https://stackoverflow.com/questions/65551215/get-docker-cpu-memory-limit-inside-container

Should "auto" be aware of the fact that the container has a limited CPU? For example, if we see

/ # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
400000
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000

we should set the number of threads to 4 (quota/period = 400000/100000) instead of, in my case, 128.
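
A minimal sketch of that computation in Julia, reading the cgroup v1 paths shown above (illustration only, not what jl_effective_threads currently does):

# Minimal sketch: derive the thread count from the cgroup v1 files shown above.
quota  = parse(Int, strip(read("/sys/fs/cgroup/cpu/cpu.cfs_quota_us", String)))
period = parse(Int, strip(read("/sys/fs/cgroup/cpu/cpu.cfs_period_us", String)))
nthreads = quota ÷ period    # 400000 ÷ 100000 == 4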

Moelf commented 1 year ago

bump

Moelf commented 1 year ago

this was fixed in:

Moelf commented 1 year ago

nvm, @gbaraldi tried checking with https://github.com/rust-lang/rust/pull/92697/

using this Rust snippet:

fn main() {
    println!("{:?}", std::thread::available_parallelism()); // here prints Ok(32)
}

and I get 32, as opposed to the 48 that Julia reports, given:

[18:57] jiling-notebook-1:~/research_notebooks $ cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
3200000
[18:57] jiling-notebook-1:~/research_notebooks $ cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000

Moelf commented 1 year ago

Maybe it's better to follow .NET: https://github.com/dotnet/runtime/issues/11933#issuecomment-475032889

In this case Rust is doing something wrong, so I'm closing the issue again...

gbaraldi commented 1 year ago

For reference, Java also follows the quota/period variables. So some discussion might be warranted, but I don't think it's a big priority.

Seelengrab commented 1 year ago

For posterity's sake, this is what man 5 systemd.resource-control has to say on my machine:

    CPUQuota=
       Assign the specified CPU time quota to the processes executed. Takes a percentage value, suffixed
       with "%". The percentage specifies how much CPU time the unit shall get at maximum, relative to the
       total CPU time available on one CPU. Use values > 100% for allotting CPU time on more than one CPU.
       This controls the "cpu.max" attribute on the unified control group hierarchy and "cpu.cfs_quota_us"
       on legacy. For details about these control group attributes, see Control Groups v2[2] and CFS
       Bandwidth Control[4]. Setting CPUQuota= to an empty value unsets the quota.

       Example: CPUQuota=20% ensures that the executed processes will never get more than 20% CPU time on
       one CPU.

    CPUQuotaPeriodSec=
       Assign the duration over which the CPU time quota specified by CPUQuota= is measured. Takes a time
       duration value in seconds, with an optional suffix such as "ms" for milliseconds (or "s" for
       seconds.) The default setting is 100ms. The period is clamped to the range supported by the kernel,
       which is [1ms, 1000ms]. Additionally, the period is adjusted up so that the quota interval is also at
       least 1ms. Setting CPUQuotaPeriodSec= to an empty value resets it to the default.

       This controls the second field of "cpu.max" attribute on the unified control group hierarchy and
       "cpu.cfs_period_us" on legacy. For details about these control group attributes, see Control Groups
       v2[2] and CFS Scheduler[3].

       Example: CPUQuotaPeriodSec=10ms to request that the CPU quota is measured in periods of 10ms.

So if we calculate an example with a period of 100ms and a quota of 3200ms (the values above), on a machine with 48 physical cores, we can choose to run either 32 threads at full utilization each, or 48 threads at about 66.6% utilization each.

The reason for these numbers is simple: the quota is measured in CPU time while the period is 100ms of wall time, so in 100ms of wall time a 48-core CPU has a budget of 4800ms of CPU time. With the above setting we get up to 3200ms of that budget, which we can share across cores as we see fit. Hence, utilizing 32 cores completely already spends the 3200ms of budget, and similarly, utilizing 48 cores at 66.6% also spends 3200ms.
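
The same arithmetic as a small Julia sketch, using the quota/period values reported earlier in the thread (variable names are just for illustration):

quota_us  = 3_200_000             # cpu.cfs_quota_us, as reported above
period_us = 100_000               # cpu.cfs_period_us
ncores    = 48                    # physical cores on the machine

budget_us = ncores * period_us    # 4_800_000 us of CPU time per 100 ms of wall time
quota_us / period_us              # 32.0   -> enough quota to keep 32 cores fully busy
quota_us / budget_us              # 0.6666 -> or all 48 cores at ~66.6% each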

Ultimately, whether the difference in thread count is beneficial for performance depends on the workload. Compute-bound numerical work will see a benefit from running fewer threads at 100% utilization each, due to caching effects. On the other hand, a web server will likely see a benefit from running at 66% utilization, since there is likely going to be a lot of I/O waiting per thread/task, so running more threads that occasionally block can mean more work getting done in the same amount of time, by taking advantage of more physical cores.

So if there is something to be done here, I'd say a switch between "try to utilize cores fully" and "give me the number of physical threads available" would be the most appropriate, although I question its usefulness: we can already set the number of threads explicitly, and if you're able to set the CPU quota, you should also be in a position to set Julia's startup thread count. The "safe" default, though, is to follow the formula ceil(Int, quota/period) (since a ratio of e.g. 1.7 means you need more than one thread for full utilization), given a consistent ability to query those values; see the sketch below.
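
A sketch of what such a default could look like, reading the cgroup attributes quoted above (both the v2 "cpu.max" attribute and the legacy v1 files); the function name and fallback behaviour are illustrative assumptions, not Julia's actual implementation:

function cfs_thread_default(ncpu::Int)
    # cgroup v2 (unified hierarchy): "cpu.max" holds "<quota> <period>" or "max <period>"
    v2 = "/sys/fs/cgroup/cpu.max"
    # cgroup v1 (legacy hierarchy): quota and period live in separate files
    v1_quota  = "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"
    v1_period = "/sys/fs/cgroup/cpu/cpu.cfs_period_us"
    if isfile(v2)
        q, p = split(strip(read(v2, String)))
        q == "max" && return ncpu                    # "max" means no limit
        return min(ncpu, ceil(Int, parse(Int, q) / parse(Int, p)))
    elseif isfile(v1_quota) && isfile(v1_period)
        q = parse(Int, strip(read(v1_quota, String)))
        p = parse(Int, strip(read(v1_period, String)))
        q <= 0 && return ncpu                        # -1 means no limit
        return min(ncpu, ceil(Int, q / p))           # e.g. 1.7 -> 2 threads
    end
    return ncpu                                      # no cgroup limit found
end

The min against ncpu guards against a configured quota that exceeds the machine's actual core count.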

vtjnash commented 5 months ago

https://github.com/libuv/libuv/pull/4278

Moelf commented 2 months ago

Looks like upstream has merged it; what should we do here?

Seelengrab commented 2 months ago

Looks like libuv now reports the correct available parallelism through uv_available_parallelism, but this doesn't seem to be used by us (yet?). With that in mind, I'd still just default to ceil(Int, uv_available_parallelism()).
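
For reference, a minimal sketch of querying that from Julia via a direct ccall into the bundled libuv; this assumes the uv_available_parallelism symbol (declared unsigned int uv_available_parallelism(void) in libuv >= 1.44) is visible in the running Julia process, as other uv_* symbols are:

# Ask libuv for its view of available parallelism (cgroup-aware after the
# PR linked above) and compare it with what Julia currently uses.
available = Int(ccall(:uv_available_parallelism, Cuint, ()))
println("uv_available_parallelism: ", available)
println("Sys.CPU_THREADS:          ", Sys.CPU_THREADS)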

Moelf commented 2 months ago

Would a PR switching to that potentially be accepted?