scala / scala-dev

Scala 2 team issues. Not for user-facing bugs or directly actionable user-facing improvements. For build/test/infra and for longer-term planning and idea tracking. Our bug tracker is at https://github.com/scala/bug/issues

Configure benchmark machine for maximal stability #338

Closed lrytz closed 7 years ago

lrytz commented 7 years ago

Disable hyper-threading
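Hyper-threading is usually disabled in the BIOS, but as a sketch, sibling threads can also be taken offline at runtime via sysfs (which CPU numbers are HT siblings depends on the topology, so check thread_siblings_list first; the CPU numbers below are examples):

# list which logical CPUs share a physical core
grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
# take one sibling of each pair offline (CPU numbers are examples)
echo 0 | sudo tee /sys/devices/system/cpu/cpu2/online
echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online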

NUMA

The machine only has a single NUMA node, so we don't need to worry about it.

http://stackoverflow.com/questions/11126093/how-do-i-know-if-my-server-has-numa

scala@scalabench:~$ sudo dmesg | grep -i numa
[    0.000000] No NUMA configuration found
scala@scalabench:~$ numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3

Use cpu sets

Install cset: sudo apt-get install cpuset. (On NUMA machines, cset also handles sets of memory nodes, but we only have one.)

Shielding
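The typical cset shield workflow looks like this (a sketch; the CPU list is an example):

# shield CPUs 2-3, moving movable kernel threads out of the shield
sudo cset shield --cpu=2,3 --kthread=on
# run a command inside the shield
sudo cset shield --exec -- java -version
# tear the shield down again
sudo cset shield --reset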

References

Use isolated CPUs

NOTE: Using isolated CPUs for running the JVM is not a good idea. The kernel doesn't do any load balancing across isolated CPUs. https://groups.google.com/forum/#!topic/mechanical-sympathy/Tkcd2I6kG-s, https://www.novell.com/support/kb/doc.php?id=7009596. Use cset instead of isolcpus and taskset.

lscpu --all --extended lists all CPUs, including logical cores (if hyper-threading is enabled). The CORE column shows the physical core.

Kernel parameter isolcpus=2,3 removes CPUs 2 and 3 from the kernel's scheduler.

Verify
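Quick checks (the sysfs file only exists on newer kernels):

# the parameter as passed at boot
grep -o 'isolcpus=[^ ]*' /proc/cmdline
# on newer kernels, the kernel's own view of the isolated CPUs
cat /sys/devices/system/cpu/isolated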

Use taskset -c 2,3 <cmd> to run cmd (and child processes) only on CPUs 2 and 3.

Questions: why does availableProcessors() report 2 even when the process is pinned to a single CPU?

$ taskset -c 0,1 ~/scala/scala-2.11.8/bin/scala -e 'println(Runtime.getRuntime().availableProcessors())'
2
$ taskset -c 1 ~/scala/scala-2.11.8/bin/scala -e 'println(Runtime.getRuntime().availableProcessors())'
2

References

Tickless / NOHZ

Disable scheduling clock interrupts on the CPUs used for benchmarking by adding the nohz_full=2,3 kernel parameter; the tick is only stopped while there is at most a single task (thread) running on the CPU.

Verify
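A quick check (the sysfs file only exists on newer kernels; otherwise grep /proc/cmdline):

cat /sys/devices/system/cpu/nohz_full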

NOTE: disabling interrupts has some effect on CPU frequency, see https://fosdem.org/2017/schedule/event/python_stable_benchmark/ (24:45). Make sure to use a fixed CPU frequency. I don't have the full picture yet, but it's something like this: the intel_pstate driver is no longer notified and does not update the CPU frequency.

(Some more advanced stuff in http://www.breakage.org/2013/11: pin some regular tasks to specific CPUs, writeback/cpumask, writeback/numa.)

References

rcu_nocbs

RCU (read-copy-update) is a kernel thread synchronization mechanism. Pending RCU callbacks may prevent a CPU from entering adaptive-tick mode (tickless with 0/1 tasks). https://www.kernel.org/doc/Documentation/timers/NO_HZ.txt

The rcu_nocbs=2,3 kernel parameter offloads RCU callback processing from CPUs 2 and 3 to kernel threads that may run on other CPUs.
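Verify (a sketch):

# the parameter as passed at boot ...
grep -o 'rcu_nocbs=[^ ]*' /proc/cmdline
# ... and offloading spawns rcuo* callback kthreads
ps -e -o comm= | grep '^rcuo'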

References

Interrupt handlers

Avoid running interrupt handlers on certain CPUs

Verify

There's an irqbalance service (systemctl status irqbalance).
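Setting IRQ affinities by hand could look like this (a sketch; bitmask 3 = CPUs 0 and 1, keeping interrupts off the benchmark CPUs 2 and 3):

# stop the balancer so it doesn't overwrite manual settings
sudo systemctl stop irqbalance
# route newly registered interrupts to CPUs 0-1
echo 3 | sudo tee /proc/irq/default_smp_affinity
# route each existing IRQ to CPUs 0-1 (writes fail for IRQs that can't be moved)
for f in /proc/irq/*/smp_affinity; do echo 3 | sudo tee "$f"; done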

References

CPU Frequency

Disable Turbo Boost

There seem to be two Linux tools for this.

Intel CPUs can run in different P-states, voltage-frequency pairs used while executing a process; C-states are idle / power-saving states. The intel_pstate driver handles P-state selection.

The intel_pstate=disable kernel argument disables the intel_pstate driver and uses acpi-cpufreq instead (see the Red Hat reference).

CPU Info

CPUfreq Governors

Set a specific frequency:
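A sketch, assuming the acpi-cpufreq driver (intel_pstate doesn't offer the userspace governor):

# pin CPU 2 to 3.4 GHz via the userspace governor
sudo cpupower -c 2 frequency-set -g userspace
sudo cpupower -c 2 frequency-set -f 3400MHz
# confirm
cpupower -c 2 frequency-info | grep 'current CPU frequency'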

The intel_pstate driver has /sys/devices/system/cpu/intel_pstate/min_perf_pct and max_perf_pct, maybe these can be used if we stick with that driver?

References

Disable git gc

https://stackoverflow.com/questions/28092485/how-to-prevent-garbage-collection-in-git
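Per that answer, a single setting should do it:

# never trigger automatic gc in the benchmark checkouts
git config --global gc.auto 0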

Disable hpet

Suggested by Dmitry, I haven't found any other references.

hpet is a hardware timer with a frequency of at least 10 MHz (higher than older timer circuits).

Change using a kernel parameter clocksource=acpi_pm

Explanation of clock sources: https://access.redhat.com/solutions/18627
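Quick check of the active clock source:

cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource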

References

Ramdisk

tmpfs vs ramfs

Added to /etc/fstab
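The entry could look like this (mount point and size are examples, not necessarily what's on the machine):

# tmpfs ramdisk for the benchmark working directory
tmpfs  /mnt/ramdisk  tmpfs  size=4g,mode=1777  0  0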

Disable "transparent hugepages"

There are some recommendations out there to disable "transparent hugepages", mostly for database servers.
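The usual way to turn them off at runtime (or pass transparent_hugepage=never as a kernel parameter):

echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag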

Disable khungtaskd

Probably not worth disabling: it only runs every 120 seconds and merely detects hung tasks.

Cron jobs

https://help.ubuntu.com/community/CronHowto

Disable / enable cron

Disable / enable at
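On Ubuntu both are plain systemd services, so something like:

# stop cron and at before a benchmark run ...
sudo systemctl stop cron atd
# ... and restart them afterwards
sudo systemctl start cron atd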

Run under perf stat

Suggestion by Dmitry: discard benchmark runs with too many cpu-migrations / context-switches. We would need to keep track of expected values.
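A minimal sketch of what that could look like (context-switches and cpu-migrations are standard perf software events; java -version stands in for the benchmark JVM):

# count scheduling noise while the JVM runs
perf stat -e context-switches,cpu-migrations java -version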

References

Build custom kernel

Ah well, I probably have to figure out some more details on how to do this correctly.

apt-get install linux-source-4.9
tar xaf /usr/src/linux-source-4.9.tar.xz

apt-get install build-essential fakeroot libncurses5-dev

cd linux-source-4.9
# start from the config of the currently running kernel
cp /boot/config-4.9.0-0.bpo.2-amd64 .config
make menuconfig
  - General setup->Timers subsystem->Timer tick handling -> Full dynticks system (tickless)
  - Up one level -> Full dynticks system on all CPUs by default (except CPU 0)
  - General setup->Local Version, enter a simple string
nano .config
  - comment out CONFIG_SYSTEM_TRUSTED_KEYS
    https://unix.stackexchange.com/questions/293642/attempting-to-compile-any-kernel-yields-a-certification-error

# build the kernel as installable Debian packages
make deb-pkg

cd ..
sudo dpkg -i linux-image-4.9.18_4.9.18-1_amd64.deb

Scripting all of that

It seems that python3's "perf" package will do most configurations:

pip3 install perf
python3 -m perf system show
python3 -m perf system tune
python3 -m perf system reset

Important: check all settings before starting a benchmark.

Check load

Find a way to ensure that the benchmark machine is idle before starting a job.
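A minimal sketch (the threshold is an arbitrary assumption):

# abort if the 1-minute load average suggests the machine isn't idle
load=$(cut -d' ' -f1 /proc/loadavg)
if awk -v l="$load" 'BEGIN { exit !(l > 0.1) }'; then
  echo "machine busy (load $load), not starting benchmark" >&2
  exit 1
fi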

Machine Specs

NX236-S2HD (http://www.nixsys.com/nx236-s2hd.html)

retronym commented 7 years ago

I seem to remember someone (@adriaanm?) suggesting our script could trigger a reboot and then run the actual benchmark during the shutdown or startup sequence, at a point when superfluous services aren't running and when other users can't log in.

We could still use the Jenkins SSH Slave functionality to set all this up, but we'd have to add a custom build step to poll for completion.

lrytz commented 7 years ago

I could imagine that during startup / shutdown, or right after startup, the system schedules maintenance tasks and is not at its most stable either.

We should definitely check whether it makes a difference if we don't use a Jenkins slave / SSH connection.

DarkDimius commented 7 years ago

Several more suggestions based on my experience:

DarkDimius commented 7 years ago

Since I've switched to SSDs: they can have periodic maintenance that may slow things down. Because of this I now use a ramdisk for the entire OS during benchmarking. I don't think you need to go as extreme as I did, but moving the working directory & ivy cache into a ramdisk may be a good idea.

DarkDimius commented 7 years ago

One more idea that I came up with but didn't have time to try out: always run the entire VM under perf stat java ... and disqualify the tests if there have been too many cpu-migrations / context-switches.

retronym commented 7 years ago

I've added a script (~/bin/setup-benchmark.sh), run before the benchmarks (with sudo), that:

The last part appears to be ignored, though; running:

% watch grep \"cpu MHz\" /proc/cpuinfo

shows the frequencies scaling back and forth between 1200 and 2400 MHz.

I'm still seeing larger-than-expected variance in the runs.

Given: https://serverfault.com/questions/716317/linux-why-does-the-cpu-frequency-fluctuate-when-using-the-performance-governor https://wiki.archlinux.org/index.php/CPU_frequency_scaling

Another step might be to disable the pstate driver, but this gets a little beyond my comfort zone on a box that I don't have a keyboard and monitor attached to...

retronym commented 7 years ago

This appears to be a pretty comprehensive guide to setting up stable benchmark environments:

https://perf.readthedocs.io/en/latest/system.html#system https://haypo.github.io/journey-to-stable-benchmark-system.html

retronym commented 7 years ago

Also interesting, Virtual Machine Warmup Blows Hot and Cold

In order to control as many of these as possible, we wrote Krun, a new benchmark runner. Krun itself is a ‘supervisor’ which, given a configuration file specifying VMs, benchmarks (etc.) configures a Linux or OpenBSD system, runs benchmarks, and collects the results.

Krun uses cpufreq-set to set the CPU governor to performance mode (i.e. the highest non-overclocked frequency possible). To prevent the kernel overriding this setting, Krun verifies that the user has disabled Intel P-state support in the kernel by passing intel_pstate=disable as a kernel argument.

retronym commented 7 years ago

Therefore, before each process execution (including before the first), Krun reboots the system, ensuring that the benchmark runs with the machine in a (largely) known state. After each reboot, Krun is executed by the init subsystem; Krun then pauses for 3 minutes to allow the system to fully initialise; calls sync (to flush any remaining files to disk) followed by a 30 second wait; before finally running the next process execution.

lrytz commented 7 years ago

I did a few experiments with isolcpus and taskset. I ran hot -p source=scalap -wi 20 -i 10 -f 1 across various configurations.

Without isolcpus:

One possible explanation could be that GC causes jitter when there's only one processor available, as it cannot run in parallel.

With isolcpus=1-3

With isolcpus=2,3

The large variances when using taskset on the isolated CPUs are surprising.

lrytz commented 7 years ago

I added -prof perfnorm to the jmh command for the isolcpus=2,3 case.

lrytz commented 7 years ago

It makes sense now: when using taskset to move a process onto an isolated CPU, the kernel doesn't do any load balancing across CPUs. https://groups.google.com/forum/#!topic/mechanical-sympathy/Tkcd2I6kG-s, https://www.novell.com/support/kb/doc.php?id=7009596. I started reading about cpuset and will experiment.

lrytz commented 7 years ago

Added a script that checks the machine state and sets some of the configurations discussed in the main description of this issue (https://github.com/scala/compiler-benchmark/blob/master/scripts/benv)

I ran some experiments in various configurations:

$ sbt 'export compilation/jmh:fullClasspath' | tail -1 | tee compilation/cp
$ cd compilation
$ java -cp $(cat cp) org.openjdk.jmh.Main HotScalacBenchmark -p source=scalap

I didn't do multiple runs to see how much the error values vary. The error numbers are probably too close together / jittery to make a meaningful comparison, but I'm trying anyway.

Config                                                           Result              Error/Score*1000
clean                                                            1242.208 ± 5.331    4.29
clean, through sbt (sbt 'hot -p source=scalap')                  1256.471 ± 4.734    3.77
some services stopped (atd, acpid, dbus, irqbalance, rsyslogd)   1235.294 ± 3.799    3.08
CPU frequency fixed to 3400 MHz                                  1259.373 ± 5.872    4.66
CPU frequency fixed to 2000 MHz                                  2089.546 ± 9.279    4.44
CPU shield (1-3) (*)                                             1274.204 ± 5.806    4.56
interrupt affinities set to 1                                    1242.420 ± 4.473    3.60

(*) sudo cset shield --exec sudo -- -u scala java -cp $(cat cp) org.openjdk.jmh.Main HotScalacBenchmark -p source=scalap

In combination

Again, the error numbers are not stable enough to make a useful conclusion.

lrytz commented 7 years ago

For comparison I ran a simple benchmark that creates a new Global (https://github.com/scala/compiler-benchmark/compare/master...lrytz:newGlobal?expand=1).

sbt 'compilation/jmh:run NewGlobalBenchmark -wi 5 -i 10 -f 3'

One thing that jumps out is that the variances are much more stable between iterations than what we're seeing when running the entire compiler. In the compiler benchmark we always see things like:

Iteration   1: 1253.048 ±(99.9%) 7.425 ms/op
Iteration   2: 1243.611 ±(99.9%) 38.322 ms/op
Iteration   3: 1232.193 ±(99.9%) 26.320 ms/op
...

For NewGlobalBenchmark,

[info] Iteration   1: 187.737 ±(99.9%) 1.405 us/op
[info] Iteration   2: 187.800 ±(99.9%) 1.408 us/op
[info] Iteration   3: 187.975 ±(99.9%) 1.648 us/op
[info] Iteration   4: 187.794 ±(99.9%) 1.381 us/op
...

Maybe the IO has an impact here. I'll experiment a bit with -Ystop-after and with using a ramdisk.

lrytz commented 7 years ago

Actually, of course, the number of benchmark invocations is much higher for NewGlobalBenchmark (I got 789350) compared to HotScalacBenchmark (260).

lrytz commented 7 years ago

Using a ramdisk (for the compiler-benchmark checkout, the benchmarked compiler's output directory, and the ivy cache containing all jars, including the compiler), and with the benchmark config (stop services, 3400 MHz, interrupt affinity, but without the CPU shield): 1223.810 ± 5.396 ms/op. This is a bit faster than what I saw on the SSD (1270.649 ± 5.161), but the variance is the same.

I also ran with -Ystop-before:jvm

This suggests that IO could be a cause of variance, but the ramdisk doesn't help to reduce it.