ebepho opened this issue 3 months ago
Tagging subscribers to this area: @dotnet/gc See info in area-owners.md if you want to be subscribed.
Is it certain that the write barrier is to blame? volatile writes have release semantics which I think adds an overhead on ARM architectures.
The volatile overhead is not significant enough to explain the performance regressions observed. The numbers were roughly the same with and without it.
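For reference, the "without it" comparison is just the same loop with the volatile modifier removed; a minimal sketch (not the exact code used, and assuming the same BenchmarkDotNet usings as the snippet below):

// Sketch of the non-volatile variant. The store still goes through the GC write
// barrier, but without volatile it is a plain store rather than a release store
// (stlr) on arm64.
internal class FooPlain
{
    public FooPlain x;
}

public class BenchNoVolatile
{
    [Benchmark]
    public void WB_NoVolatile()
    {
        FooPlain foo = new FooPlain();
        for (long i = 0; i < 200000000; i++)
            foo.x = foo;
    }
}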
@EgorBot -arm64 -amd -perf -commit 55987917ad1ff6ac3f3f49d32b1624196d17a27a vs 55987917ad1ff6ac3f3f49d32b1624196d17a27a
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class Bench
{
    [Benchmark]
    public void WB()
    {
        Foo foo = new Foo();
        for (long i = 0; i < 200000000; i++)
            foo.x = foo;
    }
}

internal class Foo
{
    public volatile Foo x;
}
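Side note: if running this file locally rather than through EgorBot (which, as far as I know, supplies its own harness), it also needs an entry point, e.g.:

// Minimal local entry point; BenchmarkRunner comes from the BenchmarkDotNet.Running using above.
public class Program
{
    public static void Main(string[] args) => BenchmarkRunner.Run<Bench>();
}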
9.0.100-rc.1.24406.4, M1 Pro, osx-arm64
compiled with dotnet publish -p:PublishAot=true
var foo = new Foo();
for (long i = 0; i < 200_000_000; i++) {
    foo.x = foo;
}

class Foo {
    public volatile Foo? x;
}
time ./wbcost (base)
________________________________________________________
Executed in 425.01 millis fish external
usr time 404.48 millis 0.07 millis 404.41 millis
sys time 18.57 millis 1.02 millis 17.55 millis
@EgorBot -arm64 -amd -perf -commit 55987917ad1ff6ac3f3f49d32b1624196d17a27a vs 55987917ad1ff6ac3f3f49d32b1624196d17a27a --envvars DOTNET_TieredCompilation:0 DOTNET_ReadyToRun:0
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class Bench
{
    [Benchmark]
    public void WB()
    {
        Foo foo = new Foo();
        for (long i = 0; i < 200000000; i++)
            foo.x = foo;
    }
}

internal class Foo
{
    public volatile Foo x;
}
I cannot reproduce your numbers, I suspect you might be measuring OSR pace difference (consider running with DOTNET_TieredCompilation=0).
Although, arm64 is still slower: unlike x64, which can embed 64-bit barrier constants as immediates (e.g. movabs r10, 0xF0F0F0F0F0F0F0F0), arm64 can't encode them directly (they have to be loaded from aligned data), etc. It looks like Arm64's WB performs 5 memory loads (wbs_sw_ww_table, wbs_ephemeral_low, wbs_ephemeral_high, wbs_card_table + the card table value load) while x64 has just one. Annotated asm: arm64 vs x64.
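For context, a rough sketch of the logic both barriers implement (simplified and illustrative only; the g_* names are stand-ins for the runtime globals, and the software write-watch table consulted in some configurations is omitted):

// Simplified sketch of the checked write-barrier logic, not the actual runtime code.
// On x64 these globals are patched into the barrier as immediates, so the only extra
// memory access is the card-byte check; the arm64 barrier instead loads them from the
// wbs_* data slots, which is where the additional loads come from.
static unsafe class WriteBarrierSketch
{
    static nuint g_ephemeral_low, g_ephemeral_high; // bounds of the ephemeral generations
    static byte* g_card_table;                      // one byte per 2KB "card" of heap

    static void WriteBarrier(nuint dstAddr, nuint srcAddr)
    {
        *(nuint*)dstAddr = srcAddr;                  // the reference store itself

        // Only stores of ephemeral objects need to dirty a card.
        if (srcAddr < g_ephemeral_low || srcAddr >= g_ephemeral_high)
            return;

        byte* card = g_card_table + (dstAddr >> 11); // card index = address / 2048
        if (*card != 0xFF)                           // the "is card table already updated" check
            *card = 0xFF;                            // mark the card dirty
    }
}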
Also, we might want to have a more complicated benchmark where objects aren't ephemeral as well?
@jkotas @cshung If you're not busy - do you have any idea why the "is card table already updated" check (the ldrb load of the card table value, wbs_card_table) is so expensive on arm64? Can it be some false sharing etc?
Another thing I noticed is that the arm64 WB is so expensive that we can add yet another branch ("is the object reference null? Exit") and the regression will be <1% (while giving us a 2X improvement when we actually write null).
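For illustration, a variant (my own, not from the thread) that would hit that null path: the stored value is null at run time, but the JIT can't prove it, so the barrier is still emitted:

// Hypothetical benchmark variant, reusing the Foo class from the snippet above.
// _maybeNull stays null, yet the write barrier runs on every iteration; an early
// "src == null" exit in the barrier would roughly double the speed of this loop.
public class BenchNull
{
    private Foo _maybeNull;

    [Benchmark]
    public void WB_Null()
    {
        Foo foo = new Foo();
        for (long i = 0; i < 200000000; i++)
            foo.x = _maybeNull;
    }
}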
Also, we might want to have a more complicated benchmark where objects aren't ephemeral as well?
Yes, we should totally understand the performance of the write barrier function under other execution paths - for example, when there is a cache miss, or when we branch away because of heap ranges, generations, and so on. The initial benchmark was designed to be easy to understand: I wanted to make sure the cache always hits and we read exactly the same location, so that we don't run into any cache issues. As we can see, even in this trivial scenario the data shows surprising results; making it more varied will only make it harder to interpret.
can it be some false sharing etc?
I doubt it is false sharing. Since we aren't allocating, the GC should not be running, and no other thread should be accessing the card table, so the core should have exclusive access to the cache entry.
Besides the obvious fact that this "slow load" uses a different instruction, it is also loading from a computed address. Does the ARM architecture do anything special with respect to loading from a hard-coded address? I don't know.
I wonder if tools like this can give us more insight on what is going on. https://learn.arm.com/learning-paths/servers-and-cloud-computing/top-down-n1/analysis-1/
My bet would be sampling bias or some micro-architecture issue. I think it would be best to ask Arm hw engineers to replicate this on a simulator and tell us what's actually going on.
Description
We observed a significant performance disparity between the Arm64 and x64 write barriers. When running a program without the write barrier, Arm64 was 3x slower than x64. However, with the write barrier enabled, Arm64 became 10x slower. This suggests that Arm64's handling of the write barrier is less optimized compared to x64.
Data
Performance Counter Stats without the Write Barrier
To test the performance of the write barrier, we used Crank to run a simple program 10 times on the two machines. Notice that when we do not access the write barrier, it's approximately 3x slower on the Arm64 machine.
This is a simple program that does not access the write barrier; we measured its performance using crank:
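(The listing itself is not reproduced here; purely as an illustration of the distinction being measured, a loop that only writes a value-type field emits no write barrier:)

// Illustrative sketch only, not the program actually used in the measurement:
// storing a long is a plain store, so no GC write barrier is involved.
public class NoBarrier
{
    private long _x;

    public static void Main()
    {
        var o = new NoBarrier();
        for (long i = 0; i < 200_000_000; i++)
            o._x = i;
    }
}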
Table 1: Average Performance Counter Stats without the write barrier.
Performance Counter Stats with the Write Barrier
When we do access the write barrier, performance degrades further, with the Arm64 machine becoming 10x slower.
This is a simple program that does access the write barrier; we measured its performance using crank:
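(Again illustrative only; the write-barrier version differs in storing an object reference, as in the Foo benchmark used in the comments above:)

// Illustrative sketch only: the reference store goes through the GC write barrier
// on every iteration.
public class WithBarrier
{
    private WithBarrier _x;

    public static void Main()
    {
        var o = new WithBarrier();
        for (long i = 0; i < 200_000_000; i++)
            o._x = o;
    }
}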
Table 2: Performance Counter Stats with the write barrier.