AngelicosPhosphoros opened this issue 3 years ago
The Rust backend is not in a position to generate phis or code that uses SSA values directly in general.
Note that the optimized IR for the Rust code is also SSA and is also exploiting phis. When comparing optimized IR there is ultimately just one difference between the eq0 case and the Clang IR – Rust code has a bb2
block that acts as a phi springboard. That's what LLVM is ultimately unable to see through, it seems.
Would it be worth investigating improvements to LLVM for this, then? That would help Rust here but also help Clang if it ever generates similar things.
There are a couple of avenues we could explore here, yes. We probably won't emit ideal code from Rust, but we could look into emitting something that LLVM does not trip over.
Adding the ability for the x86 backend to see past the offending pattern, as seen here, might be beneficial in a more general way. It's LLVM itself that optimizes to this particular "springboard" block pattern, so adjusting some of LLVM's optimization passes to produce something more canonical could be an option as well.
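For reference, the "springboard" shape is roughly an empty block that exists only to feed a phi. A hedged LLVM-IR-style sketch (block and value names are illustrative, not taken from the actual IR):

```llvm
; hypothetical shape of the pattern LLVM cannot see through
bb1:
  %c = icmp eq i32 %a4, %b4
  br i1 %c, label %bb2, label %end

bb2:                          ; "springboard": does nothing but forward to %end
  br label %end

end:
  %r = phi i1 [ false, %bb1 ], [ true, %bb2 ]
  ret i1 %r
```

Clang's output for the same logic routes every predecessor directly into the phi, without the intermediate block.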
Does this only fail to vectorize on x86(_64)? Should we check other backends?
Just adding some info: Rust 1.37 vectorizes just fine; Rust 1.38 and above produce the bug the OP describes: https://rust.godbolt.org/z/s1vnzdThT
@rijenkii The upgrade to LLVM 9 seems like the most likely culprit between those two releases.
@pthariensflame
Does this only fail to vectorize on x86(_64)? Should we check other backends?
No, ARM fails to remove branches too. godbolt
The Rust backend is not in a position to generate phis or code that uses SSA values directly in general.
Understood. But my second suggested algorithm doesn't use these instructions.
I wrote a benchmark
Output:
cmp/eq0: Self/0 time: [6.7986 us 6.8072 us 6.8151 us]
Found 13 outliers among 100 measurements (13.00%)
1 (1.00%) low severe
2 (2.00%) low mild
9 (9.00%) high mild
1 (1.00%) high severe
cmp/eq0: Random field/0 time: [19.259 us 19.285 us 19.313 us]
Found 11 outliers among 100 measurements (11.00%)
1 (1.00%) low severe
3 (3.00%) low mild
7 (7.00%) high mild
cmp/eq0: Last field/0 time: [6.7979 us 6.8041 us 6.8110 us]
Found 13 outliers among 100 measurements (13.00%)
1 (1.00%) low severe
11 (11.00%) high mild
1 (1.00%) high severe
cmp/eq2: Self/0 time: [6.7886 us 6.7941 us 6.7999 us]
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
cmp/eq2: Random field/0 time: [15.222 us 15.237 us 15.253 us]
Found 7 outliers among 100 measurements (7.00%)
1 (1.00%) low severe
5 (5.00%) high mild
1 (1.00%) high severe
cmp/eq2: Last field/0 time: [6.7563 us 6.7688 us 6.7820 us]
Found 16 outliers among 100 measurements (16.00%)
10 (10.00%) low mild
5 (5.00%) high mild
1 (1.00%) high severe
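The benchmark source itself isn't quoted above (the original used criterion). A minimal std-only harness in the same spirit, with all names and the workload shape being my own assumptions, might look like:

```rust
use std::hint::black_box;
use std::time::Instant;

#[derive(Clone, Copy)]
pub struct Blueprint {
    pub fuel_tank_size: u32,
    pub payload: u32,
    pub wheel_diameter: u32,
    pub wheel_width: u32,
    pub storage: u32,
}

// `&&`-chain comparison, like #[derive(PartialEq)].
fn eq0(a: &Blueprint, b: &Blueprint) -> bool {
    a.fuel_tank_size == b.fuel_tank_size
        && a.payload == b.payload
        && a.wheel_diameter == b.wheel_diameter
        && a.wheel_width == b.wheel_width
        && a.storage == b.storage
}

// Early-return comparison that vectorizes well.
fn eq2(a: &Blueprint, b: &Blueprint) -> bool {
    if a.fuel_tank_size != b.fuel_tank_size { return false; }
    if a.payload != b.payload { return false; }
    if a.wheel_diameter != b.wheel_diameter { return false; }
    if a.wheel_width != b.wheel_width { return false; }
    if a.storage != b.storage { return false; }
    true
}

fn main() {
    // "Random field" case: each pair differs in a pseudo-randomly chosen field,
    // so the branch predictor cannot learn which comparison fails.
    let base = Blueprint { fuel_tank_size: 1, payload: 2, wheel_diameter: 3, wheel_width: 4, storage: 5 };
    let mut seed = 0x9E37_79B9u32;
    let pairs: Vec<(Blueprint, Blueprint)> = (0..10_000)
        .map(|_| {
            seed = seed.wrapping_mul(1_664_525).wrapping_add(1_013_904_223); // simple LCG
            let mut other = base;
            match seed % 5 {
                0 => other.fuel_tank_size += 1,
                1 => other.payload += 1,
                2 => other.wheel_diameter += 1,
                3 => other.wheel_width += 1,
                _ => other.storage += 1,
            }
            (base, other)
        })
        .collect();

    for (name, f) in [("eq0", eq0 as fn(&Blueprint, &Blueprint) -> bool), ("eq2", eq2)] {
        let start = Instant::now();
        let mut hits = 0u32;
        for (a, b) in &pairs {
            hits += f(black_box(a), black_box(b)) as u32;
        }
        black_box(hits);
        println!("{name}: {:?}", start.elapsed());
    }
}
```

Timings from such a harness are noisier than criterion's, but the relative gap between the branchy and SIMD versions shows up the same way.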
As the criterion output shows, the SIMD version performs better when the branch predictor cannot predict which field differs (the "Random field" case), while matching the other versions' speed in the remaining cases. The difference is also large, about 26%, and I think it matters even more for structs with more fields.
@rijenkii #62993 is probably the cause of why the optimization disappeared.
In 1.33 (the same version where the SIMD optimizations appear), the following code stops optimizing out the unnecessary modulo operations.
pub fn is_leap_year1(year: i32) -> bool {
let div_4 = year % 4 == 0;
let div_100 = year % 100 == 0;
let div_400 = year % 400 == 0;
div_4 && !(div_100 && !div_400)
}
The fix to revert the optimization in 1.33 was added for 1.38.
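For context on why those modulo operations are "unnecessary": divisibility by 400 implies divisibility by 100, which implies divisibility by 4, so the expression admits a short-circuiting rewrite. A small sketch (my own example, not from the thread) checking that the two forms agree:

```rust
// The form quoted above: computes all three remainders eagerly.
pub fn is_leap_year1(year: i32) -> bool {
    let div_4 = year % 4 == 0;
    let div_100 = year % 100 == 0;
    let div_400 = year % 400 == 0;
    div_4 && !(div_100 && !div_400)
}

// Equivalent predicate with short-circuiting, so `% 100` and `% 400`
// are only evaluated when the earlier checks pass.
pub fn is_leap_year2(year: i32) -> bool {
    year % 4 == 0 && (year % 100 != 0 || year % 400 == 0)
}

fn main() {
    // The two forms agree on every year in this range.
    assert!((1..=4000).all(|y| is_leap_year1(y) == is_leap_year2(y)));
    println!("equivalent on 1..=4000");
}
```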
@pedantic79 @rijenkii I reimplemented the CFG simplification from the reverted commit; I think we should benchmark the changes to decide whether they should stay. https://github.com/rust-lang/rust/pull/83663
If it turns out that #83663 costs more than it saves in terms of runtime performance, would there be value in simply changing `#[derive(PartialEq)]` to produce code that looks more like `eq2()` in @AngelicosPhosphoros's examples above? It's more verbose, but since it's generated, that doesn't seem like a major issue; if it produces faster binaries, that does sound like a win.
@mcronce Yes, that's the plan.
As the PR has been merged, I think we can close this issue. https://github.com/rust-lang/rust/pull/83663
Reopening to track the LLVM 13 regression.
@sdgoihew that depends on the fields being `Copy`, which is nice when it's possible, but typically the `derive(PartialEq)` can't know that.
cc an issue I opened recently about the derived `==`: https://github.com/rust-lang/rust/issues/117800 It seems that the rules on `&` are not strong enough to allow fully removing the short-circuiting in general, and they will stay that way because of implications on checkability.
(Rust 1.47 -- well, more accurately the LLVM optimizer in that release -- optimized it away https://rust.godbolt.org/z/zxYKM17KT, but that's poison-incorrect, so it optimizing to that today would be a bug.)
It may be worth noting that Compiler Explorer for the latest stable (1.76) shows the good SIMD version for the original example (and it seems not to be the first version with it):
<example::Blueprint as core::cmp::PartialEq>::eq:
vmovdqu xmm0, xmmword ptr [rdi]
vmovd xmm1, dword ptr [rdi + 16]
vmovd xmm2, dword ptr [rsi + 16]
vpxor xmm1, xmm1, xmm2
vpxor xmm0, xmm0, xmmword ptr [rsi]
vpor xmm0, xmm1, xmm0
vptest xmm0, xmm0
sete al
ret
almost identical to the "good" `example::eq2`, differing only in the registers picked. UPD (thanks, @sdgoihew): even the registers are the same for both code variants in the latest version.
@JarvisCraft https://rust.godbolt.org/z/5hEMzoezj
Both of the above versions compile identically for all aarch64 targets I tried, with and without SVE or NEON, because they prefer not to vectorize at all: https://rust.godbolt.org/z/ez7hhK7T7
<example::Blueprint as core::cmp::PartialEq>::eq:
ldp x8, x9, [x0]
ldp x10, x11, [x1]
ldr w12, [x0, #16]
cmp x8, x10
ldr w13, [x1, #16]
ccmp x9, x11, #0, eq
ccmp x12, x13, #0, eq
cset w0, eq
ret
Curiously, inlining seems to break the optimization. The inlined code still uses the inefficient jump-based logic: https://godbolt.org/z/YbT9nKz6c.
tl;dr: The IR currently generated for `&&` chains is too hard for LLVM to optimize and always compiles to a chain of jumps.
I started investigating this after this Reddit thread about the lack of SIMD instructions in `PartialEq` implementations.
Current Rust PartialEq
I assumed that the `PartialEq` implementation generates code like:
handwritten eq
```rust
pub struct Blueprint {
    pub fuel_tank_size: u32,
    pub payload: u32,
    pub wheel_diameter: u32,
    pub wheel_width: u32,
    pub storage: u32,
}

impl PartialEq for Blueprint {
    fn eq(&self, other: &Self) -> bool {
        (self.fuel_tank_size == other.fuel_tank_size)
            && (self.payload == other.payload)
            && (self.wheel_diameter == other.wheel_diameter)
            && (self.wheel_width == other.wheel_width)
            && (self.storage == other.storage)
    }
}
```
and it produces this asm:
```asm
<example::Blueprint as core::cmp::PartialEq>::eq:
mov eax, dword ptr [rdi]
cmp eax, dword ptr [rsi]
jne .LBB0_1
mov eax, dword ptr [rdi + 4]
cmp eax, dword ptr [rsi + 4]
jne .LBB0_1
mov eax, dword ptr [rdi + 8]
cmp eax, dword ptr [rsi + 8]
jne .LBB0_1
mov eax, dword ptr [rdi + 12]
cmp eax, dword ptr [rsi + 12]
jne .LBB0_1
mov ecx, dword ptr [rdi + 16]
mov al, 1
cmp ecx, dword ptr [rsi + 16]
jne .LBB0_1
ret
.LBB0_1:
xor eax, eax
ret
```
godbolt link for handwritten Eq
This is quite inefficient because it has 5 branches, which could probably be replaced by a few SIMD instructions.
State of things in Clang land
So I decided to look at how Clang compiles similar code (to see whether there is some LLVM issue). So I wrote this code:
clang code and asm
```cpp
#include <cstdint>
struct Blueprint{
uint32_t fuel_tank_size;
uint32_t payload;
uint32_t wheel_diameter;
uint32_t wheel_width;
uint32_t storage;
};
bool operator==(const Blueprint& th, const Blueprint& arg)noexcept{
return th.fuel_tank_size == arg.fuel_tank_size
&& th.payload == arg.payload
&& th.wheel_diameter == arg.wheel_diameter
&& th.wheel_width == arg.wheel_width
&& th.storage == arg.storage;
}
```
And the asm:
```asm
operator==(Blueprint const&, Blueprint const&): # @operator==(Blueprint const&, Blueprint const&)
movdqu xmm0, xmmword ptr [rdi]
movdqu xmm1, xmmword ptr [rsi]
pcmpeqb xmm1, xmm0
movd xmm0, dword ptr [rdi + 16] # xmm0 = mem[0],zero,zero,zero
movd xmm2, dword ptr [rsi + 16] # xmm2 = mem[0],zero,zero,zero
pcmpeqb xmm2, xmm0
pand xmm2, xmm1
pmovmskb eax, xmm2
cmp eax, 65535
sete al
ret
```
Also, godbolt with Clang code.
As you can see, Clang successfully optimizes the code to use SIMD instructions and never generates branches.
Rust variants with good asm generation
I checked other code variants in Rust.
Rust variants and ASM
```rust
pub struct Blueprint {
    pub fuel_tank_size: u32,
    pub payload: u32,
    pub wheel_diameter: u32,
    pub wheel_width: u32,
    pub storage: u32,
}

// Equivalent of #[derive(PartialEq)]
pub fn eq0(a: &Blueprint, b: &Blueprint) -> bool {
    (a.fuel_tank_size == b.fuel_tank_size)
        && (a.payload == b.payload)
        && (a.wheel_diameter == b.wheel_diameter)
        && (a.wheel_width == b.wheel_width)
        && (a.storage == b.storage)
}

// Optimizes well but changes semantics
pub fn eq1(a: &Blueprint, b: &Blueprint) -> bool {
    (a.fuel_tank_size == b.fuel_tank_size)
        & (a.payload == b.payload)
        & (a.wheel_diameter == b.wheel_diameter)
        & (a.wheel_width == b.wheel_width)
        & (a.storage == b.storage)
}

// Optimizes well and has the same semantics as PartialEq
pub fn eq2(a: &Blueprint, b: &Blueprint) -> bool {
    if a.fuel_tank_size != b.fuel_tank_size {
        return false;
    }
    if a.payload != b.payload {
        return false;
    }
    if a.wheel_diameter != b.wheel_diameter {
        return false;
    }
    if a.wheel_width != b.wheel_width {
        return false;
    }
    if a.storage != b.storage {
        return false;
    }
    true
}
```

```asm
example::eq0:
        mov     eax, dword ptr [rdi]
        cmp     eax, dword ptr [rsi]
        jne     .LBB0_1
        mov     eax, dword ptr [rdi + 4]
        cmp     eax, dword ptr [rsi + 4]
        jne     .LBB0_1
        mov     eax, dword ptr [rdi + 8]
        cmp     eax, dword ptr [rsi + 8]
        jne     .LBB0_1
        mov     eax, dword ptr [rdi + 12]
        cmp     eax, dword ptr [rsi + 12]
        jne     .LBB0_1
        mov     ecx, dword ptr [rdi + 16]
        mov     al, 1
        cmp     ecx, dword ptr [rsi + 16]
        jne     .LBB0_1
        ret
.LBB0_1:
        xor     eax, eax
        ret

example::eq1:
        mov     eax, dword ptr [rdi + 16]
        cmp     eax, dword ptr [rsi + 16]
        vmovdqu xmm0, xmmword ptr [rdi]
        sete    cl
        vpcmpeqd xmm0, xmm0, xmmword ptr [rsi]
        vmovmskps eax, xmm0
        cmp     al, 15
        sete    al
        and     al, cl
        ret

example::eq2:
        vmovdqu xmm0, xmmword ptr [rdi]
        vmovd   xmm1, dword ptr [rdi + 16]
        vmovd   xmm2, dword ptr [rsi + 16]
        vpxor   xmm1, xmm1, xmm2
        vpxor   xmm0, xmm0, xmmword ptr [rsi]
        vpor    xmm0, xmm0, xmm1
        vptest  xmm0, xmm0
        sete    al
        ret
```
And godbolt link with variants
Function `eq0` uses `&&`; `eq1` uses `&`, so it has different semantics; `eq2` has the same semantics but is optimized better. `eq1` is a very simple case (we get a single block generated by rustc, which is easily optimized) and has different semantics, so we skip it. We will use `eq0` and `eq2` further.

Investigation of LLVM IR
The clang and `eq2` cases successfully proved that LLVM is capable of optimizing `&&`, so I started to investigate the generated LLVM IR and the optimized LLVM IR. I decided to check the differences in IR between clang, eq0 and eq2. For each case I will give the generated LLVM IR, its diagram, and the optimized LLVM IR. Also, I used different code in the files than on godbolt, so the function names don't match the case names.
Clang IR
I compiled the code to LLVM IR using `clang++ is_sorted.cpp -O0 -S -emit-llvm`, removed the `optnone` attribute manually, then examined the optimizations using `opt -O3 -print-before-all -print-after-all 2>passes.ll`.
Code and diagrams
real code
```cpp
#include <cstdint>
struct Blueprint{
uint32_t fuel_tank_size;
uint32_t payload;
uint32_t wheel_diameter;
uint32_t wheel_width;
uint32_t storage;
};
bool operator==(const Blueprint& th, const Blueprint& arg)noexcept{
return th.fuel_tank_size == arg.fuel_tank_size
&& th.payload == arg.payload
&& th.wheel_diameter == arg.wheel_diameter
&& th.wheel_width == arg.wheel_width
&& th.storage == arg.storage;
}
```
As you can see, Clang's original code doesn't change much; optimization only removes the copying from the source structs to temporary locals. The original control flow is very clear, and the last block uses a single phi node with many inputs to produce the result value.
Rust eq2 case
I generated the LLVM IR using this command:
rustc +nightly cmp.rs --emit=llvm-ir -C opt-level=3 -C codegen-units=1 --crate-type=rlib -C 'llvm-args=-print-after-all -print-before-all' 2>passes.ll
Rust eq2 case IR and graphs
real code
```rust
pub struct Blueprint {
    pub fuel_tank_size: u32,
    pub payload: u32,
    pub wheel_diameter: u32,
    pub wheel_width: u32,
    pub storage: u32,
}

impl PartialEq for Blueprint {
    fn eq(&self, other: &Self) -> bool {
        if self.fuel_tank_size != other.fuel_tank_size {
            return false;
        }
        if self.payload != other.payload {
            return false;
        }
        if self.wheel_diameter != other.wheel_diameter {
            return false;
        }
        if self.wheel_width != other.wheel_width {
            return false;
        }
        if self.storage != other.storage {
            return false;
        }
        true
    }
}
```

In general, the algorithm can be described as: on each failing field comparison, store `false` into the result byte and jump to the end. This indirect usage of the byte is optimized into a pretty SSA form in the mem2reg phase, and the control flow remains forward-only.
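As an illustration of that mem2reg rewrite (hypothetical IR fragments, not the actual output), the stores through the stack byte collapse into a phi at the join point:

```llvm
; before mem2reg: the result travels through a stack slot
start:
  %result = alloca i8
  br i1 %ne0, label %ret_false, label %next
ret_false:
  store i8 0, ptr %result
  br label %end
ret_true:
  store i8 1, ptr %result
  br label %end
end:
  %v = load i8, ptr %result

; after mem2reg: no alloca, a single phi at the join point
end:
  %v = phi i8 [ 0, %ret_false ], [ 1, %ret_true ]
```

Because every branch only jumps forward, the result is straight-line SSA that later passes handle well.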
Rust eq0 case (which is used very often in practice and optimizes badly)
IR code and control flow diagrams
Real code
```rust
pub struct Blueprint {
    pub fuel_tank_size: u32,
    pub payload: u32,
    pub wheel_diameter: u32,
    pub wheel_width: u32,
    pub storage: u32,
}

impl PartialEq for Blueprint {
    fn eq(&self, other: &Self) -> bool {
        (self.fuel_tank_size == other.fuel_tank_size)
            && (self.payload == other.payload)
            && (self.wheel_diameter == other.wheel_diameter)
            && (self.wheel_width == other.wheel_width)
            && (self.storage == other.storage)
    }
}
```

Well, it is really hard to tell what is going on in the generated code. The control-flow operators are placed basically in reverse order (the first checked condition ends up in the last position, the code jumps backward in both cases, then jumps forward again after the condition). This behaviour doesn't change during the optimization passes, so in the final generated asm we end up with a lot of jumps and miss the SIMD usage. It looks like LLVM fails to reorganize these blocks into a more natural order and probably fails to understand the many temporary allocas.
Conclusions of LLVM IR research
Let's look at the control-flow diagrams one last time.
Clang:
Rust eq2 (with manual early returns)
Rust eq0 with usage of the `&&` operator.

Finally, I have two ideas for new algorithms which could be generated by proper codegen for `&&` chains:

First approach
We should exploit φ nodes with many inputs, as in the Clang approach. Pseudocode:
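A hedged LLVM-IR-style sketch of that shape (names are illustrative, not the author's original pseudocode): every failing check branches straight to the exit block, and one phi with many `false` inputs produces the result:

```llvm
define i1 @eq(ptr %a, ptr %b) {
check0:
  %a0 = load i32, ptr %a
  %b0 = load i32, ptr %b
  %c0 = icmp eq i32 %a0, %b0
  br i1 %c0, label %check1, label %end

check1:
  ; ...the same pattern repeats for the remaining fields...
  %c1 = icmp eq i32 %a1, %b1
  br i1 %c1, label %last, label %end

last:
  %c4 = icmp eq i32 %a4, %b4
  br label %end

end:
  ; one phi with an input per predecessor, no springboard blocks
  %r = phi i1 [ false, %check0 ], [ false, %check1 ], [ %c4, %last ]
  ret i1 %r
}
```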
This is probably the best solution, because LLVM tends to handle the Clang pattern better, and this code is already in the SSA form that the optimization passes love.
Second approach
Pseudocode
This version is less friendly to the optimizer because we use a pointer here, but it would be converted to SSA form in the mem2reg phase of optimization.
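A hedged sketch of what that byte-pointer lowering could look like (illustrative, not the author's original pseudocode); mem2reg would later turn the alloca back into SSA values:

```llvm
define i1 @eq(ptr %a, ptr %b) {
start:
  %result = alloca i8            ; result byte on the stack
  store i8 0, ptr %result        ; default: not equal
  %c0 = icmp ne i32 %a0, %b0
  br i1 %c0, label %end, label %check1

check1:
  ; ...the same pattern repeats for the remaining fields...
  br i1 %c4, label %end, label %all_equal

all_equal:
  store i8 1, ptr %result
  br label %end

end:
  %v = load i8, ptr %result
  %r = trunc i8 %v to i1
  ret i1 %r
}
```

The control flow here is strictly forward, which is the property that distinguishes it from the backward-jumping eq0 lowering described above.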
Implementing such algorithms would probably require handling chains of `&&` operators as one prefix operator with many arguments, e.g. `&&(arg0, arg1, ..., argN)`. I don't know which part of the pipeline would need to be changed to fix this, or which of my suggested codegen schemes is easier to produce.
Also, I expect the same problem with the `||` operator implementation too.

Importance and final thoughts
This bug effectively prevents SIMD optimizations in most `#[derive(PartialEq)]` implementations and in some other places too, so fixing it could lead to big performance gains. I hope my investigation of this bug helps to fix it. Also, sorry for any weird wording and grammar mistakes; English isn't my native language.
And finally, the rustc and clang versions I used: