martisch opened 7 years ago
If we are going to do this, we can split a LEA into LEA+LEA to avoid clobbering flags. For reference, there are 952 three-operand LEAs in the go tool binary.
grep "lea.*0x.*%.*%.*,%" qaz | grep -v "0x0(" | wc -l
Looks like I found a specific example of this. After b1df8d6ffa2c4c5be567934bd44432fff8f3c4a7 we convert 32-bit multiplication by 0x101 into a slow LEA, which caused a performance regression on master vs 1.10 for image/draw benchmarks:
name    old time/op  new time/op  delta
CMYK-8  461µs ± 2%   513µs ± 1%   +11.31%  (p=0.000 n=10+9)
Thanks for noticing the regression and finding a specific test case Ilya. I will work on a slow LEA fix within the next week unless somebody else wants to have a go at it earlier.
Can it be fixed in the assembler? I found that there are inefficient LEAs used in assembly code.
Fixing it in asm will need a manual fix (or a script detecting it plus a CL to fix it), as I don't think we should rewrite handwritten code when translating it to binary. Maybe it's needed to achieve some alignment of instructions.
Also, for asm, depending on the context a LEA+ADD or MUL could be better or equally valid instead of LEA+LEA, so I would say those should be fixed on a case-by-case basis, with benchmarks if possible.
Maybe some vet/lint check for asm might be nice.
I was working on a fix in the amd64 opcode code that emits the assembler instructions, similar to how a MOV of 0 is rewritten as XOR.
I think we should leave assembly as is. Magic in the assembler is always confusing.
So is the fix just to disable the rules that can generate 3-operand LEA instructions? Basically, enforce that LEAQ[1248] can't have either an Aux or AuxInt? Are all LEAs affected? LEA[LW]? Also on 386?
This is a (pretty weak) argument for introducing GOAMD64 architecture variants. If we could figure out reliably which chips have this problem and which don't.
While looking into this I found and started on 3 different ways to reduce slow LEAs:
I am still working on testing these, as well as gathering statistics about occurrences and rerunning benchmarks, so we know which slowdowns are covered.
AFAIK slow LEAs with 3 operands exist on modern Intel and AMD chips, independent of 386 or x64 mode and word width. But 16-bit LEAs are slower even with 2 operands.
We have a bunch of rules for merging LEA into MOV, so I feel like option 3 will have the lowest number of unintended consequences. It also makes it easier to switch between versions if we want to, e.g., optimize some function for size instead of speed, or update it for future CPUs.
Change https://golang.org/cl/114655 mentions this issue: cmd/compile: split slow 3 operand LEA instructions into two LEAs
@TocarIP did you find a file+line number where a slow LEA is created that affects the draw benchmarks? For the 0x101 multiplication it seems a 2-operand (but not 3-operand) LEA is created, since there is no offset/displacement, e.g.:
draw.go:243 0x112f368 89ca MOVL CX, DX
draw.go:243 0x112f36a c1e108 SHLL $0x8, CX
draw.go:243 0x112f36d 8d0c11 LEAL 0(CX)(DX*1), CX
Minimal example for that multiplication is:
package foo
func mul(x uint32) uint32 {
return x * 0x101
}
0x0000 (../foo.go:4) MOVL "".x+8(SP), AX
0x0004 (../foo.go:4) MOVL AX, CX
0x0006 (../foo.go:4) SHLL $8, AX
0x0009 (../foo.go:4) LEAL (AX)(CX*1), AX
0x000c (../foo.go:4) MOVL AX, "".~r1+16(SP)
0x0010 (../foo.go:4) RET
It used to emit IMUL3L:
0x0000 (../foo.go:4) MOVL "".x+8(SP), AX
0x0004 (../foo.go:4) IMUL3L $257, AX, AX
0x000a (../foo.go:4) MOVL AX, "".~r1+16(SP)
0x000e (../foo.go:4) RET
And prior to that, it was IMUL:
0x0000 (../foo.go:3) MOVL "".x+8(SP), AX
0x0004 (../foo.go:4) IMULL $257, AX
0x000a (../foo.go:4) MOVL AX, "".~r1+16(SP)
0x000e (../foo.go:4) RET
@Quasilyte Thanks, but I do not think this is a slow LEA version affecting the benchmark. (However, the particular rewrite to MOV+SHL+(fast) LEA still looks like it could be slower.) LEAL (AX)(CX*1), AX is AFAIK not a slow (3-operand) version of LEA, so the posted fix for slow LEAs itself won't fix that particular issue (multiplication by 0x101).
@martisch In image/draw.drawCMYK I see (in perf output) the following hotspot:
mov 0x4(%rsp),%r8d
2.20 │ lea -0xffff(%r8,%r13,1),%r8d
0.20 │ neg %r8d
3.77 │ imul %eax,%r8d
│ g := (0xffff - uint32(m)*0x101) * w / 0xffff
4.77 │ mov %r15d,%r13d
1.37 │ shl $0x8,%r15d
0.02 │ lea -0xffff(%r13,%r15,1),%r13d
2.37 │ neg %r13d
1.96 │ imul %eax,%r13d
│ b := (0xffff - uint32(y)*0x101) * w / 0xffff
1.13 │ mov %esi,%r15d
0.04 │ shl $0x8,%esi
2.33 │ lea -0xffff(%r15,%rsi,1),%esi
1.66 │ neg %esi
1.46 │ imul %eax,%esi
So it looks like g := (0xffff - uint32(m)*0x101) * w / 0xffff produces lea -0xffff(%r13,%r15,1),%r13d, which is a 3-operand LEA.
Are there objections to waiting till 1.12? The performance improvements are overall very small (0.16% is what I measure pretty consistently, including across a much larger selection of benchmarks), and it's getting very late in the 1.11 cycle and we seem to be behind schedule.
To summarize the performance changes, across the low-noise 256 benchmarks (out of 488 total, low noise is less than 2% max standard deviation reported across 30 trials) 15 showed >= 1% improvement, 16 showed >= 1% slowdown.
For comparison, the experiment to pretend we are only targeting more-recent versions of intel hardware (CL 117925) had 83 improved, 6 slowed down, and the geomean improvement was 1.28%
Abusing "early-in-cycle" so we don't forget it, this one is low risk.
See also https://github.com/golang/go/issues/36468#issuecomment-580107630, for an approach that can reduce the number of slow LEAs.
Wondering what's left to do here since https://github.com/golang/go/issues/21735#issuecomment-392144356 has been merged. Fix the 16-bit LEA?
Even for LEAQ and LEAL, my CL doesn't manage to split all 3-operand LEAs. There are some that have Aux set and aren't transformed.
Change https://go.dev/cl/440035 mentions this issue: cmd/compile: split 3 operand LEA in late lower pass
Change https://go.dev/cl/482820 mentions this issue: cmd/compile: remove broken LEA "optimization"
Change https://go.dev/cl/482164 mentions this issue: cmd/compile: remove broken LEA "optimization"
On newer x86 CPUs (AMD and Intel), 3-operand LEA instructions with base, index, and offset have higher latency and lower throughput than 2-operand LEA instructions.
When emitting the instructions, the compiler could rewrite slow LEAs into e.g. LEA+ADD instructions where possible (flag clobbering OK), similar to how MOV $0, R is rewritten to XOR R, R.
Relevant LLVM optimization ticket: https://reviews.llvm.org/D32277
/cc @TocarIP @randall77 @josharian