golang / go

The Go programming language
https://go.dev
BSD 3-Clause "New" or "Revised" License

cmd/compile: pgo can dramatically increase goroutine stack size #65532

Open felixge opened 7 months ago

felixge commented 7 months ago

Go version

go1.21.5

Output of go env in your module/workspace:

GOARCH=amd64
GOOS=linux
GOAMD64=v1

What did you do?

Build my application using a default.pgo CPU profile from production.

What did you see happen?

Go memory usage (/memory/classes/total:bytes − /memory/classes/heap/released:bytes) increased from 720 MB to 850 MB (18%) until rollback, see below.

[Datadog graph: Go memory usage for koutris-forwarder-intake rising after the PGO deploy and recovering after rollback]

This increase in memory usage seems to have been caused by an increase in goroutine stack size (/memory/classes/heap/stacks:bytes) from 207 MB to 280 MB (35%).

[Datadog graph: goroutine stack bytes (/memory/classes/heap/stacks:bytes) increasing from ~207 MB to ~280 MB]

This increase was not due to a rise in the number of active goroutines, but due to an increase in the average stack size (/memory/classes/heap/stacks:bytes divided by /sched/goroutines:goroutines).

[Datadog graph: average stack size per goroutine increasing while the goroutine count stays flat]
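For anyone who wants to reproduce these numbers, the metrics named above can be sampled directly with the runtime/metrics package. A minimal sketch:

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	samples := []metrics.Sample{
		{Name: "/memory/classes/total:bytes"},
		{Name: "/memory/classes/heap/released:bytes"},
		{Name: "/memory/classes/heap/stacks:bytes"},
		{Name: "/sched/goroutines:goroutines"},
	}
	metrics.Read(samples)

	total := samples[0].Value.Uint64()
	released := samples[1].Value.Uint64()
	stacks := samples[2].Value.Uint64()
	goroutines := samples[3].Value.Uint64()

	fmt.Printf("go memory usage: %d bytes\n", total-released)
	fmt.Printf("avg goroutine stack: %d bytes\n", stacks/goroutines)
}
```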

To debug this further, I built a hacky goroutine stack frame profiler. It pointed me to google.golang.org/grpc/internal/transport.(*loopyWriter).run.

For the binary compiled without pgo, my tool estimated 2 MB of stack usage for ~1000 goroutines:

[goroutine_space profile: ~2 MB of stack across ~1000 goroutines, non-PGO build]

And for the binary compiled with pgo, my tool estimated 71 MB of stack usage for ~1000 goroutines:

[goroutine_space profile: ~71 MB of stack across ~1000 goroutines, PGO build]

Looking at the assembly, it becomes clear that this is due to the frame size increasing from 0x50 (80) bytes to 0xc1f8 (49656) bytes.

assembly

**before pgo:**

```
TEXT google.golang.org/grpc/internal/transport.(*loopyWriter).run(SB) /go/pkg/mod/google.golang.org/grpc@v1.58.2/internal/transport/controlbuf.go
  0x8726e0  493b6610      CMPQ SP, 0x10(R14)   // cmp 0x10(%r14),%rsp
  0x8726e4  0f86ab020000  JBE 0x872995         // jbe 0x872995
  0x8726ea  55            PUSHQ BP             // push %rbp
  0x8726eb  4889e5        MOVQ SP, BP          // mov %rsp,%rbp
  0x8726ee  4883ec50      SUBQ $0x50, SP       // sub $0x50,%rsp
```

**after pgo:**

```
TEXT google.golang.org/grpc/internal/transport.(*loopyWriter).run(SB) /go/pkg/mod/google.golang.org/grpc@v1.58.2/internal/transport/controlbuf.go
  0x8889a0  4989e4          MOVQ SP, R12        // mov %rsp,%r12
  0x8889a3  4981ec80c10000  SUBQ $0xc180, R12   // sub $0xc180,%r12
  0x8889aa  0f82c0300000    JB 0x88ba70         // jb 0x88ba70
  0x8889b0  4d3b6610        CMPQ R12, 0x10(R14) // cmp 0x10(%r14),%r12
  0x8889b4  0f86b6300000    JBE 0x88ba70        // jbe 0x88ba70
  0x8889ba  55              PUSHQ BP            // push %rbp
  0x8889bb  4889e5          MOVQ SP, BP         // mov %rsp,%rbp
  0x8889be  4881ecf8c10000  SUBQ $0xc1f8, SP    // sub $0xc1f8,%rsp
```

And the root cause appears to be the inlining of 3 calls to processData, each of which allocates a 16 KiB byte array on its stack.
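To make the mechanics concrete, here is a minimal sketch of the shape of the problem (illustrative only, not the actual gRPC code): without inlining, the 16 KiB buffer lives in processData's own frame and exists only for the duration of the call; with all three call sites inlined, run's frame must reserve a separate copy for each.

```go
package main

func processData(data []byte) {
	var localBuf [16 * 1024]byte // stack-allocated; folded into the caller when inlined
	copy(localBuf[:], data)
}

func run(a, b, c []byte) {
	processData(a) // inlined: +16 KiB in run's frame
	processData(b) // inlined: +16 KiB
	processData(c) // inlined: +16 KiB, ~48 KiB total
}

func main() { run(nil, nil, nil) }
```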

What did you expect to see?

No significant increase in memory usage.

Maybe PGO could take frame sizes into account for inlining, especially if multiple calls are being made to a function that has a large frame size.

Meanwhile, maybe we should send a PR that adds a //go:noinline pragma to the processData func in gRPC. Given the current code structure, it seems highly undesirable to inline this function up to 3 times in the run method.
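A sketch of what such a PR could look like; the loopyWriter stub and the method signature here are placeholders rather than the real gRPC declarations:

```go
package transport

type loopyWriter struct{ /* fields elided */ }

// processData keeps a 16 KiB scratch buffer on its own stack. The
// directive below stops the compiler (PGO or not) from folding that
// buffer into run's frame.
//
//go:noinline
func (l *loopyWriter) processData() (bool, error) {
	var localBuf [16 * 1024]byte
	_ = localBuf // ... existing logic elided ...
	return false, nil
}
```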

cc @prattmic

prattmic commented 7 months ago

Thanks for the detailed report! I know you are aware, but just to be clear to others skimming the issue: this stack size estimate tool will likely underestimate the actual stack allocation size. We are seeing the goroutines sitting parked in loopyWriter.run, below a call to processData. But presumably these goroutines do call processData, at which point they will need 16KB of stack.

Inlining three copies of this function is a particularly bad case: without inlining, processData cannot be active three times at once on one goroutine, so the peak stack use is 16KB; with inlining, the caller's frame reserves all three copies up front, 3 × 16384 = 49152 bytes, which accounts for nearly all of the 0xc1f8 (49656) byte frame shown above.

I agree with you that frame size does seem reasonable to consider when making inlining decisions. We probably want to avoid making frames too large. This is potentially in scope for #61502, or a follow-up to that (cc @mdempsky @thanm). I don't think this would really be PGO-specific, though PGO inlining may make us more likely to hit general frame size thresholds. I'm also not sure how good of a sense of final frame size we have during inlining; this is pretty early in compilation and even before we do escape analysis.

At the other end of the spectrum, I wonder if we could more aggressively reuse stack slots for large objects. Since localBuf doesn't escape the function, it seems likely that even after inlining, the three uses of localBuf are mutually exclusive and could technically use the same stack slot. I have no idea how complicated this would be.
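To illustrate the idea (hand-written here, whereas the compiler would have to do this during frame layout): the three buffers below have mutually exclusive lifetimes, so they could share one 16 KiB slot instead of occupying 48 KiB.

```go
package main

func process(dst, src []byte) { copy(dst, src) }

func handle(kind int, data []byte) {
	switch kind {
	case 0:
		var buf [16 * 1024]byte // dead once this case ends
		process(buf[:], data)
	case 1:
		var buf [16 * 1024]byte // could reuse the slot above
		process(buf[:], data)
	default:
		var buf [16 * 1024]byte // likewise
		process(buf[:], data)
	}
}

func main() { handle(0, nil) }
```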

cc @golang/compiler @cherrymui @aclements

thanm commented 7 months ago

As part of my work on inlining heuristics I considered adding a heuristic that would bias against inlining functions with large stack frames, but didn't get as far as implementing it. It is a tricky thing to get right: the inliner runs at an early stage in the compiler, which makes it hard to compute an accurate stack size estimate.

I think a better long term solution here is something like #62737, e.g. do the inlines but then arrange for the three inlined blobs to share the same instance of the array in question. This can be done in the back end (e.g. late during stack frame layout) or we can change the inliner itself to reuse temps (this is something that the LLVM inliner does IIRC).

felixge commented 7 months ago

I know you are aware, but just to be clear to others skimming the issue: this stack size estimate tool will likely underestimate the actual stack allocation size.

Yup, it's a rough estimate. As discussed during the last diagnostic sync, I'll try to file my proposal for a runtime implementation of this profile type which would allow us to overcome some of the estimation issues.

I think a better long term solution here is something like https://github.com/golang/go/issues/62737, e.g. do the inlines but then arrange for the three inlined blobs to share the same instance of the array in question.

That's probably a good solution in most cases, including this one.

There will still be edge cases, e.g. A inlining B (with a big frame) and then calling C in a for loop forever, so B's large frame stays live the whole time. But maybe those edge cases are rare enough not to be a problem in practice.

aclements commented 7 months ago

This is closely related to #62077, #62737, and #65495.

thepudds commented 6 months ago

Some related progress might be happening in https://go.dev/cl/553055.

gopherbot commented 6 months ago

Change https://go.dev/cl/566177 mentions this issue: cmd/compile/internal/liveness: introduce "live intervals" utility

gopherbot commented 6 months ago

Change https://go.dev/cl/553055 mentions this issue: cmd/compile/internal: merge stack slots for selected local auto vars

gopherbot commented 6 months ago

Change https://go.dev/cl/575415 mentions this issue: cmd/compile/internal: merge stack slots for selected local auto vars

gopherbot commented 5 months ago

Change https://go.dev/cl/576680 mentions this issue: cmd/compile/internal: small tweak to merge locals trace output

gopherbot commented 5 months ago

Change https://go.dev/cl/576735 mentions this issue: cmd/compile/internal/liveness: enhance mergelocals for addr-taken candidates

gopherbot commented 5 months ago

Change https://go.dev/cl/577615 mentions this issue: cmd/compile/internal: stack slot merging region formation enhancements

gopherbot commented 5 months ago

Change https://go.dev/cl/576681 mentions this issue: cmd/compile/internal/base: enable stack slot merging by default

thanm commented 5 months ago

Hello @felixge -- I have a CL stack that should fix this problem (mostly submitted, there is one more enhancement still pending). I would like to test your particular case to make sure it gets handled. Would you be willing to share your PGO file so that I can rerun your PGO build? Thanks.

felixge commented 5 months ago

@thanm yeah, what's the best way to get you a copy? It's from our prod environment, so I'd prefer not to post it publicly. I'm on gopher slack or could e-mail you the file.

Also: How are you planning to test this? By building the gRPC library with the PGO profile and checking the generated machine code?

thanm commented 5 months ago

I'm on gopher slack or could e-mail you the file.

Thanks, email is probably easiest (thanm @ google).

By building the gRPC library with the PGO profile and checking the generated machine code?

Yes.

thanm commented 5 months ago

An update on this issue:

@felixge provided me with the PGO profile off-line and I was able to easily reproduce the problematic set of inlines.

Unfortunately it looks as though the CLs I've been working on to do stack slot merging for disjointly accessed locals (e.g. 577615 and related predecessors) are not going to help here.

The reason is that the variable in question, "localBuf" (defined here), is considered address-taken by the compiler. IR excerpt:

```
AS tc(1) # controlbuf.go:573:30 controlbuf.go:956:8
.   NAME-transport.buf esc(no) Class:PAUTO Offset:0 InlLocal OnStack Used SLICE-[]byte tc(1) # controlbuf.go:573:30 controlbuf.go:931:3
.   SLICEARR SLICE-[]byte tc(1) # controlbuf.go:573:30 controlbuf.go:956:18
.   .   ADDR Implicit PTR-*[16384]byte tc(1) # controlbuf.go:573:30 controlbuf.go:956:18
.   .   .   NAME-transport.localBuf esc(no) Class:PAUTO Offset:0 Addrtaken InlLocal OnStack Used ARRAY-[16384]byte tc(1) # controlbuf.go:573:30 controlbuf.go:953:8
.   SLICEARR-High
.   .   NAME-transport..autotmp_266 esc(N) Class:PAUTO Offset:0 AutoTemp OnStack Used int tc(1) # controlbuf.go:559:28 controlbuf.go:910:32
```

This corresponds to the code that assigns a slice of localBuf to buf at line 956, which results in an indirect read (via buf) later in the function. The liveness analysis I'm using in the Go compiler's SSA backend isn't set up to track these sorts of indirect reads and writes, so this variable winds up being rejected from consideration.
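In source form, the pattern the IR describes is roughly the following (paraphrased; writeData is a stand-in name, not the actual gRPC function):

```go
package transport

func writeData(size int) {
	var localBuf [16 * 1024]byte
	buf := localBuf[:size] // implicit &localBuf marks the array Addrtaken
	_ = buf                // later reads go through buf, i.e. indirectly
}
```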

I can think of a couple of ways to potentially attack this problem.

The first would be to introduce additional lifetime markers, similar to what's used in the LLVM back end (code). At the moment the Go compiler uses just a single lifetime-start marker (e.g. ssa.OpVarDef), which works great if you're only computing stack maps at calls; if we were to add a lifetime-end marker as well, we could change the inliner to insert them and then exploit them during stack slot merging liveness analysis. This carries some risk in that we would need to make the other back end optimizations aware of the new marker (so as to avoid having reads/writes moved across it).
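Sketched at the source level, with the existing start marker and the hypothetical end marker shown as comments ("VarEnd" is invented for this illustration):

```go
package main

func run(a, b []byte) {
	{ // first inlined body
		var localBuf [16 * 1024]byte // SSA today: VarDef localBuf·1
		copy(localBuf[:], a)
	} // hypothetical: VarEnd localBuf·1, the slot becomes reusable here

	{ // second inlined body
		var localBuf [16 * 1024]byte // VarDef localBuf·2, may share the slot
		copy(localBuf[:], b)
	} // hypothetical: VarEnd localBuf·2
}

func main() { run(nil, nil) }
```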

A second possibility would be to teach the inliner to reuse variables when it actually carries out the inlining, e.g. keep track of the additional locals introduced by inlining within a given function, and then, at the point where we inline the body of B into A, check whether there is an existing var (from a previous inline) that we can reuse. We could apply this sort of optimization even for address-taken variables, which would (in theory) make it applicable in this case.

I'll look into this some more and see whether I can come up with something viable.

randall77 commented 5 months ago

We used to have a lifetime-ending marker, VarKill. I got rid of it because we weren't using it any more. https://go-review.googlesource.com/c/go/+/419234

cherrymui commented 5 months ago

Address-taken local variables are tricky. If a variable is address-taken, and the address escapes the inlined function scope, we can't mark its lifetime end. Currently we do the escape analysis after inlining, so at the inlining stage we don't have the escape information. Perhaps (long term?) we could do a pre-pass of escape analysis before inlining, then at inlining add the lifetime markers (or reuse locals directly) if the variable does not escape the inline function scope.

thanm commented 5 months ago

If a variable is address-taken, and the address escapes the inlined function scope, we can't mark its lifetime end.

Maybe I am missing something, but if an address value can flow out from a function wouldn't the var in question be marked as AUTOHEAP or escaping? E.g.

```go
package e

type Bar struct {
	x [240]byte
	s string
}

func Foo(b []byte) *Bar {
	var localBuf [16 * 1024]byte
	copy(localBuf[:], b)
	buf := localBuf[0:10]
	if len(b) != 101 {
		buf[1] = 0
	}
	var bar Bar
	bar.x[1] = buf[0]
	return &bar
}
```

Here localBuf is marked as address-taken, but also marked as non-escaping (as opposed to bar, which escapes).

If we only target address-taken autos that are also non-escaping, it seems as though this would be a viable option?

cherrymui commented 5 months ago

Maybe I am missing something, but if an address value can flow out from a function wouldn't the var in question be marked as AUTOHEAP or escaping?

Yes, if it actually escapes the physical (outermost) function. What I meant was, say, F calls G, G inlined into F, and G has a local variable x, &x escapes G but doesn't escape F. In this case, x can still be stack allocated in F (with G inlined into it), but its lifetime is not bounded by G.
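In code form, using the F/G/x names from above:

```go
package demo

// &x escapes G's scope but not the outermost function.
func G() *int {
	x := 42
	return &x
}

// With G inlined into F, x is stack-allocated in F's frame, but its
// lifetime extends past the inlined body, so no end-of-life marker can
// be placed at the inline boundary.
func F() int {
	px := G()
	return *px
}
```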

thanm commented 5 months ago

A second possibility would be to teach the inliner to reuse variables when it actually carries out the inlining, e.g. keep track of additional locals introduced by inlining within a given functions, and then at the point where we inline the body of B into A, check to see if there is an existing var (from a previous inline) that we can reuse. We could apply this sort of optimization even for address-taken variables, which would (in theory) make it applicable in this case.

I spent a little while prototyping this, but there are still significant hurdles to overcome. I cooked up some machinery to detect when two given autos within function F are derived from disjoint inlines, as in this case; that is straightforward code to write. The difficulty comes in with regard to escape analysis.

Let's say you have two packages, P and Q, where P imports Q:

```go
package Q

func Good(x, y int) int {
	var ok [4096]int
	sl := ok[:]
	sl[x] = y
	return ok[y]
}

func Bad(x, y int) []int {
	var nope [4096]int
	sl := nope[:]
	return sl[x:y]
}
```

```go
package P

import "Q"

func A(x, y int) int {
	r := 0
	if x < y {
		r += Q.Good(x, y)
	} else if y < x {
		r += Q.Good(y, x)
	} else {
		sl1 := Bad(x, y)
		sl2 := Q.Bad(x, y)
		sl3 := Q.Bad(y, x)
		r += sl1[x] + sl2[x] + sl3[x]
	}
	return r
}

func Bad(x, y int) []int {
	var nada [4096]int
	sl := nada[:]
	return sl[x:y]
}
```

When inlining is done for P's function A, we'll have two copies of the array ok (from inlined bodies of Q.Good), two copies of the array nope (from inlined bodies of Q.Bad) and a single copy of the array nada (from the inlined body of P.Bad).

If you compile Q and dump out the AST you can see that escape analysis has determined that ok doesn't escape but nope does escape:

```
.   DCL # Q.go:4:6
.   .   NAME-Q.ok esc(no) Class:PAUTO Offset:0 OnStack Used ARRAY-[4096]int tc(1) # Q.go:4:6
...
.   DCL # Q.go:10:6
.   .   NAME-Q.nope esc(h) Class:PAUTO Offset:0 Addrtaken Used ARRAY-[4096]int tc(1) # Q.go:10:6
```

Note the 'esc(no)' (doesn't escape) for ok and 'esc(h)' (escapes to heap) for nope. This is the information we need, but the problem is that at the moment we don't write this granularity of escape info for locals when writing the export data. It's only around when we're compiling Q, and isn't around when we're compiling P, which is when we need it.

We can't use the escape analysis info computed for A, since it only tells us the state of the world with respect to that function. The various copies of nope and nada don't escape A, but they are not safe to overlap. In the specific case of nada from P.Bad, at the point where the inliner is running we haven't done escape analysis, but it might be possible to go back later on and inspect the results then. This doesn't help with the cross-package case, however.

aclements commented 5 months ago

Summary: In Q.Good, escape analysis tells us that, while ok is addrtaken, it doesn't escape, so that address must not leave the extent of Q.Good. In theory, inlining should be able to use this to realize that ok isn't live past the extent of Q.Good inlined into P.A. But escape analysis information about locals doesn't get exported (or kept at all), so we lose that information about ok.

Some options:

(Side thought: addrtaken may be a mistake in our compiler. It’s a big hammer and very coarse-grained.)

aclements commented 5 months ago

Could escape analysis of P.A keep more information about the extents of locals?

@cherrymui, @thanm, and I were just discussing this in more depth.

My basic observation is that there are deep parallels between liveness analysis and escape analysis. Liveness analysis determines the extents within a function of non-addrtaken variables, while escape analysis determines whether the extent of addrtaken variables is bounded by a function or not.

The idea here is to produce finer-grained extent information in escape analysis, bringing liveness analysis and escape analysis closer to each other.

Since escape analysis is done at the IR level, this raises the question of where to attach extent information. In principle this could get very complex, but for these purposes we really only need it at the OINLCALL level. When we translate an OINLCALL to SSA, we would produce VarKill operations for all addrtaken variables that don't escape the OINLCALL, threaded onto the mem at the "end" of the OINLCALL.
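Sketching that against the P/Q example from earlier in the thread (op placement shown as comments; this assumes ok is addrtaken but known not to escape the OINLCALL):

```go
package P

import "Q"

func caller(x, y int) int {
	// After inlining, SSA construction for this OINLCALL would produce:
	//   VarDef ok·1   at the start of the inlined Q.Good body (exists today)
	//   ... inlined body: ok·1 is addrtaken but does not escape ...
	//   VarKill ok·1  new: threaded onto the mem at the "end" of the OINLCALL
	return Q.Good(x, y)
}
```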