golang / go

The Go programming language
https://go.dev

runtime: optimization to reduce P churn #32113

Open amscanne opened 5 years ago

amscanne commented 5 years ago

Background

The following is a fairly frequent pattern that appears in our code and others:

goroutine1:

ch1 <- data (1)
result = <-ch2 (2)

goroutine2:

data = <- ch1 (3)
// do work...
ch2 <- result (4)

The scheduler exhibits two different behaviors, depending on whether goroutine2 is already busy and whether there are idle Ps available.

In the second case, an idle P is woken and tries to steal the now-runnable goroutine2. If the steal succeeds, i.e. (3) happens first, goroutine2 starts executing on the new P. If it does not, i.e. (4) happens first and goroutine2 runs locally, then a large number of cycles were wasted waking the P for nothing. Either way, the whole dance happens again for the result. In both cases, we spend a large number of cycles and pay interprocessor coordination costs for what should be a simple goroutine context switch.

There are further problems caused by this, as it introduces unnecessary work stealing and bouncing of goroutines between system threads and cores, leading to locality inefficiencies.

Ideal schedule

With an oracle, the ideal schedule after (1) would be for goroutine1's P to switch directly to goroutine2, with no other P woken at all.

In essence, we want to yield goroutine1's time to goroutine2 in this case, or at least avoid all of the wasted signaling overhead. To put it another way: if goroutine1's P is about to block anyway, it fills the role of the "idle P" far more efficiently.

Proposal

It may be possible to specifically optimize for this case in the compiler, just as certain loop patterns are optimized.

In the case where a blocking channel send is immediately followed by a blocking channel receive, I propose an optimization that tries to avoid these scheduler round trips.

Here's a rough sketch of the idea:
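A hedged sketch of the intent (illustrative only, not the original design notes: the fused runtime.chansendrecv1 entry point is borrowed from the prototype CL mentioned later in this thread, and roundTrip is a made-up example function):

package main

// roundTrip shows the pattern the optimization targets: a blocking send
// immediately followed by a blocking receive.
func roundTrip(ch1 chan<- int, ch2 <-chan int, data int) int {
    // Today the compiler lowers these two statements to independent
    // runtime.chansend1 and runtime.chanrecv1 calls; the send may wake an
    // idle P to run the readied receiver, even though this goroutine is
    // about to block on the receive and free up its own P anyway.
    //
    // The proposed lowering is a single fused call (illustrative name:
    // runtime.chansendrecv1) whose send half knows a blocking receive
    // follows immediately, so it can skip signaling another P and let the
    // current P pick up the readied receiver directly.
    ch1 <- data
    return <-ch2
}

func main() {
    ch1 := make(chan int)
    ch2 := make(chan int)
    go func() {
        data := <-ch1
        // do work...
        ch2 <- data * 2
    }()
    println(roundTrip(ch1, ch2, 21)) // prints 42
}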

Rejected alternatives

I thought about this problem a few years ago when it caused issues. In the past, I considered the possibility of a different channel operator. Something like:

ch1 <~ data

This operator would write to the channel and immediately yield to the receiving goroutine if it was not already running (otherwise it would fall back to the existing channel behavior). Using this operator in the situation above would make the exchange much more efficient.

However, this is a language change, and it would be confusing to users: when do you use which operator? It would be better to get the effect of this optimization out of the box.

Extensions


randall77 commented 5 years ago

What about just enforcing a minimum delay between when a G is created and when it can be stolen? That gives the local P time to finish the spawning G (finish = either done or block) and pick up the new G itself.

The delay would be on the order of the overhead to move a G between processors (sys calls, cache warmup, etc.)

The tricky part is to not even wake the remote P when the goroutine is queued. We want a timer somehow that can be cancelled if the G is started locally.
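A toy model of that idea (nothing here corresponds to real runtime code; waitingG, minStealDelay, and stealable are made-up names, and the delay value is a placeholder): record when a G becomes runnable and have remote Ps refuse to steal it until the delay has elapsed, so the "timer" is cancelled implicitly if the local P runs the G first. This sketch only covers the steal-gating half, not the question of avoiding the remote wakeup itself.

package main

import (
    "fmt"
    "time"
)

// waitingG models a runnable goroutine sitting in its local P's run queue.
type waitingG struct {
    id      int
    readyAt time.Time // when the G became runnable
}

// minStealDelay is on the order of the cost of moving a G to another P
// (wakeup, cache warmup, etc.); the real value would need tuning.
const minStealDelay = 5 * time.Microsecond

// stealable reports whether a remote P may take g yet. Before the delay
// expires only the local P may run it; if it does, no steal ever happens,
// which gives the "cancellable timer" behavior without an explicit timer.
func stealable(g waitingG, now time.Time) bool {
    return now.Sub(g.readyAt) >= minStealDelay
}

func main() {
    g := waitingG{id: 1, readyAt: time.Now()}
    fmt.Println(stealable(g, g.readyAt))                           // false: just became runnable
    fmt.Println(stealable(g, g.readyAt.Add(10*time.Microsecond)))  // true: delay has elapsed
}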

bradfitz commented 5 years ago

/cc @aclements @dvyukov @ianlancetaylor @cherrymui

amscanne commented 5 years ago

Yes, most of the waste is generated by the wakeup call itself. Ensuring that the other P does not steal the G is probably a minor improvement, but you're still going to waste a ton of cycles (maybe even doing these wake ups twice -- on (1) and (4)).

I think using a timer gets much trickier. This is the reason I have limited the proposal to compiler-identified sequences of "chansend(block=true); chanrecv(block=true)" calls. It's possible that the system thread could be pre-empted between those calls, but if the system is busy (though Ps in this process may still be idle) it's probably even more valuable to not waste useless cycles.

amscanne commented 5 years ago

(Totally open to a timer, but I'm concerned about replacing a P wakeup with a kick to sysmon in order to enforce the timer, which solves the locality issue but still burns cycles.)

dvyukov commented 5 years ago

Also see #8903 which was about a similar problem. I don't remember all the details exactly now, but as far as I remember my proposal was somewhat more generic, but yours wins in simplicity and is most likely safer from potential negative effects in corner cases.

rsc commented 5 years ago

This has come up repeatedly. Obviously it is easy to recognize and fuse

ch1 <- data (1)
result = <-ch2 (2)

It's harder to recognize that pattern in more complex code that would also benefit from the optimization, though. We've fiddled with heuristics in the runtime to try to wait a little bit before stealing a G from a P, and so on. Probably more tuning is needed.

It's unclear this needs to be a proposal, unless you are proposing a language change, and it sounds like you've backed away from that.

The way forward with a suggestion like this is to try implementing it and see how much of an improvement (and how general of an improvement) it yields.

rsc commented 5 years ago

/cc @randall77 @aclements

randall77 commented 5 years ago

Related:

#27345 (start working on a new goroutine immediately, on the parent's stack)

#18237 (lots of time in findrunnable)

I also remember an issue related to ready goroutines ping-ponging around Ps, but I can't find it at the moment.

amscanne commented 5 years ago

I backed away from a language change proposal based on the assumption that it would likely not be accepted. My personal preference would be to have an operation like <~ that immediately switches to the other goroutine if currently waiting. (And behaves like a normal channel operation if busy.) But I realize that the existence of this operator might be confusing.

I think it's unclear how much of an impact this would have in general. This is probably just a tiny optimization that doesn't matter in the general case, but it can help in a few very specific ones. For us, it might let us structure some goroutine interactions much more efficiently.

I hacked something together, and it seems like there's a decent effect on microbenchmarks at least (unless I screwed something up).

Code:

func BenchmarkPingPong(b *testing.B) {
    var wg sync.WaitGroup
    defer wg.Wait()

    ch1 := make(chan struct{}, 1)
    ch2 := make(chan struct{}, 1)
    wg.Add(2)
    go func() {
        defer wg.Done()
        for i := 0; i < b.N; i++ {
            ch1 <- struct{}{}
            <-ch2
        }
    }()
    go func() {
        defer wg.Done()
        <-ch1
        for i := 0; i < b.N-1; i++ {
            ch2 <- struct{}{}
            <-ch1
        }
        ch2 <- struct{}{}
    }()
}

Before:

/usr/bin/time /usr/bin/go test -bench=.* -benchtime=5s
goos: linux
goarch: amd64
BenchmarkPingPong-4     20000000           563 ns/op
PASS
ok      _/home/amscanne/gotest/spin 11.805s
12.68user 1.00system 0:11.98elapsed 114%CPU (0avgtext+0avgdata 46036maxresident)k
0inputs+3816outputs (0major+19758minor)pagefaults 0swaps

After:

/usr/bin/time go test -bench=.* -benchtime=5s
goos: linux
goarch: amd64
BenchmarkPingPong-4     20000000           330 ns/op
PASS
ok      _/home/amscanne/gotest/spin 6.949s
7.11user 0.05system 0:07.11elapsed 100%CPU (0avgtext+0avgdata 46460maxresident)k
0inputs+3824outputs (0major+19084minor)pagefaults 0swaps

The roughly 20x difference in system time is telling, and the extra 14% in CPU usage is indicative of an additional P waking up with nothing to do. (Or maybe it occasionally succeeds in stealing the goroutine, which is also bad.)

Assuming this small optimization is readily acceptable -- what's the best way to group those operations and transform the channel calls? The runtime bits are straightforward, but any up-front guidance on the compiler side is appreciated. Otherwise, I'm just planning to call a specialized scan in walkstmtlist, but maybe there's a better way.

rsc commented 5 years ago

Given that there is no language change here anymore, going to move this to being a regular issue.

prattmic commented 3 years ago

I've started looking into this. I've got a very naive implementation (probably very similar to Adin's) to use with his microbenchmark.

Combined with perf stat, we can see higher-level system effects of the change.

Fixed time (-benchtime=1s):

name                   old time/op  new time/op  delta
ChanPingPong-12         467ns ± 2%   247ns ± 2%  -47.07%  (p=0.000 n=9+10)

name                   old iters    new iters    delta
ChanPingPong-iters-12   2.51M ± 5%   4.44M ±14%  +76.87%  (p=0.000 n=10+10)

name                   old msec     new msec     delta
Perf-task-clock         2.01k ± 2%   1.38k ± 5%  -31.43%  (p=0.000 n=10+8)

name                   old val      new val      delta
Perf-context-switches   44.1k ± 2%    0.7k ± 8%  -98.52%  (p=0.000 n=10+8)
Perf-cpu-migrations       182 ±19%      10 ±27%  -94.39%  (p=0.000 n=10+9)
Perf-page-faults          518 ± 8%     511 ± 8%     ~     (p=0.536 n=10+9)
Perf-cycles             4.58G ± 2%   5.54G ± 6%  +21.05%  (p=0.000 n=10+8)
Perf-instructions       8.21G ± 3%  11.22G ± 6%  +36.63%  (p=0.000 n=10+8)
Perf-branches           1.69G ± 3%   2.30G ± 6%  +35.76%  (p=0.000 n=10+8)

Fixed iterations (-benchtime=10000000x):

name                   old time/op  new time/op  delta
ChanPingPong-12         473ns ± 2%   241ns ± 3%  -49.00%  (p=0.000 n=10+10)

name                   old msec     new msec     delta
Perf-task-clock         5.68k ± 3%   2.43k ± 3%  -57.15%  (p=0.000 n=10+10)

name                   old val      new val      delta
Perf-context-switches    125k ± 3%      1k ± 9%  -99.54%  (p=0.000 n=10+9)
Perf-cpu-migrations       517 ±13%      11 ±51%  -97.95%  (p=0.000 n=10+10)
Perf-page-faults          469 ± 9%     473 ±11%     ~     (p=0.928 n=10+10)
Perf-cycles             13.1G ± 2%   10.4G ± 2%  -20.56%  (p=0.000 n=10+10)
Perf-instructions       23.2G ± 0%   21.0G ± 0%   -9.52%  (p=0.000 n=10+8)
Perf-branches           4.79G ± 0%   4.31G ± 0%  -10.11%  (p=0.000 n=10+8)

I've included both since the different fixed dimensions change the interpretation. E.g., the first case has higher cycles after because it is simply able to do a lot more work. And it still does nearly double the iterations in 30% less CPU time (== far less time stalled)!

This certainly looks worthwhile from the micro-benchmark perspective. The questions remaining to me are if we can efficiently and reliably detect these scenarios, and if they affect many programs.

prattmic commented 3 years ago

For future reference, here's @amscanne's prototype: https://github.com/amscanne/go/commit/eee812b594577f71894fd30a27d9a39ba99bf590

This is a bit more advanced than mine, as I haven't made any compiler changes yet.

gopherbot commented 3 years ago

Change https://golang.org/cl/254817 mentions this issue: WIP: merge chansend1 + chanrecv1 into unified chansendrecv1

GuhuangLS commented 3 years ago

> For future reference, here's @amscanne's prototype: amscanne@eee812b
>
> This is a bit more advanced than mine, as I haven't made any compiler changes yet.

@prattmic Michael, I have one question: does amscanne@eee812b need any API changes, and does the user's program need to be aware of it? How does the compiler make the decision?

prattmic commented 3 years ago

Neither @amscanne's prototype nor mine changes any language syntax or APIs. Instead, the compiler detects a channel send followed immediately by a channel receive and, rather than calling the typical runtime.chansend and runtime.chanrecv functions, emits calls to alternative implementations (a single merged runtime.chansendrecv in my case).
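For example, a sketch of what would and wouldn't qualify (function names here are made up; per the discussion above, the trigger is a blocking send followed immediately by a blocking receive):

package example

// fused is a candidate: the send is immediately followed by the receive, so
// the compiler could emit a single merged runtime call for the pair.
func fused(req chan<- int, resp <-chan int, v int) int {
    req <- v
    return <-resp
}

// notFused is not a candidate: other statements sit between the send and the
// receive, so the two operations keep their usual separate runtime calls.
func notFused(req chan<- int, resp <-chan int, v int) int {
    req <- v
    otherWork()
    return <-resp
}

func otherWork() {}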

Both prototypes are rudimentary, would probably hurt performance for many programs due to poor decisions, and would need more refinement.

gopherbot commented 1 year ago

Change https://go.dev/cl/473656 mentions this issue: runtime: don't usleep() in runqgrab()