golang / go

The Go programming language
https://go.dev

cmd/compile: profile-guided optimization #55022

Closed cherrymui closed 1 year ago

cherrymui commented 1 year ago

We propose adding support for profile-guided optimization (PGO) to the Go gc toolchain. PGO will enable the toolchain to perform application- and workload-specific optimizations based on run-time information. Unlike many compiler optimizations, PGO requires user involvement to collect profiles and feed them back into the build process. Hence, we propose a design that centers user experience and ease of deployment and fits naturally into the broader Go build ecosystem.

Detailed design can be found here.

In summary, we propose accepting profiles in the standard pprof format, adding a -pgo flag to go build (with an auto mode that picks up a default.pgo file in the main package's directory), and starting with profile-guided inlining as the first optimization.

Input welcome. Beyond input on the general approach, we're particularly looking for input on whether PGO should be default enabled in 1.20, and flag and file naming conventions.

If accepted, we plan to implement a preview of PGO in Go 1.20. Raj Barik and Jin Lin plan to contribute their work on the compiler implementation (initial prototype in CL 429863, though it does not yet follow the proposed API).

See also previous discussions at #28262. Filing a new issue to make it clearer what we are proposing. Thanks.

cc @aclements @prattmic @rajbarik

gopherbot commented 1 year ago

Change https://go.dev/cl/430355 mentions this issue: design/55022-pgo.md: add design doc

rhysh commented 1 year ago

This looks very interesting! Starting with the pprof format sounds good, but I have two concerns about its verbosity.

First, the files can be really big and processing them can be slow. When I work with profiling data manually, I often start with pprof-format files that include hundreds or thousands of samples, usually covering less than one thread-minute. For PGO, I'd be inclined to use "the most and best data available". For some applications I support, that means multi-megabyte pprof-format files which take multiple seconds to load. That's a big file to add to an application's source control repository (for build environments that work that way), and a big build speed hit if it applies to every package. The approach of "Some of these downsides can be solved transparently by converting the profile to a simpler, indexed format at build time and storing this processed profile in the build cache" sounds promising, but I wonder if that format should be obtainable through some explicit go tool subcommand, which could then be used until the next Go version series (re-generated when upgrading from go1.20.x to go1.21.y).

Second, how can application owners understand what the compiler is learning from the profile? When I encounter a bug that might be affected by or otherwise involve PGO, I'd like to be able to report it (including steps to reproduce) without accidentally disclosing proprietary information about the rest of my applications' performance and without accidentally changing the compiler's PGO-related decisions. When I have the choice between using a 100kB pprof file or a 10MB pprof file, I'd like to know if using the more detailed file actually changes the results. When reviewing a change to the PGO input file (maybe to adjust its weight towards more latency-critical parts of the app), I'd like to know what it's communicating to the compiler. These could also be addressed by a "simpler, indexed format" that is semi-stable and human-readable / human-redactable.
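
(As an aside, some redaction is already possible today with the github.com/google/pprof/profile package. Here's a rough sketch, with placeholder file names and regexp — though note that filtering frames this way could itself change the compiler's PGO-related decisions, which is exactly my concern:)

package main

import (
  "os"
  "regexp"

  "github.com/google/pprof/profile"
)

func main() {
  f, err := os.Open("cpu.pprof") // placeholder input
  if err != nil {
    panic(err)
  }
  defer f.Close()

  p, err := profile.Parse(f)
  if err != nil {
    panic(err)
  }

  // Keep only frames in the package being reported on; all other
  // locations are dropped from the sample stacks.
  show := regexp.MustCompile(`^encoding/json\.`)
  p.FilterSamplesByName(nil, nil, nil, show)

  out, err := os.Create("redacted.pprof")
  if err != nil {
    panic(err)
  }
  defer out.Close()
  if err := p.Write(out); err != nil {
    panic(err)
  }
}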

cherrymui commented 1 year ago

@rhysh thanks for the feedback!

Yes, if we later switch to a different format, we will provide a tool to convert profiles between the formats. And yes, if we choose a different format we will consider how to make it more human-readable. (We considered this possibility, but it was unclear to me how to do it. There is also a range of possibilities: the human-readable form could range from very low level, basically a textual representation of the raw profile, to very high level, basically a list of "compiler optimization directives", or something in between.)

Relatedly, I think the compiler will have an option to output what optimizations were done based on the profile, either through the -m or -json flags, or a new flag. Maybe that helps?

rhysh commented 1 year ago

Visibility into the compiler's decisions via -m/-json/etc sounds like a big help, yes. That lets users report bugs like "I have this problem with the 'encoding/json' package when the compiler uses 'this list' of PGO decisions". Will the maintainers who receive a bug report like that be able to convince their copy of the compiler to apply the same set of PGO decisions?

If so, it sounds like we have the "simpler [...] format" described in the proposal, in an ad-hoc way. Maybe ad-hoc is all we need, and would keep the messier and less stable parts of the ecosystem out of the main go toolchain. But I can see it being useful enough to earn top-tier support over time.

This sounds like the "basically a list of 'compiler optimization directives'" option. What design challenges did you encounter with that? Thanks @cherrymui !

josharian commented 1 year ago

Like @rhysh, I feel strongly that the .pgo file format should be something more like a hand-editable list of compiler optimization directives, and definitely not a raw input like a pprof file. Reciting the list of reasons I gave for this previously, and adding some more...

cherrymui commented 1 year ago

Interesting idea about using a compiler optimization directive list for bug reports. Thanks.

I had thought that for bug reports involving PGO we would expect one to share the profile in order to reproduce the buggy build. It is understandable that this can be difficult in some cases. Maybe we can consider supporting an optimization list, even if just for debugging purposes. (FWIW, we even had a prototype of an optimization list (thanks @dr2chase), which I used for early experiments.)

One challenge is how to generate such an optimization directive list from raw profiles. For a streamlined user experience, one would take profiles from a production binary, and go build could use them to build a new binary without much manual work. So if we use an optimization list, we will need a tool to convert/compute it from the raw profile. I would think the tool would need to not only parse the profile but also parse the source files and do some analysis, such as control flow propagation, in order to map profile samples to optimization decisions. This has a lot of overlap with what the compiler already does. So, instead of building a separate tool (and keeping it in sync with the compiler), probably we should just let the compiler do it.

Another benefit of profiles is that we can drive multiple optimizations from, say, a single CPU profile. If we add a new type of PGO-based optimization, one automatically gets the optimization without taking a new profile. With optimization lists, one would need to explicitly add it to the file.

I'm not really clear on what the optimization list would include. Should it be binary (boolean) options, or weights? Binary options sound simpler, but maybe weights are better, so the compiler can combine them with information from the source code and its own analysis?

There is also the question about the file becoming stale. Of course profiles can also become stale. It might be possible to design an optimization directive list format that is resilient to staleness.

Maybe one option is to have the compiler accept both: given a profile, it could also emit the corresponding optimization list, which could then be used for future compilations.

rajbarik commented 1 year ago

Do you have any concrete examples of how an optimization file gets generated for a specific profile? Do any existing compilers do this today? Please let us know.

Typically compiler optimizations such as inlining, specialization, code layout, register allocation and instruction scheduling can use profile information to improve application performance by optimizing hot-paths.

klauspost commented 1 year ago

Non-main packages

As a package writer, I cannot supply any improvements. It would require package users to implement pgo benchmarks that hit my package. Writing these benchmarks to build CPU profiles is far from trivial. As a result, 99% of all builds that include my packages would be without (useful) pgo.

You mention this as future work, but it seems like a big miss.

Platforms

This will only cover platform-independent code. Much optimized code has separate code paths for different platforms. As proposed, I don't see any reasonable way to handle multiple platforms automatically, for example when cross-compiling.

You could make this a user problem, and say they should use go build -pgo=$GOOS_$GOARCH.pgo, but it isn't very friendly, and will probably hinder adoption.

Future Optimization Scope

I may be lacking imagination, but I don't see the scope of this going much beyond inline decisions.

Maybe with branch information, you could re-order unordered switch statements to check the most common targets first. Maybe some auto-vectorization could be done in a few cases.

To justify such a rather complex setup, it is important to keep the end goal in sight.

Counter Proposal: Simplify Proposal

I appreciate the intent to make this automatic and low-effort. But writing the benchmarks and/or capturing useful profiles is a big task, and for a big application with many subsystems it will be a massive one. While the proposal suggests you can just run pprof on a production server and apply the profile to get a faster application, it isn't really that simple, since profiles need to be captured continuously as the code changes.

I very much agree with Josh, et al, that this should be simpler and in the hands of the developer.

To add to that, I know that Go doesn't like pragmas, but it seems to me that with a few well-designed pragmas we could get the same result without all the problems. I feel that would bring us the benefit we are looking for in a much more controlled manner, as described by Josh.

As far as I can see, having a program propose pragmas (or code reordering) from a profile would be just as simple, and it allows you to review the proposed changes before they are committed.

If Go still isn't ready for performance-related pragmas, then an external file describing the same would also be an (IMO suboptimal) solution. I dread a bit how you would specify inlining at callers and auto-vectorization of a for loop without resorting to line numbers in such a file.

merykitty commented 1 year ago

@klauspost Given that inlining is the most significant optimisation mechanism, I believe that even in its current state pgo is a very valuable improvement. In addition to inlining, pgo can be applied to interface dispatch, which helps devirtualise method invocations; profiling branch-taken frequency helps produce better code layout at the very least, and guides the optimiser toward better decisions in other operations such as loop unrolling and cloning operations through phi nodes; in extreme cases branch pruning is also a possibility.

While pragmas or external declarative files work at the level of the call graph, optimisation decisions are most effective with information about call trees, so this would be a suboptimal proposal. Additionally, such a file would not be an internal detail but part of the public interface, so I would disagree with this idea.

klauspost commented 1 year ago

@merykitty Thanks! I see your point with interface dispatch, where knowing the common implementation could let the compiler emit implementation-specific branches.

The other cases I don't really see as something that couldn't be done with pragmas. Granted, generic implementations could make this hard to control, since pragmas would probably apply to all instances.

I don't see how you can do branch pruning without the compiler being able to prove that branches are impossible, in which case you don't really need pgo. Branch reordering and loop unrolling (without vectorization) have very limited impact on modern CPUs.

Either way, I stand by my point that this, as proposed, seems like a complex system that is hard to use correctly and will provide limited benefits. It is not "set up and forget", which is my main problem.

I am hoping for a simpler, more controllable system that doesn't require a main-function implementation with continuous updates, and which provides 90% of the benefits.

mvdan commented 1 year ago

One worry I have with using profiles directly is that it can easily be an unstable mechanism. If I profile a program's current build and use that profile to rebuild it with PGO, grabbing another profile would likely result in fairly different results. For example, if a function call used to take ~2% of CPU time due to being in a very hot path and PGO now inlines it, then it could become far less relevant in the second profile.

I imagine we want developers to only use PGO with profiles obtained from runs without PGO. Can we restrict that to prevent confusion?

prattmic commented 1 year ago

@mvdan The ability to collect profiles from a PGO-optimized binary is actually a core requirement in our design because it simplifies profile collection significantly (you can collect directly from production deployments without need for some kind of unoptimized "canary" deployment used only for profile collection).

It certainly has the challenge you mention, which we call "iterative stability" and discuss here and here. Sections 4.2.1 and 5.2 of the AutoFDO paper also discuss this. This is something we will need to pay close attention to and test well to make sure results are stable, and may be an area where additional metadata could help (e.g., collecting a CPU "last branch record" (LBR) profile would tell us which basic blocks are frequently executed, even if they have become much cheaper, so, used carefully, that could further mitigate this issue).

prattmic commented 1 year ago

@josharian Having a list of optimization directives (or pragmas in the source) is something we've thought about, and it definitely has pros and cons. I certainly think we want PGO to be applicable from a profile (which I think you agree with?); the open question is whether translation to optimization decisions happens directly in the compiler or as a pre-processing step.

Some thoughts on the list of advantages below. Note that this list actually covers two axes: profile vs. optimization list, and binary vs. plain-text format. I think we could decide along either of these axes independently (e.g., we could use a plain-text format that describes the pprof profile).

  • the compiler can stay simpler (internal)

I'm not sure I fully agree with this. While I agree that the compiler will be simpler if it only takes a list of directives rather than having to process a profile, something still needs to convert the profile to a list of directives. That new tool will contain the complexity, and share a lot of similarities with the compiler, perhaps even directly sharing code, depending on the specifics of the directives.

e.g., if inlining directives are binary decisions ("inline this function"), then the conversion tool should probably contain a near copy of the compiler's inlining heuristics so that it makes very similar decisions. That is perhaps less important if the directives are more abstract (providing an inlining importance "weight"?), but if we go too abstract then we are probably just describing a profile anyways.

As Cherry mentioned, one option here is that the compiler could accept either a profile or optimization list. When given a profile, it generates the optimization list to be used for future compilations.

  • it is easier/possible to write compiler tests (internal)
  • it is easier to debug: is the issue the interpretation of the pprof profile? (internal)

I agree this would be easier to test/debug because it adds an intermediate format which you could test both sides of.

  • users can easily see what it is doing
  • users can bisect/minimize to figure out what optimization directive is causing issues (or providing speed-ups)

100% agreed that a plain-text format and an optimization list are more transparent about what is happening in the build.

  • provides a mechanism for advanced tuning, that everyone always wants compiler directives for
  • it opens the door for other mechanisms/tooling to add information and optimizations, such as: offline static analysis, offline human analysis, bespoke profiling techniques, or combining profiles taken across a fleet of machines

While I agree that an optimization list makes manual optimization tuning easier/possible, that has not been a goal of this proposal, and it is something we have avoided adding pragmas for (also likely a reason we wouldn't have PGO work by adding pragmas to source code). I think it is a bigger discussion if we want to support custom tuning of optimizations.

or combining profiles taken across a fleet of machines

This strikes me as out of place in your list. We absolutely want to support (and encourage!) merging profiles from multiple instances in order to get a more representative sample. This seems to be a point in favor of pprof, as it is easy to merge pprof profiles (go tool pprof -proto 1.pprof 2.pprof > merged.pprof).
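
(If you want this in fleet tooling, the same merge is available programmatically via the github.com/google/pprof/profile package; a minimal sketch with placeholder file names:)

package main

import (
  "os"

  "github.com/google/pprof/profile"
)

func main() {
  var profs []*profile.Profile
  for _, name := range []string{"1.pprof", "2.pprof"} { // placeholder inputs
    f, err := os.Open(name)
    if err != nil {
      panic(err)
    }
    p, err := profile.Parse(f)
    f.Close()
    if err != nil {
      panic(err)
    }
    profs = append(profs, p)
  }

  // Merge sums sample values across the input profiles.
  merged, err := profile.Merge(profs)
  if err != nil {
    panic(err)
  }

  out, err := os.Create("merged.pprof")
  if err != nil {
    panic(err)
  }
  defer out.Close()
  if err := merged.Write(out); err != nil {
    panic(err)
  }
}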

On the other hand, optimization lists seem difficult to merge. If one list says "inline foo" and the other says "do not inline foo", how do you merge? It seems in this case you'd still want to merge the profiles prior to generating the optimization list.

  • it at least has a chance of supporting other common forms of optimization information, such as dynamic type info, starting goroutine stack sizes, scheduler hints

Earlier in our design we were planning to create a new PGO format specifically to have the flexibility of adding information beyond CPU profiles. We switched to proposing pprof directly because the format, while not perfect, is actually quite flexible.

e.g., dynamic type info could be encoded as a Label on Samples of calls describing the type being called (though for calls the type is already obvious from the call destination, so not the best example). Stack sizes could be sampled as Samples with Location == gp.startpc, and value == stack size.
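
(To make that concrete, here is a sketch in terms of the pprof library's Go types — purely illustrative, not a proposed schema; the label key and value are invented for the example:)

package pgoenc

import "github.com/google/pprof/profile"

// addStartStackSample shows how a sampled goroutine stack size could
// ride along in the existing pprof types: one Sample whose single
// Location is the goroutine's start PC and whose value is the stack
// size, plus a string Label carrying extra metadata (here, a dynamic
// type). The Value slice must line up with the profile's SampleType.
func addStartStackSample(p *profile.Profile, startLoc *profile.Location, stackSize int64) {
  p.Sample = append(p.Sample, &profile.Sample{
    Location: []*profile.Location{startLoc},
    Value:    []int64{stackSize},
    Label:    map[string][]string{"dynamic-type": {"*net.UDPConn"}}, // example label
  })
}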

The format is not perfect and we may find ourselves limited in the future (discussion here), but I think there is a lot of runway.

  • if optimizations are not getting applied correctly (function renamed?), instead of having to generate a brand new profile, which could be expensive/difficult, it can be fixed directly by hand-editing

Agreed that a plain-text format makes fix-ups easier. We've discussed tooling for renames for pprof files, but plain text is easier.

  • easier to write CI tests around

I'm not sure what you mean by this, could you expand?

ianlancetaylor commented 1 year ago

@klauspost My opinion is that history tells us that using pragmas to guide optimizations doesn't sustain over time. What happens is that someone does a bunch of analysis with a specific compiler version and a specific set of benchmarks, and writes a bunch of pragmas that makes those benchmarks faster. So far so good. But two years later the compiler has changed, the libraries have changed, and the program has changed. The existing optimization pragmas have not changed, and no longer do any good, and occasionally actually do harm.

Profiling can have the same problem, of course, if nobody updates the profiles. But updating the profiles is a much simpler task than analyzing performance in detail, and therefore tends to happen more frequently. In the best case, updating the profiles is simply automated.

So over the long term, I believe that profile guided optimization is a better approach for most projects. It's unlikely to beat careful analysis and hand tuning at the point in time when that happens. But profile guided optimization is much cheaper in what is for most projects the greatest expense: developer time.

Just my opinion, of course, but based on experience.

merykitty commented 1 year ago

@klauspost, just a small point

I don't see how you can make branch pruning without the compiler being able to prove that branches are impossible

Branch pruning here means splitting everything below the if into separate hot paths and cold paths within the whole function.

prattmic commented 1 year ago

@klauspost

Platforms

This will only cover platform-independent code. Much optimized code has separate code paths for different platforms. As proposed, I don't see any reasonable way to handle multiple platforms automatically, for example when cross-compiling.

You could make this a user problem, and say they should use go build -pgo=$GOOS_$GOARCH.pgo, but it isn't very friendly, and will probably hinder adoption.

This is something we've been thinking about recently and are open to suggestions. I think we may want automatic build configuration selection eventually, but the details are tricky. Do we use suffixes, like .go files (default_linux_amd64.pgo)? That could make directories quite noisy with a dozen or more files (should we move to a subdirectory like testdata?). How should we handle selection for build tags?

What we've proposed is, I think, the MVP option, which remains forward compatible with any future auto-selection we might do.

(FWIW, I think that a single profile for all platforms is likely good enough for most users. A platform-specific one will of course be at least a bit better, but my intuition says that the vast majority of most profiles will be common across platforms. We could verify this with data by comparing profiles.)

aclements commented 1 year ago

@klauspost

I may be lacking imagination, but I don't see the scope of this going much beyond inline decisions.

To add to what others have said, we've actually been keeping an eye toward many possible PGO-based optimizations while designing this. Here's a non-exhaustive list:

Edit 2023-09-05: A more up-to-date and exhaustive list is here

cherrymui commented 1 year ago

@klauspost thanks for the feedback!

Non-main packages

I totally agree that PGO for non-main packages is important (we want to do it for the standard library as well), and we spent quite some time considering it. But many details are still unclear to us. At the moment we're proposing what we know how to do, and leaving the door open for future improvements, including PGO for non-main packages. I think this is better than waiting.

It would require package users to implement pgo benchmarks that hits my package.

This is not what we expect. Instead, we expect users to take profiles from their main program. If your package is on the hot path in that program, PGO will apply to your package.

Platforms

As @prattmic mentioned, we are proposing an MVP option which remains forward compatible with future improvements.

As mentioned in the design doc

In the future, we may support automatic lookup for different profiles for different build configurations (e.g., GOOS/GOARCH), but for now we expect profiles to be fairly portable between configurations.

By "portable", for example, if the workloads are similar across platforms, the hot functions on one platform is likely also hot on another platform.

See also "Input welcome" from the original post. We're particularly welcome input for naming default profiles.

Future Optimization Scope

I'm not really sure what your comment means, but here is a (non-exhaustive) list of optimizations we plan to do in the future: https://go.googlesource.com/proposal/+/master/design/55022-pgo.md#optimizations

apparentlymart commented 1 year ago

I will first admit that I do not have direct experience with profile-guided optimization in any other language, and so my line of questioning here may be naive. If this seems like a nonsense question, I'm happy to accept that as an answer to avoid anyone feeling the need to do my homework for me! 😄


The proposal and some of the commentary above discusses the fact that a particular set of profile information is specific to a particular version of the compiler (because it may generate code with different execution characteristics) and to particular input source code (because changing the program will at least change the behavior of the part of the program that changed, and may also have knock-on effects elsewhere from optimizations that are able to consider global information).

I quite enjoy the fact that go test is able to (mostly) reliably determine when a particular part of the test cache has been invalidated by changes to the input source code. It works well enough that I rarely question whether it's working correctly, and am surprised in the rare cases where I catch it out by doing something particularly odd!

Of course I understand that profile-guided optimization cannot possibly be implemented with an automatically-maintained cache, because gathering profile information is always an explicit step and similarly it's up to the person running go build to decide which profile information is relevant to the current situation.

However, I do wonder if it seems feasible for the toolchain to automatically announce when a particular profile information file has been invalidated, so that I could be prompted to regenerate it. I assume "invalidated" is not a boolean value in this case but perhaps more like a code coverage report: how much of the program is still the same now as it was when this profile information was generated? What specific parts of the program are not covered by this profile because they have changed since the profile was generated? Was the profile generated from a program compiled with the same toolchain version as I'm currently using? (Hopefully this could also somehow take into account the degree to which the system can "make do" with mismatching profiles, giving higher precedence to changes that are outside the set that the compiler and trace format can account for automatically, as described in Stability to source code and compiler changes.)

With what I've understood from reviewing the proposal so far it seems like it would be too expensive to track this sort of "profile coverage" on a per-line or per-statement basis, since it would seem to require retaining some sort of checksum for every single line/statement. But I wonder if it would be feasible to track at a coarser grain, such as per-function, per-file, or per-package, just to give the user something meaningful to understand the results against, so that they can then use their intuition to estimate whether the indicated objects have changed enough to warrant the effort of capturing fresh profile data.
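
(To illustrate the grain I mean, here is a sketch of hypothetical standalone tooling — not anything in the proposal: hash each function's source text when the profile is captured, then diff the hashes at build time. File names are placeholders.)

package main

import (
  "crypto/sha256"
  "fmt"
  "go/ast"
  "go/parser"
  "go/token"
  "os"
)

// funcHashes returns a content hash for each top-level function in a
// file: a per-function fingerprint that a coverage/staleness report
// could be built on.
func funcHashes(filename string) (map[string][32]byte, error) {
  src, err := os.ReadFile(filename)
  if err != nil {
    return nil, err
  }
  fset := token.NewFileSet()
  file, err := parser.ParseFile(fset, filename, src, 0)
  if err != nil {
    return nil, err
  }
  hashes := make(map[string][32]byte)
  for _, decl := range file.Decls {
    fn, ok := decl.(*ast.FuncDecl)
    if !ok {
      continue
    }
    start := fset.Position(fn.Pos()).Offset
    end := fset.Position(fn.End()).Offset
    hashes[fn.Name.Name] = sha256.Sum256(src[start:end])
  }
  return hashes, nil
}

func main() {
  old, err := funcHashes("server_at_profile_time.go") // snapshot stored alongside the profile
  if err != nil {
    panic(err)
  }
  cur, err := funcHashes("server.go")
  if err != nil {
    panic(err)
  }
  for name, h := range cur {
    if prev, ok := old[name]; ok && prev != h {
      fmt.Printf("%s changed since the profile was captured\n", name)
    }
  }
}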

(For this question I'm imagining a situation where a team collects profile information semi-manually once and uses it for some time before semi-manually generating it again. I expect that the capability I'm describing would be far less useful in situations where a team is able to constantly gather profile information for a currently-running system and feed it into the next build, as described in Iterative Stability. The software I spend most of my time working on is "shrinkwrap" software which we package and ship to users to run on their own computers, and so our ability to capture traces from real user runs is limited. "This feature is primarily intended for servers you can profile constantly" would be a perfectly reasonable argument to dismiss my questioning above, of course.)

josharian commented 1 year ago

easier to write CI tests around

I'm not sure what you mean by this, could you expand?

Yes, that was rather vague. :) It's similar to @apparentlymart's point (which is a good one).

In many systems, performance properties are correctness properties. Losing key optimizations can cause performance regressions, which can cause existing provisioning to be insufficient, which can take down a system. Even merely slowing down a system can cause cascading failures. (I speak from experience, unfortunately.)

The more opaque the toolchain and its inputs are, the harder it is to (a) write safety checks that detect performance problems before they make it to production and (b) diagnose performance issues after they make it to production. This is in some ways even worse with less severe regressions, which don't result in an obvious, immediate problem.

The obvious approach here using pprof files is to have excellent compiler diagnostic output. This puts us in the same boat as (for example) current inlining tests: exec a build from inside a test and check for magic output strings. It's kind of a miserable existence, but it works.

@apparentlymart's notion of a PGO fitness/staleness score would also help, albeit in a less fine-grained way.

josharian commented 1 year ago

Or to put it a different way: How do I code review a commit that replaces one pprof profile with a new one?

prattmic commented 1 year ago

Thanks for the clarification, that makes sense. Having an optimization list doesn't solve the problem of determining if a new profile has "good" results, but it is at least easier to look at a diff and notice potentially worrying changes.

(Though if the list is thousands of lines long and changes a lot from profile-to-profile, then I imagine that could also be difficult to review).

cherrymui commented 1 year ago

Thanks.

I think we all agree that we want the compiler to emit an optimization list for optimizations it does based on a given profile.

Interesting idea about profile invalidation; I think that is a good point. We can make the compiler emit some information when some profile information doesn't apply, either in the same optimization list file or some other file. I think it shouldn't be too hard or expensive.

rajbarik commented 1 year ago

I recommend that we use the same technique as AutoFDO to deal with stale profiles, i.e., use <pkg_name, function_name, line_offset> instead of <pkg_name, function_name, line_number>. With line-offset information, we can identify whether a call site has moved up or down relative to the start of the function. In that case, we do not inline the call site; however, other call sites that have not moved will continue to use the profile information and get optimized. This design also allows functions that are unchanged to continue to be optimized via PGO.
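
(For illustration, computing the function-relative offset from a pprof profile is straightforward once Function.start_line is populated — see CL 438255 below; a minimal sketch with a placeholder input file:)

package main

import (
  "fmt"
  "os"

  "github.com/google/pprof/profile"
)

// Print each profiled line as <function>+<offset from function start>,
// the AutoFDO-style key described above. Requires a profile whose
// Function.start_line fields are populated.
func main() {
  f, err := os.Open("cpu.pprof") // placeholder input
  if err != nil {
    panic(err)
  }
  defer f.Close()
  p, err := profile.Parse(f)
  if err != nil {
    panic(err)
  }
  for _, s := range p.Sample {
    for _, loc := range s.Location {
      for _, line := range loc.Line {
        offset := line.Line - line.Function.StartLine
        fmt.Printf("%s+%d weight=%d\n", line.Function.Name, offset, s.Value[0])
      }
    }
  }
}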

It should be easy to produce a post-PGO report containing a list of optimized functions and their corresponding PGO-enabled optimizations.

A couple more notes:

cherrymui commented 1 year ago

@rajbarik yes, this is what we are proposing. https://go.googlesource.com/proposal/+/master/design/55022-pgo.md#stability-to-source-code-and-compiler-changes "We propose using function-relative line numbers ..."

It is unlikely that a PGO enabled binary will crash in production compared to its non-PGO version

I agree that it is unlikely, but not impossible. We still want to make debugging that easier.

aclements commented 1 year ago

@apparentlymart, you're right that shrink-wrap software generally isn't our target for this. Perhaps we should mention that in the proposal and call out simplifying it as potential future work. There are still a few options there, though: one is to use PGO in a more traditional way, where you gather profiles from representative benchmarks and immediately feed those back into the second build. Our proposed PGO isn't optimized for that use case, but it should still work perfectly fine, subject to the usual limitations of benchmark-based approaches. We might even do that ourselves for the Go toolchain, perhaps using the standard library build as a benchmark.

The other option is that, if releases are frequent enough and you have telemetry (and subject to potential PII issues), you could imagine phoning home with profile data. That's obviously going to require a fair amount of infrastructure, especially to make it efficient.

aclements commented 1 year ago

I find the arguments that @rhysh and @josharian have been making about debuggability (and bug reportability) pretty persuasive, but I also strongly believe that using pprof profiles as the source of truth dramatically simplifies the overall user experience and the overall complexity of the system (even if it may put more complexity in the compiler vs other approaches).

So, I'm brainstorming a bit on what @cherrymui mentioned above about the compiler taking its own optimization report format back as input. Supposing that format is really intended for closed-loop debugging, it wouldn't have to have the same stability properties as the profile. It would strike a balance of machine-readable and user-editable, so the compiler can read it, tools could use it to check for critical optimizations, users could filter/obfuscate it for bug reports, and compiler devs could use it to their heart's content for testing and debugging. We could probably use the -json format as a starting point, though I'm pretty sure we'd have to at least tweak some things because the needs of IDE-as-consumer and compiler-as-consumer aren't quite the same. On the implementation side, we would have to be careful to abstract out whether a decision is being made from a profile versus whether it's being made from this optimization control file, but that's probably a good abstraction boundary to have anyway. Some things like inlining decisions are easy to express in this. Others, like the block hot path for regalloc, could be rather hard.
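
(Purely as a strawman, one record in such a closed-loop file might look something like the following — every field name is invented for this sketch, not a proposed format:)

package optreport

// OptDecision is one entry in a hypothetical machine-readable,
// user-editable optimization report that the compiler could both emit
// and accept back as input. All names here are illustrative only.
type OptDecision struct {
  Pass     string `json:"pass"`             // e.g. "inline", "devirtualize"
  Function string `json:"function"`         // fully qualified function name
  LineOff  int    `json:"lineOffset"`       // function-relative line of the site
  Callee   string `json:"callee,omitempty"` // target, for call-site decisions
  Applied  bool   `json:"applied"`          // the decision that was made
  Weight   int64  `json:"weight,omitempty"` // profile weight behind the decision
}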

One thing to bear in mind is that we're mostly expecting PGO to replace the decision heuristics of existing optimizations, which means those optimizations are already happening and could have happened with or without PGO. That doesn't mean PGO won't tickle bugs by turning them on in places they wouldn't usually be turned on, but it does reduce the risk of PGO producing incorrect code. Indirect call devirtualization is a notable exception to this, since it's hard to do that optimization at all without profile data.

Or to put it a different way: How do I code review a commit that replaces one pprof profile with a new one?

That's a really interesting question! But, on the flip side, if you're depending on deep compiler optimization decisions, how do you ensure that source changes don't break those during code review? I think the vast majority of people don't depend on these and are just happy when their code gets faster, though they aren't the primary audience for PGO. I think most people who do care use benchmarks, and you'd benchmark the profile commits just like source commits. And some people run tools that check the compiler's optimization decisions (there are at least two such users: the Go project and @josharian :) ), and you'd run the same tools for profile and source updates. Granted, it's less clear what you do if a profile update fails such a check, but that may be a good signal that you have a stale optimization requirement (I know we have those in the Go project's list, since I just removed one).

dr2chase commented 1 year ago

A possibility for keeping track of changes to optimizations is to keep a copy of the "bad news from the compiler" JSON file intended to tell users about likely performance problems. I.e., it's a not-optimization log. It's currently aimed at LSP, but early on I did write a profile+badnews processor to filter the list to "here's the missed optimizations you might care about". (Said tool is currently failing its tests, because optimization logs are fragile, oh well.)

So for this purpose, you might have a test that used -gcflags=-json=0,$PWD/test.lspdir and then scan that to ensure that certain sources-lines + missed-optimizations do not appear in the output. It's not necessarily the finest format for this purpose (line-numbering conventions come from LSP) but it is already there.
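
(A sketch of the scanning half of such a test, assuming the LSP Diagnostic shape with range/message fields; the real output may differ in its details:)

package main

import (
  "encoding/json"
  "io"
  "strings"
)

// diag models just the fields we need from an LSP-style Diagnostic.
// Per the note above, line-numbering conventions come from LSP.
type diag struct {
  Range struct {
    Start struct {
      Line int `json:"line"`
    } `json:"start"`
  } `json:"range"`
  Message string `json:"message"`
}

// findMissedOpt reports whether the diagnostic stream contains a
// message matching substr at the given line, e.g. "escapes to heap"
// at a line we require to stay allocation-free. Records that don't
// match the diag shape decode to zero values and are skipped.
func findMissedOpt(r io.Reader, line int, substr string) (bool, error) {
  dec := json.NewDecoder(r)
  for {
    var d diag
    if err := dec.Decode(&d); err == io.EOF {
      return false, nil
    } else if err != nil {
      return false, err
    }
    if d.Range.Start.Line == line && strings.Contains(d.Message, substr) {
      return true, nil
    }
  }
}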

I am not 100% sure that we care why an optimization was done. We could in practice improve the default set of optimizations and where to apply them, and then they would be "no longer done because of profiling" -- but that's not actually a performance bug.

thanm commented 1 year ago

@josharian writes "I feel strongly that the .pgo file format should be something more like a hand-editable list of compiler optimization directives, and definitely not a raw input like a pprof file. Reciting the list of reasons I gave for this previously, and adding some more..."

This idea seems at least tractable for a compiler optimization like inlining, but I can't really imagine what it would look like for optimizations such as basic block layout or register allocation. For these two in particular, can you sketch what you have in mind? What would the transcript or the "optimization directives" look like?

josharian commented 1 year ago

@thanm Austin's recent comment makes me feel far more comfortable about the current trajectory.

josharian commented 1 year ago

I wonder a bit whether CPU profiles will be enough for devirtualization. Some important devirtualization optimizations work by way of allocations, not direct CPU cycles. Memory profiles might also be useful.

cherrymui commented 1 year ago

@josharian yes, there are optimizations we want to do later that may need more than the CPU profile. See also Austin's comment https://github.com/golang/go/issues/55022#issuecomment-1245605666 . We start with CPU profiles for now, and may extend it in the future. The pprof format is extensible, so we could still use that format. For profile collection, we may also want to extend the runtime/pprof API to make it easier. It is briefly discussed in the doc https://go.googlesource.com/proposal/+/master/design/55022-pgo.md#profile-collection , but nothing is concrete yet. Thanks.

aclements commented 1 year ago

@josharian

I wonder a bit whether CPU profiles will be enough for devirtualization. Some important devirtualization optimizations work by way of allocations, not direct CPU cycles.

That's an interesting example, though there are enough layers there that I'm not sure we could do that automatically (clicking down into UDPConn, I couldn't syntactically piece together that optimization, which doesn't bode well for a compiler figuring it out :) )

Are you saying that this could be a problem for CPU profiles because there may not be a lot of CPU time spent right there, but devirtualizing would still save a lot of allocations? When we tackle devirtualization, I think we're going to have to think hard about the heuristics. I was thinking more along the lines that the stacks in a CPU profile tell you that, say, >99% of the time, some indirect call has a particular target. That doesn't necessarily mean it's hot in an absolute sense, but it tells you it's worth exploring a specific devirtualization. The UDPConn case is again interesting because it may not be the vast majority of the call targets from there, so ">99%" may be wrong, but it's an important optimization for a particular category, and the profile would still tell you "it could be this", which is more than we know now.

The other dimension, which perhaps you were getting at, is that devirtualization would be best when combined with better escape analysis. Currently, if we naively devirtualize a call, the fallback path will often still force everything to escape. I think the big opportunity here is in some combination of devirtualization and more conditionalized escape analysis. (My vague plan is that our next optimization mountain after PGO will be better escape analysis.)

gopherbot commented 1 year ago

Change https://go.dev/cl/430398 mentions this issue: design/55022-pgo-implementation.md: add pgo implementation doc

rajbarik commented 1 year ago

Just fleshing out the details of the PGO-based devirtualization, let's assume we have the following code:

type Shape interface { Area() int }
type Square struct { x, y int }
func (a Square) Area() int { return a.x * a.y }
type Circle struct { r int }
func (c Circle) Area() int { return 3 * c.r * c.r }

func foo(s Shape) {
  for {
    s.Area()
  }
}

Our current implementation design proposes to build a WeightedCallGraph data structure from the pprof CPU profile. Thus, we are able to transform the virtual method call s.Area() in foo into the following code:

func foo(s Shape) {
  for {
    if c, ok := s.(*Circle); ok {
      c.Area()
    } else {
      s.Area()
    }
  }
}

That is, if our pprof CPU profile reveals that we are spending 90%+ of the time in Circle.Area(), then s.Area() is special-cased via an if-then-else branch that checks the runtime type of s and introduces a direct function call to c.Area(). Note that this c.Area() call will be inlined as long as its inlining cost is less than the fixed budget of 80 used by the current inliner. A PGO-enabled inliner, however, can go beyond this fixed budget and inline large functions if they are hot.
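
(As a sketch of that budget relaxation — the constants and the 1% hotness cutoff below are illustrative assumptions, not our actual heuristics:)

package main

const (
  inlineBudget    = 80  // cost threshold used by the current inliner
  hotInlineBudget = 160 // hypothetical relaxed threshold for hot call sites
)

// shouldInline sketches a PGO-aware inlining decision: cheap callees
// are inlined as before, and callees over budget are still inlined if
// the call site carries enough profile weight.
func shouldInline(cost, siteWeight, totalWeight int64) bool {
  if cost <= inlineBudget {
    return true // inlined with or without PGO
  }
  if totalWeight == 0 {
    return false
  }
  // Treat a call site as hot if it carries at least 1% of all samples.
  hot := float64(siteWeight)/float64(totalWeight) >= 0.01
  return hot && cost <= hotInlineBudget
}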

I know there will be many questions about why we introduce one if-then-else branch and not more, and whether the branch condition can introduce a slowdown. We have discussed some of these in our implementation proposal and are open to other ideas as well, particularly whether these decisions should also be driven by memory profiles [an idea from @josharian].

Coming back to the devirtualization example from @josharian, I would expect our implementation proposal above to do exactly this kind of transformation automatically in the compiler, by introducing an if-then-else based on the CPU profile. Furthermore, the call udpConn.ReadFromUDPAddrPort(b) could be inlined by our PGO-enabled inliner, given that it is a hot function, if that is not already done today.

I agree with @aclements on improving escape analysis based on memory profiles.

rajbarik commented 1 year ago

...When I encounter a bug that might be affected by or otherwise involve PGO, I'd like to be able to report it (including steps to reproduce) without accidentally disclosing proprietary information about the rest of my applications' performance and without accidentally changing the compiler's PGO-related decisions.

@rhysh would it also help if the compiler dumped and took in the per-package WeightedCallGraph mentioned in the implementation doc, instead of the large pprof profile file? We should be able to debug two PGO-enabled optimizations, Inlining and Function Ordering, using this graph data structure.

josharian commented 1 year ago

Are you saying that this could be a problem for CPU profiles because there may not be a lot of CPU time spent right there, but devirtualizing would still save a lot of allocations? When we tackle devirtualization, I think we're going to have to think hard about the heuristics. I was thinking more along the lines that the stacks in a CPU profile tell you that, say, >99% of the time, some indirect call has a particular target. That doesn't necessarily mean it's hot in an absolute sense, but it tells you it's worth exploring a specific devirtualization.

Indeed, the problem space here is interesting but complicated. For example, for very hot functions with interface args, instead of devirtualizing specific call sites, we might want to specialize the entire function. And it's not obvious to me how much of the devirtualization impact is from inlining vs escape analysis vs target lookup (branch prediction should be similar to jump target prediction); it might turn out that inlining and escape analysis opportunities should drive specialization decisions as much as uniformity. But this is all speculation and fine details. :) The key observation here is that effects are non-local (specialization in one place can impact escape analysis, which can impact distant callers) and not necessarily tightly tied to CPU usage. Anyway, all of this is wandering afield from the central aspects of the proposal.

My vague plan is that our next optimization mountain after PGO will be better escape analysis.

Yay!!! My informal, vague, but decade-of-experience sense is that inlining and escape analysis remain the highest impact opportunities.

rsc commented 1 year ago

Input welcome. Beyond input on the general approach, we're particularly looking for input on whether PGO should be default enabled in 1.20, and flag and file naming conventions.

People sound generally happy about this design.

-pgo for the flag, and default.pgo for the default name both sound fine.

Given that the enabling happens when a file is named default.pgo in the current directory, it seems unlikely people are going to trip over this. Starting with -pgo=auto by default from the start in Go 1.20 seems fine to me, especially since people can use -pgo=off if they happen to have a default.pgo lying around (in, coincidentally, exactly the right format, namely a pprof profile) for unexpected reasons.

Are there any remaining objections that we need to consider or discuss before moving this to likely accept?

rsc commented 1 year ago

This proposal has been added to the active column of the proposals project and will now be reviewed at the weekly proposal review meetings. — rsc for the proposal review group

aclements commented 1 year ago

@rhysh

When I work with profiling data manually, I often start with pprof-format files that include hundreds or thousands of samples, usually covering less than one thread-minute. ... That's a big file to add to an application's source control repository

This is an interesting point that I don't think we have a great answer to, but I'll share the thoughts from the wildly speculative discussion @prattmic and I just had. :)

I think there are three options:

  1. Make profiles smaller. This would almost certainly require changes to the pprof format. I don't know what this would look like, but we'd start by profiling the bytes of a large profile in order to make data-driven decisions about what to improve.
  2. Make profile diffs smaller. This can really depend on your source control system, but there are some interesting opportunities with git here.
  3. Put the profile outside of git, either using something like git-annex, or using the -pgo flag. I'm not sure there's much more we can do on our side to improve this experience.

Option 2 is actually kind of interesting. When git is building a packfile, it does binary delta compression plus zlib compression. The exact heuristics of how it picks which objects to delta compress are quite subtle, but I think they would pretty reliably pick up on replacing one default.pgo with a new default.pgo. This all suggests that if we can arrange for the profile to be efficiently delta-compressible, then the history storage would actually be fairly cheap. I'm not sure about other source control systems, but I think some form of delta compression is common.

The sample data is going to be pretty noisy, so I don't think there's much we can do to make that delta-friendly. But we could arrange to put all of the metadata together in some sorted order, and that's probably pretty delta-friendly. The other subtlety here is that profiles are zlib-compressed by default, which will totally destroy deltas. One option is to not compress them at all and depend on the source control system to compress its history store (which git does). A wilder option is to perform content-defined chunking on the metadata and compress it in blocks. The result would still be a valid zlib stream with slightly worse compression but potentially much better delta compression.
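
(The no-compression option requires no format change at all; the pprof library can already re-encode a profile without the compression layer. A quick sketch:)

package main

import (
  "os"

  "github.com/google/pprof/profile"
)

// Re-encode a profile without the compression layer so that a source
// control system's own delta compression can work on the raw protobuf.
func main() {
  f, err := os.Open("default.pgo")
  if err != nil {
    panic(err)
  }
  defer f.Close()
  p, err := profile.Parse(f) // accepts both compressed and raw input
  if err != nil {
    panic(err)
  }
  out, err := os.Create("default.uncompressed.pgo")
  if err != nil {
    panic(err)
  }
  defer out.Close()
  if err := p.WriteUncompressed(out); err != nil {
    panic(err)
  }
}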

rsc commented 1 year ago

Based on the discussion above, this proposal seems like a likely accept. — rsc for the proposal review group

gopherbot commented 1 year ago

Change https://go.dev/cl/429638 mentions this issue: cmd/compile,cmd/link,runtime: add start line numbers to func metadata

gopherbot commented 1 year ago

Change https://go.dev/cl/438255 mentions this issue: runtime/pprof: set Function.start_line field

gopherbot commented 1 year ago

Change https://go.dev/cl/438737 mentions this issue: cmd/go: add PGO auto mode

gopherbot commented 1 year ago

Change https://go.dev/cl/438736 mentions this issue: cmd/go: add -pgo build flag

rsc commented 1 year ago

No change in consensus, so accepted. 🎉 This issue now tracks the work of implementing the proposal. — rsc for the proposal review group

cherrymui commented 1 year ago

Thanks!

gopherbot commented 1 year ago

Change https://go.dev/cl/439297 mentions this issue: design/55022-pgo-implementation.md: fix image paths

gopherbot commented 1 year ago

Change https://go.dev/cl/444475 mentions this issue: sweet: upgrade tile38 to 1.29.1

gopherbot commented 1 year ago

Change https://go.dev/cl/429863 mentions this issue: cmd/compile: enable PGO in Go and performs profile-guided inlining (for issue #55022)