Open dsyme opened 2 years ago
@sblom (How do we protect the perf of `dotnet build` for C# projects today?)
Is there a way to get performance metrics from the compiler with some context about the source being compiled? The first step to improving performance is being able to measure it, so I hope this effort adds a way to get that information. Today the granularity is just the project level: you can see how long msbuild spends on a project, but that's not enough. The `--times` option could return useful information such as how long the compiler spends on each module, or even each function, so that when people hit issues they know roughly what is causing them and can reproduce the problem.
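As a starting point, the compiler already accepts a `--times` flag that prints per-phase timings. One way to enable it for a single project is via the `OtherFlags` MSBuild property (a sketch; exactly where this belongs in your project file may vary):

```xml
<!-- Illustrative: pass the existing --times flag through to fsc for this project -->
<PropertyGroup>
  <OtherFlags>$(OtherFlags) --times</OtherFlags>
</PropertyGroup>
```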
We track some very similar stuff in the perf lab currently, specifically `dotnet build` times for most of the in-box templates. We'd be happy to include F# in the data collection, reporting, and regression auto-filing that we have in place. I'll schedule an intro call.
> With the exception of tailcalls, we feel existing .NET CLR perf testing for C# code is adequate. For tailcalls, we should verify that perf tests are in place.
For that we could create a new project with F# microbenchmarks, similar to what we have for C#, and then configure the CI and reporting system accordingly. The good thing is that all our scripts expect just a path to a project file, so as long as the project is a console app and `dotnet run` works, it's going to be easy to plug this in.
Whoever is going to work on that might want to take a look at these BenchmarkDotNet examples for F#.
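For reference, a minimal F# BenchmarkDotNet benchmark looks roughly like this (a sketch; the type name, benchmark body, and parameter values are illustrative, and the project needs a BenchmarkDotNet package reference):

```fsharp
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running

[<MemoryDiagnoser>]
type ListBenchmarks() =
    // Benchmark parameter: BenchmarkDotNet runs each benchmark once per value.
    [<Params(1_000, 100_000)>]
    member val Size = 0 with get, set

    [<Benchmark>]
    member this.ListMap() =
        [1 .. this.Size] |> List.map (fun x -> x * 2) |> List.length

[<EntryPoint>]
let main _ =
    BenchmarkRunner.Run<ListBenchmarks>() |> ignore
    0
```

Running it with `dotnet run -c Release` produces the usual BenchmarkDotNet summary table, which the existing reporting scripts could consume.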
Hi dotnet/performance folk :)
The F# compiler and tools ship as part of the .NET SDK, but we don't have a systematic, reproducible, reliable, longitudinal approach to performance and scalability testing for those tools. We'd like to understand what we should do about this.
Basically we'd love your advice on how we should be thinking about this, what we should be doing, and how we should be going about it.
People on our side are @vzarytovskii (team lead), @KathleenDollard (PM), @KevinRansom, @brettfo and others. @dsyme, @TIHan and others are v-team contributors.
cc @danmoseley @adamsitnik @DrewScoggins @LoopedBard3. Also @davkean since he's historically been a good source of advice on perf issues and may know the Roslyn team performance methodology.
I've written some initial notes below, thanks :) Overall it feels like these requirements must be similar in nature to those of many "upstack" components like Roslyn, ASP.NET and so on.
Areas of high concern:
Our analysis is that it is the performance of the compiler and tools themselves that is of most immediate concern to .NET customers.
This includes the performance of the FSharp.Compiler.Service component (the F# equivalent of Roslyn) under developer-time scenarios, which we would model via bespoke benchmarking code capturing common usage scenarios.
Scaling of the developer tools is of particular concern. We do not, for example, regularly test the tools with hundreds of projects, or have automated testing to know how the tools scale with respect to larger inputs of different kinds.
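To make the scaling concern concrete, the kind of harness we'd want might look like the sketch below: generate N minimal projects on disk, then time a full build. Everything here (names, paths, project count, target framework) is hypothetical; no such test exists today.

```fsharp
// Illustrative scaling harness: generate N tiny F# projects and time building them.
open System.Diagnostics
open System.IO

let generateProjects (root: string) (n: int) =
    for i in 1 .. n do
        let dir = Path.Combine(root, sprintf "Lib%03d" i)
        Directory.CreateDirectory dir |> ignore
        File.WriteAllText(
            Path.Combine(dir, sprintf "Lib%03d.fsproj" i),
            """<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup><TargetFramework>net7.0</TargetFramework></PropertyGroup>
  <ItemGroup><Compile Include="Library.fs" /></ItemGroup>
</Project>""")
        File.WriteAllText(
            Path.Combine(dir, "Library.fs"),
            sprintf "module Lib%03d\nlet answer = %d" i i)

let timeBuild (root: string) =
    let sw = Stopwatch.StartNew()
    for dir in Directory.GetDirectories root do
        let psi = ProcessStartInfo("dotnet", "build", WorkingDirectory = dir)
        use p = Process.Start psi
        p.WaitForExit()
    sw.Elapsed

// Usage (hypothetical): generateProjects "stress" 200; printfn "%O" (timeBuild "stress")
```

A real harness would also want warm/cold distinctions and project graphs with inter-project references, since those stress incremental build and the language service differently.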
Areas of currently lower concern:
The quality of IL code generated by the F# compiler is not currently a high concern for F# customers or the dev team. While it can be improved, it's not likely to regress, and we have many "IL baseline" tests that pin down the existing code quality.
The performance of the .NET CLR with respect to the code generated by the F# compiler is not of current high concern. With the exception of tailcalls, we feel existing .NET CLR perf testing for C# code is adequate. For tailcalls, we should verify that perf tests are in place.
The performance of the FSharp.Core library is not currently a high concern for F# customers or the dev team. Incremental improvements are made by the dev team and community, but regressions are very rare, and most implementation code is readily analysable for performance characteristics in code review.
The interaction between F# and PGO and other .NET perf tooling is not currently of high concern. It is important, and should be validated, but we do not feel it's an area susceptible to regression.
Approximate needs
For the compiler and tools, our rough needs are as follows:
What:
- the `dotnet` SDK command line, which is highly stable.

Execution:
- `main` and `release` branches on dotnet/fsharp.

Variation:
- Where possible, "everything else" besides the F# tooling should be kept constant in these scenarios, or at least we should be able to identify change in F# tooling performance independently of change in .NET CLR performance, and know exactly which .NET CLR is used at each step.

Reporting: