dotnet / performance

This repo contains benchmarks used for testing the performance of all .NET Runtimes

F# Compiler & Tools performance discussion #2457

Open dsyme opened 2 years ago

dsyme commented 2 years ago

Hi dotnet/performance folk :)

The F# compiler and tools ship as part of the .NET SDK, but we don't have a systematic, reproducible, reliable, longitudinal approach to performance and scalability testing for those tools. We'd like to understand what we should do about this.

Basically we'd love your advice on how we should be thinking about this, what we should be doing, and how we should be going about it.

People on our side are @vzarytovskii (team lead), @KathleenDollard (PM), @KevinRansom, @brettfo and others. @dsyme, @TIHan and others are v-team contributors.

cc @danmoseley @adamsitnik @DrewScoggins @LoopedBard3. Also @davkean since he's historically been a good source of advice on perf issues and may know the Roslyn team performance methodology.

I've written some initial notes below, thanks :) Overall it feels like these requirements must be similar in nature to many "upstack" components like Roslyn, ASP.NET and so on.

Areas of high concern:

Areas of currently lower concern:

Approximate needs

For the compiler and tools, our rough needs are as follows:

What:

Execution:

Variation:

Where possible, "everything else" besides the F# tooling should be kept constant in these scenarios. At a minimum, we should be able to identify changes in F# tooling performance independently of changes in .NET CLR performance, and know exactly which .NET CLR is used at each step.
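One conventional way to hold the runtime constant across measurement runs is to pin the SDK with a `global.json` at the repository root. A minimal sketch (the version number is illustrative, not a recommendation):

```json
{
  "sdk": {
    "version": "7.0.100",
    "rollForward": "disable"
  }
}
```

With `rollForward` set to `disable`, the build fails rather than silently picking up a newer SDK, which makes it explicit exactly which toolchain produced each data point.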

Reporting:

danmoseley commented 2 years ago

@sblom

(How do we protect the perf of 'dotnet build' for C# projects today?)

ninjarobot commented 2 years ago

Is there a way to get performance metrics from the compiler, with some context about the source being compiled? The first step to improving performance is being able to measure it, so I hope this effort will add a way to get that information. Today the granularity is only at the project level: you can see how long MSBuild spends on a project, but that's not enough. The `--times` option could return useful information, such as how long the compiler spends on each module or even each function, so that when people hit issues they know roughly what is causing them and can reproduce the problem.
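For reference, extra flags like `--times` can be passed to the F# compiler from a project file via the `OtherFlags` property, so per-phase timings show up in the build output without changing the command line. A minimal `.fsproj` fragment:

```xml
<!-- Pass additional fsc flags through MSBuild; --times prints
     per-phase compiler timing information to the build output. -->
<PropertyGroup>
  <OtherFlags>$(OtherFlags) --times</OtherFlags>
</PropertyGroup>
```

Today this reports timings per compiler phase rather than per module or per function, which is the gap being described above.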

sblom commented 2 years ago

We track some very similar things in the perf lab currently, specifically `dotnet build` times for most of the in-box templates. We'd be happy to include F# in the data collection, reporting, and regression auto-filing that we have in place. I'll schedule an intro call.

adamsitnik commented 2 years ago

> With the exception of tailcalls, we feel existing .NET CLR perf testing for C# code is adequate. For tailcalls, we should verify that perf tests are in place.

For that we could create a new project with F# microbenchmarks, similar to what we have for C#, and then configure the CI and reporting system accordingly. The good thing is that all the scripts expect just a path to a project file, so as long as the project is a console app and `dotnet run` works, it will be easy to plug this in.
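A minimal sketch of what such an F# microbenchmark project might look like, using BenchmarkDotNet as the existing C# projects do (the benchmark names and workload below are illustrative, not an agreed-upon suite):

```fsharp
// Sketch of an F#-specific BenchmarkDotNet console app.
// Assumes a package reference to BenchmarkDotNet in the .fsproj.
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running

[<MemoryDiagnoser>]
type ListPipelineBenchmarks() =
    // Run each benchmark at several input sizes.
    [<Params(1_000, 100_000)>]
    member val Size = 0 with get, set

    // A representative F#-idiomatic workload: a List pipeline.
    [<Benchmark>]
    member this.MapFilterSum() =
        [ 1 .. this.Size ]
        |> List.map (fun x -> x * 2)
        |> List.filter (fun x -> x % 3 = 0)
        |> List.sum

[<EntryPoint>]
let main argv =
    // BenchmarkSwitcher lets `dotnet run -c Release` select benchmarks
    // from the command line, matching how the perf-lab scripts drive
    // the existing C# benchmark projects.
    BenchmarkSwitcher.FromAssembly(typeof<ListPipelineBenchmarks>.Assembly).Run(argv) |> ignore
    0
```

Because the entry point goes through `BenchmarkSwitcher`, the project is a plain console app that works with `dotnet run`, which is the property the lab scripts rely on.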

Whoever is going to work on that might want to take a look at these BenchmarkDotNet examples for F#.