A common question from users is how much overhead adding profiling (most commonly through net/http/pprof) to a running production system introduces, and whether it's safe to just leave it in a non-debug build.
The package doc should state the expected / target impact of just importing the package (negligible?) and of actively running a CPU/memory profile.
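For context, the setup in question is usually just the blank import, which registers the `/debug/pprof/*` handlers on `http.DefaultServeMux` as a side effect; a minimal sketch (address and handler are illustrative):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Application handler; the pprof endpoints are served alongside it
	// because both are registered on the default mux.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// The question this issue asks: what does leaving this in cost when
	// nobody is hitting /debug/pprof/ and no profile is being collected?
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```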
In compiler/runtime triage, we think this is a good idea, but we need to figure out what number to write down (and how to determine it). Maybe we need to add this to our benchmark dashboard?
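One possible way to determine the "actively profiling" number (just a sketch, with made-up names like `busyWork`, not an actual plan) is a benchmark that runs the same workload with and without a CPU profile being collected:

```go
package overhead

import (
	"io"
	"runtime/pprof"
	"testing"
)

var sink int

// busyWork is a stand-in for a CPU-bound piece of request handling.
func busyWork() {
	s := 0
	for i := 0; i < 1000; i++ {
		s += i * i
	}
	sink = s
}

// Baseline: no profile running.
func BenchmarkWorkload(b *testing.B) {
	for i := 0; i < b.N; i++ {
		busyWork()
	}
}

// Same workload while a CPU profile is being collected; comparing ns/op
// against the baseline gives the overhead of active CPU profiling.
func BenchmarkWorkloadWhileCPUProfiling(b *testing.B) {
	if err := pprof.StartCPUProfile(io.Discard); err != nil {
		b.Fatal(err)
	}
	defer pprof.StopCPUProfile()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		busyWork()
	}
}
```

Running both with `go test -bench . -count 10` and comparing with benchstat would give a defensible number for one workload, though the result obviously varies with workload shape.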
The last time the Go project seems to have said something about this was 10 years ago, in a blog post, while people appear to cite a 5% figure coming from Google Cloud Profiler.