I was thinking about a reusable API for profiling code to see which parts of an application take the most resources. The metrics should include at least execution time, maybe also memory consumption or other data, but that would be a later enhancement.
Now, there are obviously tools like gprof which can be used to analyse the execution of the binary from the outside. I'm talking about a different approach, where the profiling instrumentation is embedded directly into the application, as this gives better insight into individual components and is easy to use.
The API could look something like this:
Profiler.profile("Expensive calculations") do
  calculate_gulf_stream_flow
end
When executed, this would then aggregate performance metrics for the block.
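To make the idea more concrete, here is a minimal sketch of what such a module could look like, built on Time.monotonic and aggregating totals per label. Everything beyond Profiler.profile itself (the Entry record, Profiler.report, the output format) is just an assumption for illustration, not a finished design:

module Profiler
  record Entry, count : Int32, total : Time::Span

  @@entries = {} of String => Entry

  # Measures the block's wall-clock time and adds it to the label's totals.
  def self.profile(label : String)
    start = Time.monotonic
    begin
      yield
    ensure
      elapsed = Time.monotonic - start
      entry = @@entries[label]? || Entry.new(0, Time::Span.zero)
      @@entries[label] = Entry.new(entry.count + 1, entry.total + elapsed)
    end
  end

  # Prints the aggregated metrics, one line per label.
  def self.report(io = STDOUT)
    @@entries.each do |label, entry|
      io.puts "#{label}: #{entry.count} call(s), #{entry.total.total_milliseconds.round(2)} ms total"
    end
  end
end

Calling Profiler.report at the end of the program (for example via at_exit) would then print the aggregated totals.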
You probably wouldn't want the profiler to always run, so there could be two ways of (de-)activating it:
With a compile-time flag. This could keep the profiling code completely out of the binary, which is great because it has no impact at all when it is not needed. The downside is that you can't switch it on without recompiling.
With runtime configuration (ENV var, Crystal API etc.). This allows you to switch profiling on and off easily, but it always adds a little overhead, even when profiling is not used. A rough sketch of both mechanisms follows below.
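Purely to illustrate both mechanisms in one place (the -Dprofile flag name and the PROFILE environment variable are assumptions, not a proposal), the compile-time switch could be a macro conditional and the runtime switch an ENV check inside it:

module Profiler
  {% if flag?(:profile) %}
    # Instrumentation only exists when built with `crystal build -Dprofile ...`
    # (hypothetical flag name).
    @@enabled = ENV["PROFILE"]? == "1" # runtime opt-in via ENV (name assumed)

    def self.profile(label : String)
      return yield unless @@enabled
      start = Time.monotonic
      begin
        yield
      ensure
        STDERR.puts "#{label}: #{(Time.monotonic - start).total_milliseconds.round(2)} ms"
      end
    end
  {% else %}
    # Without -Dprofile the call compiles down to a plain yield,
    # so no profiling code ends up in the binary.
    def self.profile(label : String)
      yield
    end
  {% end %}
end

That way a profiling build could still be shipped and only activated on demand, while regular builds keep zero overhead.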