I dig it :+1:
Another approach would be to expose a benchmark configuration setting as part of the `LightService::Configuration` options. If `benchmark` is set to `true` (by default I would imagine it would be `false`), it could instrument or benchmark the run time of each action and each organizer.

The benchmark results could be output as part of the log that LightService already uses, or perhaps you can expose a `benchmark_log` configuration option so that benchmark output is not conflated with the regular LightService log.
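As a hypothetical sketch of what that configuration might look like (the `benchmark` and `benchmark_log` options are proposals here, not part of the gem's current API):

```ruby
require "light_service"
require "logger"

# Hypothetical configuration: neither option exists in LightService today.
LightService::Configuration.benchmark = true

# Optionally route benchmark output to its own log so it doesn't get
# mixed into the regular LightService log.
LightService::Configuration.benchmark_log = Logger.new("log/ls_benchmark.log")
```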
This way, if someone is interested in benchmarking their code via LightService, they don't have to litter their production code with benchmark-related `before_action` or `after_action` blocks. If you expose a separate log file for benchmark-specific auditing, they also don't have to parse that output to get rid of the noise.
I really like the idea of having a configurable `benchmark` setting. It makes me realize that having benchmarking built into a library like this might help adoption, or at the very least measure the impact of the abstraction by showing how many objects it creates and how much pressure it puts on garbage collection. All that info would be very useful.
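The allocation side of that can already be measured around an organizer call without any library support; a minimal sketch (`SomeOrganizer` is a placeholder):

```ruby
# Count objects allocated while the organizer runs; a rough proxy for
# the GC pressure the abstraction adds. SomeOrganizer is a placeholder.
allocated_before = GC.stat(:total_allocated_objects)
SomeOrganizer.call(number: 1)
allocated_delta = GC.stat(:total_allocated_objects) - allocated_before

puts "Objects allocated during organizer run: #{allocated_delta}"
```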
We're already using LS for a variety of long series of actions, ETL processing included. :+1: for this
I submitted a PR that can be used to accomplish this: https://github.com/adomokos/light-service/pull/79
All the benchmarking @rewinfrey suggested can be accomplished with PR #79; closing this issue.
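For reference, a sketch of what hook-based timing along the lines of that PR can look like; it assumes the `before_actions` / `after_actions` lambdas receive the context, and the organizer and action names are placeholders:

```ruby
require "light_service"

class ProcessesEtlBatch
  extend LightService::Organizer

  # Hooks run before and after every action in the series; here they
  # time each action's execution and write it to the LightService logger.
  self.before_actions = [
    ->(ctx) { ctx[:action_started_at] = Process.clock_gettime(Process::CLOCK_MONOTONIC) }
  ]
  self.after_actions = [
    ->(ctx) do
      elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - ctx[:action_started_at]
      LightService::Configuration.logger.info("action took #{elapsed.round(4)}s")
    end
  ]

  def self.call(batch)
    with(batch: batch).reduce(ExtractAction, TransformAction, LoadAction)
  end
end
```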
We are thinking about using LS for ETL processing. However, instrumentation is key, and measuring how long each action takes is important to us.
I am thinking about exposing two events from the LS pipeline.
That way we could hook into the events and measure the execution time. I would also think that this kind of event mechanism would be beneficial for others who want to extend LS.
This is what it would look like:
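A sketch under assumed names: `LightService.on` and the two event symbols below illustrate the idea, they are not an existing LightService API.

```ruby
timings = {}

# Illustrative only: LightService does not expose these events today.
# One event fires before each action runs, the other after it finishes.
LightService.on(:before_action_execute) do |action, _ctx|
  timings[action.name] = Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

LightService.on(:after_action_execute) do |action, _ctx|
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - timings.delete(action.name)
  puts "#{action.name} took #{elapsed.round(4)}s"
end
```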
This way, extending LS with a custom event handling mechanism is feasible. I would love to use this functionality for "around advice"-style benchmarking in my actions.
Has anybody tried doing something like this before? If your answer is yes, what did you do? Maybe we could use that in LS.
If not, what do you think about this approach?
Thanks!