adomokos / light-service

Series of Actions with an emphasis on simplicity.
MIT License
837 stars 67 forks

`before_action` and `after_action` events #78

Closed adomokos closed 8 years ago

adomokos commented 8 years ago

We are thinking about using LS for ETL processing. However, instrumentation is key, and measuring how long each action takes is important to us.

I am thinking about exposing two events from the LS pipeline:

  1. Before Action
  2. After Action

That way we could hook into the events and measure execution time. I also think events like these would be beneficial for others who want to extend LS.

This is how it would look:

```ruby
class SomeOrganizer
  extend LightService::Organizer

  def self.call(user)
    with(user: user).reduce(actions)
  end

  def self.actions
    [
      OneAction,
      TwoAction,
      ThreeAction,
      FourAction
    ]
  end

  # Proposed API: register per-action event handlers.
  # `for` is a Ruby keyword, so it needs an explicit receiver.
  def self.register_event_handlers
    self.for(OneAction).before_action do |ctx|
      puts "OneAction before action event"
    end

    self.for(OneAction).after_action do |ctx|
      puts "OneAction after action event"
    end
  end
end
```

This way, extending LS with a custom event handling mechanism becomes feasible. I would love to use this functionality for "around advice"-style benchmarking in my actions, along the lines of the sketch below.
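For example, a minimal sketch against the proposed hooks (everything here, `register_event_handlers` included, is hypothetical API, nothing in LS today):

```ruby
# Hypothetical benchmarking via the proposed before/after hooks.
def self.register_event_handlers
  started_at = {}

  self.for(OneAction).before_action do |_ctx|
    started_at[OneAction] = Time.now
  end

  self.for(OneAction).after_action do |_ctx|
    puts "OneAction took #{Time.now - started_at[OneAction]}s"
  end
end
```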

Has anybody tried doing something like this before? If so, what did you do? Maybe we could use that in LS.

If not, what do you think about this approach?

Thanks!

rewinfrey commented 8 years ago

I dig it :+1:

Another approach would be to expose a `benchmark` configuration setting as part of the `LightService::Configuration` options. If `benchmark` is set to `true` (by default I imagine it would be `false`), it could instrument the run time of each action and each organizer.

The benchmark results could be output as part of the log that LightService already uses, or perhaps you could expose a `benchmark_log` configuration option so that benchmark output is not conflated with the regular LightService log.

This way, if someone is interested in benchmarking their code via LightService, they don't have to litter their production code with benchmark-related `before_action` or `after_action` blocks. If you expose a separate log file for benchmark-specific auditing, they also don't have to filter benchmark noise out of the regular log.
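A rough sketch of what that might look like (`benchmark` and `benchmark_log` are hypothetical settings here, not existing options):

```ruby
require "logger"

# Hypothetical settings; LightService::Configuration does not
# expose these today.
LightService::Configuration.benchmark = true
LightService::Configuration.benchmark_log =
  Logger.new("log/light_service_benchmark.log")
```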

I really like the idea of a configurable benchmark setting. It makes me realize that having benchmarking built into a library like this might help adoption, or at the very least measure the cost of the abstraction: how many objects it creates and how much pressure it puts on garbage collection. All of that info would be very useful.

padi commented 8 years ago

We're already using LS for a variety of long series of actions, ETL processing included. :+1: for this

bwvoss commented 8 years ago

I submitted a PR that can be used to accomplish this: https://github.com/adomokos/light-service/pull/79
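Roughly, the idea is an around-each style wrapper for each action. A sketch (names like `around_each` and `LogDuration` are illustrative, not taken from the PR):

```ruby
# Hypothetical handler that wraps each action and logs its duration.
class LogDuration
  def self.call(context)
    started_at = Time.now
    result = yield            # runs the wrapped action
    puts "action took #{Time.now - started_at}s"
    result
  end
end

class SomeOrganizer
  extend LightService::Organizer

  def self.call(user)
    with(user: user)
      .around_each(LogDuration)
      .reduce(actions)
  end
end
```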

adomokos commented 8 years ago

All the benchmarking @rewinfrey suggested can be accomplished with PR #79; closing this issue.