brandonhilkert / sucker_punch

Sucker Punch is a Ruby asynchronous processing library using concurrent-ruby, heavily influenced by Sidekiq and girl_friday.
MIT License

What happens if the background job remains? #107

Closed: namit closed this issue 9 years ago

namit commented 9 years ago

Don't know where else I could have asked this question.

Say Sinatra / Passenger / MRI executes code A for a route in response to a client request, and code A enqueues a background job B. Now say code A is done and the server is ready to respond to the client, but B is not done. What happens?

(A) The server waits for B to finish before responding to the client (guess not)?
(B) The server responds immediately, but Passenger waits for B to finish before killing the process forked for this request (this would be my guess)?
(C) The server responds immediately, Passenger kills the forked process, and job B is lost?
(D) Something completely different happens?

brandonhilkert commented 9 years ago

When code A creates a background job B, the job is handed to a separate thread to be worked on. Code path A is not blocked and can continue to serve subsequent requests immediately after it enqueues the job; it doesn't need to wait for background job B to complete.

No process is forked when a background job is enqueued; the job runs on a thread within that web process. So only when the web process is killed will the background jobs enqueued within that process be killed off.
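
A minimal sketch of that model (assuming the v1-style new.async.perform API used elsewhere in this thread; LogJob here is a stand-in job):

require 'sucker_punch'

class LogJob
  include SuckerPunch::Job

  def perform(message)
    sleep 5      # simulate slow work; this runs on a worker thread
    puts message
  end
end

LogJob.new.async.perform("xyz")       # enqueues and returns immediately
puts "request thread is already free" # prints before the job's output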

namit commented 9 years ago

Thank you for the response! When I say "process forked," I mean the Passenger process. I understand that the background job will run in a thread within the Passenger process, and since we're using MRI (with its Global Interpreter Lock), that thread can't run Ruby code in parallel with the request thread. So, my question is (editing for clarity):

Say a client makes a GET /users call to Sinatra / Passenger / MRI. Sinatra executes code A, and code A enqueues a background job B. Now say code A is done and the server is ready to respond to the client with a JSON array of the users, but job B is not done. What happens?

(A) The server waits for B to finish before responding to the client (guess not)?
(B) The server responds immediately, but Passenger waits for B to finish before killing the process that Passenger had forked for this request (this would be my guess)?
(C) The server responds immediately, Passenger kills the forked process, and job B is lost?
(D) Something completely different happens?

brandonhilkert commented 9 years ago

Maybe I'm missing something, but my response is the same as before. Perhaps you can provide a more concrete example or actual code; clarifying what the code actually does will help me better assist you.

namit commented 9 years ago

Say I define a route in Sinatra as:

get '/' do
  function_A()
end

def function_A()
  LogJob.new.async.perform("xyz") # enqueue background job B on a worker thread
  return "Success"                # becomes the response body
end

This is running using Phusion Passenger on MRI. Now a client request comes in to "/" and Passenger creates a new process with PID 123 to serve this request. The route is hit, function_A() is called, and a background LogJob is created. Say LogJob takes 5 seconds to execute on a separate thread. What happens:

(A) The client won't get the "Success" message till LogJob is completed.
(B) The client immediately gets the "Success" message, but Passenger won't kill process 123 till LogJob is completed (so, 5 seconds).
(C) The client immediately gets the "Success" message, and Passenger kills process 123, so LogJob is not completed.
(D) Something else.

brandonhilkert commented 9 years ago

"Success" will be returned immediately because the LogJob runs in a separate thread.

Re: processes: Passenger doesn't fork a process per request. The processes are forked beforehand and reused across requests, so there's no worry that jobs will be killed off before the process itself gets killed.

In theory, the process could die because of some other external condition, which is why the README recommends that jobs be small and fast.
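
As a sketch of that advice (ImportRowJob and Database.insert below are hypothetical stand-ins): instead of one long-running job, enqueue many small ones, so a dying process loses at most the work in flight.

class ImportRowJob
  include SuckerPunch::Job

  def perform(row)
    Database.insert(row) # hypothetical persistence call; one small unit of work
  end
end

# If the process dies mid-import, only the in-flight rows are lost,
# not the whole batch.
rows.each { |row| ImportRowJob.new.async.perform(row) }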


namit commented 9 years ago

That makes sense. Thank you for your responses!

namit commented 9 years ago

Anybody using Passenger and Sucker Punch may be interested in following this issue: https://github.com/phusion/passenger/issues/1211

EsaMakinen commented 8 years ago

I seem to be having a similar issue with Sinatra/Puma. Here's the code:

sinatra_main_app.rb

require_relative 'jobs/testjob.rb'
NewJob.perform_async("This works async outside user request")  

get '/' do
  NewJob.perform_async("This does not work")  
  puts "Non-async puts works"
  erb :frontpage
end

Newjob.rb

require 'sucker_punch'

class NewJob
  include SuckerPunch::Job

  def perform(data)
    puts data
  end
end

The output, after route "/" is called:

This works async outside user request
Non-async puts works

So when the user makes a request, the Sucker Punch job gets killed before it executes, or something. Is there a workaround for this?

brandonhilkert commented 8 years ago

What version of sucker punch are you using?


EsaMakinen commented 8 years ago

It's sucker_punch gem version 2.0.2, concurrent-ruby 1.0.0, and Ruby 2.2.4.

richardkmichael commented 6 years ago

@EsaMakinen I know your comment is old, but I hit this today myself. :-) You must require "sucker_punch" (so in your case require_relative 'jobs/test_job') before require "sinatra".
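
Applied to the example above, the fix is just a load-order change. A sketch of the corrected sinatra_main_app.rb (assuming sinatra had previously been required first):

require_relative 'jobs/testjob.rb' # pulls in sucker_punch first
require 'sinatra'                  # sinatra must come after sucker_punch

get '/' do
  NewJob.perform_async("This now works inside a request")
  erb :frontpage
end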