nesquena / backburner

Simple and reliable beanstalkd job queue for ruby
http://nesquena.github.com/backburner
MIT License

Create new connection every time we enqueue? #125

Open xinuc opened 8 years ago

xinuc commented 8 years ago

Why do we need to create a new connection every time we enqueue a job? https://github.com/nesquena/backburner/blob/master/lib/backburner/worker.rb#L39

This leads to a huge number of connections. In my case it can even exhaust the Linux limit on open ports. Maybe one connection per thread would be enough.
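Something along these lines could keep a single connection per thread and reuse it for every enqueue. Just a rough sketch, not the gem's current behavior, and the helper name is made up:

# Rough sketch: memoize one Backburner::Connection per thread and reuse it,
# instead of opening a fresh connection on every enqueue.
# (Hypothetical helper, not part of Backburner today.)
def backburner_connection
  Thread.current[:backburner_connection] ||=
    Backburner::Connection.new(Backburner.configuration.beanstalk_url)
end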

funkyboy commented 5 years ago

I can confirm that some code like

# ~300k elements
elements.each do |element|
  Backburner.enqueue MyWorker, element
end

after a while leads to

...
gems/beaneater-1.0.0/lib/beaneater/connection.rb:238:in `_raise_not_connected!': Connection to beanstalk 'localhost:11300' is closed! (Beaneater::NotConnected)

The machine seems to run out of connections. Happens on Ubuntu 18.04 with the following settings:

> beanstalkd -v
beanstalkd 1.10

> ruby -v
ruby 2.5.5p157 (2019-03-15 revision 67260) [x86_64-linux]

> gem list | grep beane
beaneater (1.0.0)

> gem list | grep backbur
backburner (1.5.0)

> ulimit -n
200000

The connection error happens after roughly 28k jobs have been successfully enqueued.

Happy to provide more details if needed @nesquena

funkyboy commented 5 years ago

A workaround I found is to reuse a single connection, like:

connection = Backburner::Connection.new(Backburner.configuration.beanstalk_url)
tube = connection.tubes['mytube']

# ~300k elements
elements.each do |element|
  data = { :class => MyWorker.name, :args => [element] }
  serialized_data = Backburner.configuration.job_serializer_proc.call(data)
  tube.put(serialized_data, :pri => MyWorker.queue_priority, :delay => 0, :ttr => Backburner.configuration.respond_timeout)
end

connection.close
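If the enqueue loop can raise, wrapping it in begin/ensure keeps the connection from being leaked on failure. Same workaround as above, just hardened a bit (sketch only):

connection = Backburner::Connection.new(Backburner.configuration.beanstalk_url)
begin
  tube = connection.tubes['mytube']
  elements.each do |element|
    data = { :class => MyWorker.name, :args => [element] }
    serialized_data = Backburner.configuration.job_serializer_proc.call(data)
    tube.put(serialized_data, :pri => MyWorker.queue_priority, :delay => 0, :ttr => Backburner.configuration.respond_timeout)
  end
ensure
  # close the connection even if an enqueue fails mid-loop
  connection.close
end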