Open mlvea opened 9 years ago
+1 for this issue
+1
In 0.2.2, we started using `nohup` at a couple of users' request. Can anyone who's having this issue confirm whether locking to 0.2.1 fixes the problem? If so, I could add something to make the `nohup` command a configurable setting (defaulting to false).
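Something along these lines is what I have in mind — purely a hypothetical sketch (`:resque_use_nohup` does not exist in any released version), with a tiny stand-in for Capistrano's `fetch` so it runs standalone:

```ruby
# Hypothetical sketch of a configurable nohup setting. :resque_use_nohup is
# NOT a real setting in the gem; SETTINGS is a minimal stand-in for
# Capistrano's configuration store.
SETTINGS = { resque_use_nohup: false, bundle_cmd: "bundle" }

def fetch(key, default = nil)
  SETTINGS.fetch(key, default)
end

# Build the worker start command, wrapping it with nohup only when enabled.
def resque_start_command
  cmd = "#{fetch(:bundle_cmd, 'bundle')} exec rake resque:work"
  fetch(:resque_use_nohup) ? "nohup #{cmd}" : cmd
end

puts resque_start_command  # → bundle exec rake resque:work
```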
I'm not sure how you would fix it, but where you use this command: `nohup #{fetch(:bundle_cmd, "bundle")} exec rake` — I think that the `fetch` method prepends the `RBENV_` variables to the bundle command.
Does this from sidekiq help? https://github.com/lemurheavy/sidekiq_test/blob/master/lib/sidekiq/capistrano.rb#L39 — note the `;` instead of `&&`.
The primary difference I see is that they're directly `cd`ing into the path instead of using Capistrano's built-in `within current_path do..end` block, which is what causes the `&&` instead of the `;`. (They fetch `:sidekiq_cmd` instead of directly running bundler, but `:sidekiq_cmd` defaults to be identical to what we're directly using.)
I think capistrano-rbenv reworks the result of `fetch(:bundle_cmd)` to be prefixed with the environment variables, so I would expect the sidekiq code to also put the `RBENV_ROOT` stuff after the `nohup`?
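(For context, my understanding is that capistrano-rbenv does something roughly like this via SSHKit's command map — a simplified sketch, not the gem's actual code:)

```ruby
# Simplified sketch (an assumption — see the capistrano-rbenv source for the
# real implementation): each mapped binary gets the rbenv environment and
# `rbenv exec` pushed onto the front of its SSHKit command-map prefix.
fetch(:rbenv_map_bins, %w[rake gem bundle ruby rails]).each do |bin|
  SSHKit.config.command_map.prefix[bin.to_sym].unshift(fetch(:rbenv_prefix))
end
```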
What happens if you manually SSH into your server and run the `cd .... && ( RBENV_ROOT=.... )` line? Does it work? Does it work if you instead change it to use `;`?
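To make the difference concrete, here's a quick Ruby demo shelling out to `/bin/sh` via backticks: `&&` short-circuits when the first command fails, while `;` runs the next command unconditionally.

```ruby
# With '&&', the second command runs only if the first one succeeds;
# with ';', it runs regardless of the first command's exit status.
out_and  = `false && echo "ran"`  # echo is skipped because false fails
out_semi = `false ; echo "ran"`   # echo runs regardless

puts out_and.empty?   # → true
puts out_semi.strip   # → ran
```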
I'll test this out when I have some time, but I recall the current implementation working fine as-is on my test server (which includes capistrano-rbenv) so I may not be able to reproduce.
Also, I really need to clean up and add the material below to the README, with a note at the top saying "You probably should not use this library". The approach below, using a process manager, avoids the need for the hacks we have like `nohup`, adding sleep statements, etc. Just set up an init script, and all you have to do on deploy is something like `sudo service resque restart`.

This mirrors the Sidekiq author's thoughts at the bottom of this page: https://github.com/mperham/sidekiq/wiki/Deployment
> From #72: This gem isn't the best approach to managing workers on a production system. What happens if the system reboots or the workers die unexpectedly? Using a system-level process manager (upstart, init.d, etc.) will handle this kind of stuff without having to manually deploy/start resque via capistrano. If you need to restart workers during a deploy, you can just add a super-simple task like `sudo service resque restart`. And if you need to set a custom user, this can be done in the init/upstart script.
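As a sketch of that recommendation (the task and service names here are hypothetical — adjust for your own setup):

```ruby
# Hypothetical Capistrano task (names are assumptions, not part of this gem):
# delegate worker lifecycle management to an init/upstart script named
# "resque", and just bounce the service on deploy.
namespace :resque do
  desc "Restart Resque workers via the system service manager"
  task :restart do
    on roles(:resque_worker) do
      execute :sudo, :service, :resque, :restart
    end
  end
end

after "deploy:published", "resque:restart"
```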
Can you create a new release with these changes?
+1
+1 I encountered this issue with the newest Capistrano.
Just add `env` before the `#{SSHKit.config.command_map[:rake]}`. My code is: :+1:
```ruby
desc "Start Resque workers"
task :start do
  for_each_workers do |role, workers|
    on roles(role) do
      create_pid_path
      worker_id = 1
      workers.each_pair do |queue, number_of_workers|
        info "Starting #{number_of_workers} worker(s) with QUEUE: #{queue}"
        number_of_workers.times do
          pid = "#{fetch(:resque_pid_path)}/resque_work_#{worker_id}.pid"
          within current_path do
            execute :nohup, %{env #{SSHKit.config.command_map[:rake]} RAILS_ENV=#{rails_env} QUEUE="#{queue}" PIDFILE=#{pid} BACKGROUND=yes #{"VERBOSE=1 " if fetch(:resque_verbose)}INTERVAL=#{fetch(:interval)} #{"environment " if fetch(:resque_environment_task)}resque:work #{output_redirection}}
          end
          worker_id += 1
        end
      end
    end
  end
end
```
If you are using `rbenv_prefix`, just add `env` at the front. My code is:

```ruby
set :rbenv_prefix, "env RBENV_ROOT=#{fetch(:rbenv_path)} RBENV_VERSION=#{fetch(:rbenv_ruby)} #{fetch(:rbenv_path)}/bin/rbenv exec"
```
This fixes the issue without any other changes. Good luck!
+1 I'm getting this with capistrano (3.5.0), capistrano-resque (0.2.2), and capistrano-rbenv (2.0.4), with Gemfile constraints capistrano (~> 3.1) and sshkit (~> 1.3).
```
00:30 resque:start
      Starting 2 worker(s) with QUEUE: low
      01 nohup $HOME/.rbenv/bin/rbenv exec bundle exec rake RAILS_ENV=stage QUEUE="low" PIDFILE=/var/www/myapp/shared/tmp/pids/resque_work…
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as ubuntu@ec2-....amazonaws.com: nohup exit status: 1
nohup stdout: Nothing written
nohup stderr: Nothing written
```
@archfish It works for me! Thanks a lot.
I'm using Capistrano 3.2.1 with the capistrano-resque gem to restart Resque workers on deployment, and I always get this error.
My error log is at `apps/app_production/current/log/resque.log`. I brought the issue to Stack Overflow; here is the thread: http://stackoverflow.com/questions/29705693/rails-deployment-with-capistrano-and-start-resque-workers