[Closed] vesper8 closed this issue 6 years ago
This issue is not clear, please explain in more details.
I believe that @vesper8 is asking whether he can have multiple servers executing the queued jobs through horizon.
I haven't tested it myself; however, you should be able to have N servers configured with access to the same Redis instance and run Horizon on all N servers.
Check out Laravel Forge for managing your servers.
I hope this is the case - I am testing this out today. We're deploying our Laravel app across N servers under an application LB in AWS. Historically we've had each server running X workers, all connecting to same queues. If Horizon does not support this use case, I can't see how this can scale horizontally.
@Radostin explained precisely what I meant, thanks for clarifying
@jstenquist I'm very interested in how this works out for you guys. Could you please report back, and would you also mind sharing the configuration files that allowed you to get this working (if you manage to get it working)? That would be most helpful! Thanks!
So, I was wondering this as well, since we host our Laravel app on Heroku. Heroku has the ability to easily scale the amount of worker dynos. I was curious how this would work with Horizon.
Horizon works very well with multiple servers, assuming they are all working off the same Redis database (which they of course would be, since this is for managing queues). Simply run `php artisan horizon` on each server (worker). Within Horizon, each of these servers will be listed. Here is a screenshot of Horizon with two servers running:
For any Heroku users wondering how to set this up, you simply update your Procfile like this:

```
web: vendor/bin/heroku-php-apache2 public/
worker: php artisan horizon
```
What's nice is you now only need one worker for all your queues, and you can simply bump up the number of dynos for that worker based on your load.
What's also really interesting with Heroku is that you don't have to manually call `php artisan horizon:terminate` when you deploy changes. Heroku sends a `SIGTERM` signal when gracefully shutting down dynos, and Horizon will automatically terminate when this happens. This gives your jobs 30 seconds to finish before being fully stopped.
Real nice! :feelsgood:
That is nice, but what if you are not on Heroku, but on Docker (or Amazon ECS)?

I suppose scaling works just fine if you have multiple worker containers running `php artisan horizon`.

What about deployments? Do you have to call `php artisan horizon:terminate` on all worker containers, or is it enough to run the command once (like `php artisan queue:restart`)?
So every server has a Redis and MySQL configuration that connects it to the same instance located on one server? I guess there's some port opening involved as well to get this working. I wish there was a tutorial that walked through all of it. If you know of one, post it here!
Can someone clarify what `supervisor-1` and `supervisor-2` refer to? I think they are referred to in the config as:

```php
'production' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['default'],
```
In my case I would need to run Horizon on two different servers, with different numbers of processes. I am curious whether `supervisor-1` in the above config refers to a specific server's Horizon process spawned by supervisord?
@najamhaq `supervisor-1` doesn't mean anything specific; it's just a name for a supervisor settings group. You can rename it to whatever makes sense for what you're building (e.g. `default-supervisor`).
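To illustrate, here is a minimal `config/horizon.php` sketch with two renamed supervisor groups. The group names and the `HORIZON_PROCESSES` variable are made up for this example, not part of Horizon itself; since the config is read per server, each server can run a different number of workers just by setting a different value in its environment:

```php
// config/horizon.php (fragment) — a sketch, not a full config.
'environments' => [
    'production' => [
        // Handles the default queue; process count comes from the
        // environment, so each server can run a different number.
        'default-supervisor' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'auto',
            'processes' => (int) env('HORIZON_PROCESSES', 10),
            'tries' => 3,
        ],
        // A second, smaller group dedicated to a hypothetical
        // 'emails' queue, just to show two groups side by side.
        'emails-supervisor' => [
            'connection' => 'redis',
            'queue' => ['emails'],
            'processes' => 3,
            'tries' => 3,
        ],
    ],
],
```

Each server running `php artisan horizon` starts its own master supervisor and spawns these groups locally; the names only label them in the dashboard.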
This setup seems to cause some jobs to be picked up by multiple workers. Is anyone else having the same trouble here?
@reinink Does your Heroku setup still give you the ability to access the Horizon dashboard even though you only start Horizon in "worker" and not "web"? We're running nginx and `php artisan horizon` in the `web` entry of our Procfile; otherwise we were unable to access Horizon.
https://github.com/huangdijia/laravel-horizon-restart now supports Horizon 4.0.
Hey everyone. I'm going to close this as this is more a question for a support channel like laracasts etc. Please only report issues with Horizon on this issue tracker.
@carltondickson I believe the Horizon dashboard and web routes are not handled by `php artisan horizon`, but will be picked up by your main web process (remember Horizon is installed into your app, so its routes are registered into your app, which will then serve them via the Heroku web process).
What this thread has not clearly answered for me, though, is whether the Horizon daemon is OK with being run more than once concurrently on multiple servers. This, of course, is key to reliably benefiting from Heroku dyno scaling.

It's true you can get about 8 queue workers per standard/hobby dyno if 64 MB is enough for what you're doing, so for some apps it may never be necessary to scale beyond one Horizon worker (particularly if queue jobs are not processor-bound).

It would be good to know, though, what would happen if dyno scaling became necessary. (There is another option on Heroku: give a more powerful single dyno to the Horizon process, potentially avoiding this question in many cases.)
Update: re-reading Jonathan Reinink's post above (and expanding the image he posted to a readable size), it appears Horizon is fine with multiple servers. It looks like Horizon will list each server separately in its UI (the long numbers there look like dyno IDs), too.
So, now ready to give this a whirl :) ...
Could anyone who successfully manages to implement this write a tutorial?
Wow. So, roughly, that means if I use an AWS Load Balancer with an Auto Scaling group in EC2, when my application is cloned (assuming an external Redis connection), Horizon will identify the supervisors with queue workers from each EC2 clone? And by using an external Redis, my jobs will never run twice?
I hope someone answers the question above.
@rhsanet @tonywei92 You can easily run two separate `php artisan horizon` processes in two terminal tabs on your local machine, dispatch a job from Tinker, and see which process picks up the job and whether it's picked up once or twice. You don't need to test it in the cloud.
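The reason the experiment above comes out clean is that Redis pops are atomic: whichever worker pops a job first removes it from the list for everyone else. A self-contained simulation of that behavior (plain PHP, no Laravel or Redis; the worker names and job IDs are made up, and `array_pop` stands in for Redis's atomic `BRPOP`):

```php
<?php
// Two "workers" (think: two servers running Horizon) racing on one
// shared queue. Popping removes the job, so no job can run twice.
$queue = range(1, 10);                              // 10 queued jobs
$processed = ['worker-1' => [], 'worker-2' => []];

while ($queue) {
    // Alternate which worker wins the pop, mimicking two servers
    // taking turns winning the race against the same Redis list.
    foreach (array_keys($processed) as $worker) {
        if (!$queue) {
            break;
        }
        $processed[$worker][] = array_pop($queue);  // atomic hand-off
    }
}

$total = array_merge($processed['worker-1'], $processed['worker-2']);
sort($total);
echo count($total) . "\n";               // 10: every job was processed
echo count(array_unique($total)) . "\n"; // 10: no job ran twice
```

If jobs *do* appear twice in a real setup, the usual suspects are a `retry_after` shorter than the job's runtime or two Horizon installs pointed at different Redis databases, not the multi-server setup itself.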
I have two servers running Horizon (with the same Redis connection). I log requests using queues. What puzzles me is that server 1's request log is sometimes written to server 2's log file, and server 2's request log is sometimes written to server 1's log file. How does this happen, and what can I do to avoid it?
Say I have 5 servers all running the same code, and I need to do image detection on 10k images.
Since Horizon replaces `queue:work`, can I get my 5 machines to process my 10k images faster?
Assuming all 5 servers have access to all the same resources.
Could I see an example of how to configure queue/horizon/database to set something like this up please?
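Based on the earlier comments in this thread, the essential configuration is just that all 5 servers point at the same Redis instance and each runs `php artisan horizon`. A minimal sketch of the connection side (the hostname `redis.internal.example` and env defaults are placeholders, not real values):

```php
// config/queue.php (fragment) — the queue connection every server uses.
'connections' => [
    'redis' => [
        'driver'      => 'redis',
        'connection'  => 'default',
        'queue'       => env('REDIS_QUEUE', 'default'),
        // Must exceed your longest image-detection job, or a second
        // worker will pick the job up again before the first finishes.
        'retry_after' => 300,
    ],
],

// config/database.php (fragment) — all 5 servers share this Redis host.
'redis' => [
    'default' => [
        'host' => env('REDIS_HOST', 'redis.internal.example'),
        'port' => env('REDIS_PORT', 6379),
    ],
],
```

With that in place, dispatch one queued job per image from anywhere in the app; the workers on all 5 servers drain the same Redis list in parallel, and each job is handed to exactly one worker.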