Closed by Tyrionestemaire 9 years ago
My gut tells me something is odd with your installation; the queue cleanup task should check every 30 minutes whether there are jobs older than 1 hour stuck in the queue and error them all out.
Even if you run it on a modestly sized hosted virtual server, it should not take that long for one queued job to complete.
A good starting point may be the laravel.log in 'app/storage/logs'; there could be some insight there into what is going wrong.
I'm having the same problem. I'm running 2 instances of SeAT, one for testing, one more production-like.
Both of them show:
"0 Jobs in the Redis Queue, with a Redis status of: OK"
"Current Working Jobs: No Working Jobs, but there are jobs in the queue. Are the workers started?"
supervisorctl tells me my 8 workers are running. Redis says the same.
# /src/seat/artisan seat:queue-status
Redis reports 0 jobs in the queue (queues:default)
The database reports 35 queued jobs
[!] The redis & db queued counts are not the same. This is not always a bad thing
The database reports 17 done jobs
The database reports 0 error jobs
The database reports 0 working jobs
root@v22013021527310404:~#
I haven't found any errors till now. Where should I look?
You need two instances of redis and supervisord.
supervisord handles every worker you give it a config for, and the redis setups are two different instances.
Are both of your config files correctly configured? In particular the redis port. Do you have one cron task per SeAT instance?
Yes and yes. I noticed this already before setting up the second instance. Some of the jobs get done; most are requeued and only complete once in a while (every ~2h). I tried to solve this by changing the environment for the clone, updating to the most recent redis, pip, python, php and supervisor, but the issue with the abnormal job duration is still there.
How many supervisor processes do you have for each instance?
4 each, 8 in total, so 4 per seat instance
root 4095 0.0 0.2 56524 4472 ? Ss Jun07 0:30 /usr/bin/python /usr/bin/supervisord -c /etc/supervisor/supervisord.conf
vu2004 4103 0.1 0.4 360300 9036 ? S Jun07 1:50 /usr/bin/php /var/www/virtual/domain.at/imm/htdocs/src/artisan queue:listen --timeout=3600 --tries 1
vu2004 4105 0.1 0.4 360300 9040 ? S Jun07 1:50 /usr/bin/php /var/www/virtual/domain.at/imm/htdocs/src/artisan queue:listen --timeout=3600 --tries 1
vu2004 4109 0.1 0.4 360300 9052 ? S Jun07 1:50 /usr/bin/php /var/www/virtual/domain.at/imm/htdocs/src/artisan queue:listen --timeout=3600 --tries 1
vu2004 4110 0.1 0.4 360300 9036 ? S Jun07 1:50 /usr/bin/php /var/www/virtual/domain.at/imm/htdocs/src/artisan queue:listen --timeout=3600 --tries 1
vu2004 4114 0.1 0.4 358624 9016 ? S Jun07 1:48 /usr/bin/php /var/www/virtual/domain.at/mus/htdocs/src/seat/artisan queue:listen --timeout=3600 --tries 1
vu2004 4116 0.1 0.4 358624 9020 ? S Jun07 1:48 /usr/bin/php /var/www/virtual/domain.at/mus/htdocs/src/seat/artisan queue:listen --timeout=3600 --tries 1
vu2004 4117 0.1 0.4 358624 9108 ? S Jun07 1:48 /usr/bin/php /var/www/virtual/domain.at/mus/htdocs/src/seat/artisan queue:listen --timeout=3600 --tries 1
vu2004 4120 0.1 0.4 358624 9228 ? S Jun07 1:49 /usr/bin/php /var/www/virtual/domain.at/mus/htdocs/src/seat/artisan queue:listen --timeout=3600 --tries 1
And I got this in my laravel.log:
[2015-06-08 16:35:03] production.INFO: Started command seatscheduled:api-update-server {"src":"Seat\\Commands\\Scheduled\\EveServerUpdater"} []
[2015-06-08 16:35:03] production.WARNING: A new job was not submitted due a similar one still being outstanding. Details: {"id":"122","jobID":"sCxD6CgOu8QMTrfI1DuV","ownerID":"0","api":"ServerStatus","scope":"Server","status":"Queued","output":null,"created_at":"2015-06-08 13:00:06","updated_at":"2015-06-08 13:00:06"} {"src":"App\\Services\\Queue\\QueueHelper"} []
[2015-06-08 16:40:03] production.INFO: Started command seatscheduled:api-update-server {"src":"Seat\\Commands\\Scheduled\\EveServerUpdater"} []
[2015-06-08 16:40:03] production.WARNING: A new job was not submitted due a similar one still being outstanding. Details: {"id":"122","jobID":"sCxD6CgOu8QMTrfI1DuV","ownerID":"0","api":"ServerStatus","scope":"Server","status":"Queued","output":null,"created_at":"2015-06-08 13:00:06","updated_at":"2015-06-08 13:00:06"} {"src":"App\\Services\\Queue\\QueueHelper"} []
[2015-06-08 16:45:04] production.INFO: Started command seatscheduled:api-update-server {"src":"Seat\\Commands\\Scheduled\\EveServerUpdater"} []
[2015-06-08 16:45:04] production.WARNING: A new job was not submitted due a similar one still being outstanding. Details: {"id":"122","jobID":"sCxD6CgOu8QMTrfI1DuV","ownerID":"0","api":"ServerStatus","scope":"Server","status":"Queued","output":null,"created_at":"2015-06-08 13:00:06","updated_at":"2015-06-08 13:00:06"} {"src":"App\\Services\\Queue\\QueueHelper"} []
The question is: do the 2 instances share anything else, apart from the webserver and being on the same host?
Supervisor in this case does not look like the problem; do you use a different Redis DB for each?
I assume the fact that it worked the same way with only one instance rules that out as a reason, but can you tell me where the redis DBs are configured? Maybe I can get to a trustworthy error cause from there. The only thing I can imagine atm is the one with:
The redis & db queued counts are not the same. This is not always a bad thing
In app/config/database.php, around line 142, the 0 should be changed.
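For reference, this is roughly what the redis section of a Laravel 4 app/config/database.php looks like; the exact line number and surrounding keys may differ per SeAT version, so treat this as a sketch:

```php
// Sketch of the redis connection block in app/config/database.php
// (Laravel 4 style). The 'database' index is the value to change:
// leave it at 0 for the first SeAT instance and set it to 1 for the
// second, so the two instances don't share the same Redis keyspace.
'redis' => array(
    'cluster' => false,
    'default' => array(
        'host'     => '127.0.0.1',
        'port'     => 6379,
        'database' => 0, // second instance: 1
    ),
),
```

You can then inspect each instance's queue separately with `redis-cli -n 0 llen queues:default` and `redis-cli -n 1 llen queues:default`.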
http://chrislaskey.com/blog/342/running-multiple-redis-instances-on-the-same-server/
For n SeAT instances you should have: n cron tasks, n redis instances, n supervisor instances, n SQL DBs, n SeAT clones.
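A per-instance supervisord program block might look like the following; the program name and paths are illustrative, not taken from the thread (only the artisan command line mirrors the process list above):

```ini
; Hypothetical supervisord config for the workers of one SeAT instance.
; Duplicate this block (with its own name and install path) per instance.
[program:seat2-workers]
command=/usr/bin/php /var/www/seat2/src/artisan queue:listen --timeout=3600 --tries 1
numprocs=4
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
```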
Changing the 0 in app/config/database.php around line 142 seems to solve my issue for both instances; they are pulling the jobs out of the queue now as expected.
If this is something one of the developers wants to include in the settings, please close it. For me, changing the value from 0 to 1 for one instance worked for both instances (so the 1st is set to 0, the 2nd to 1). I have watched my queue closely for the last 5 days and couldn't find any issues, so for my case this is CLOSED.
Running multiple instances on the same host is not really something that is officially supported atm. I will tag this as an enhancement and look at having some configs prepared so that this may in fact be possible. Thanks to all for the help :)
http://prntscr.com/7b9f66
I just don't understand the queues. Are they normally this long for the rest?
http://prntscr.com/7b9fr8