Closed: djoks closed this issue 7 years ago
This sounds like a code issue and not a Laravel issue. I have an API running on Lumen 5.3 that handles over 100,000 requests per hour and runs over 5,000 jobs per hour without any issues.
If you want us to inspect your code for issues, I would kindly ask you to try the Laracasts forum ;)
Well, I can certainly post the code here for you to see, if you're that certain it's a code issue.
This is not a code issue; all my code does is send an SMS via cURL. schedule:run just hangs, and so does queue:work. Even when there are no scheduled tasks to run, it just hangs and piles up until my server crashes.
@djoks - it is definitely a code issue.
I run large Lumen queues without issue, as do many others.
If you have a memory leak somewhere, then just run queue:listen instead, which will cycle the entire framework on each call and should remove memory issues.
"all my code does is send an SMS via cURL"
I note that you are using file_get_contents(), which is not cURL. You should really use Guzzle or similar.
I also note that you are suppressing errors with @file_get_contents(), so you could easily be hiding the errors that are causing your application to crash.
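For illustration, a minimal sketch of what that request could look like with Guzzle instead of a suppressed file_get_contents(). The gateway URL, query parameters, and logging calls below are placeholders, not the actual code from this app:

use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;

// Hypothetical SMS call; the URL and parameters are placeholders.
$client = new Client(['timeout' => 10]);

try {
    $response = $client->get('https://sms-gateway.example.com/send', [
        'query' => [
            'to'      => $telNumber,
            'message' => $message,
        ],
    ]);
    Log::info('SMS gateway response: ' . $response->getBody());
} catch (RequestException $e) {
    // Unlike @file_get_contents(), failures are surfaced instead of silently swallowed.
    Log::error('SMS send failed: ' . $e->getMessage());
}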
Actually, that code is old; I switched to cURL after deciding that it might be an issue with file_get_contents, but I still have the same problem. I even took out the loop so that instead of one job executing 4 times, I had 4 separate jobs each executing once. I will update my original post to show my current code. If you can see something wrong with it, then let me know, because I honestly cannot.
Did you try switching to queue:listen? That will reboot the whole framework on each call, ensuring all memory is released.
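For reference, that is just a matter of running something like the following instead of queue:work (the connection name and flags here are illustrative and may differ by version):

php artisan queue:listen redis --sleep=3 --tries=3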
When I use queue:listen or queue:work directly I don't seem to have any issues (at least not for the short duration I tested), but when I use schedule:run to run queue:work from cron, that's when everything starts going wrong.
Oooooooooo - THAT is your problem.
You should never be running queue:work from the scheduler!
You should be running queue:work as its own daemon process which is constantly running, and have the scheduled jobs push onto the queue separately.
I suggest asking about this on the forums to discuss further.
Ohh, alright. I could have sworn I followed the docs, though. Anyway, do you suggest I create a second cron to handle queue:work?
Just to be clear about what you are saying, this is my kernel.php:
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Laravel\Lumen\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    /**
     * The Artisan commands provided by your application.
     *
     * @var array
     */
    protected $commands = [
        //\JK\Dingo\Api\Console\Commands\RouteListCommand::class
    ];

    /**
     * Define the application's command schedule.
     *
     * @param  \Illuminate\Console\Scheduling\Schedule  $schedule
     * @return void
     */
    protected function schedule(Schedule $schedule)
    {
        // Run once a minute
        $schedule->command('queue:work')->everyMinute()->sendOutputTo(storage_path('logs/scheduler.log'), true);
    }
}
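For contrast, a sketch of what the schedule method could look like once queue:work is removed from the scheduler: the worker runs separately as a daemon, and scheduled entries only push jobs onto the queue. The closure and the job constructor arguments below are purely illustrative, not the app's actual code:

    protected function schedule(Schedule $schedule)
    {
        // Do NOT start queue workers here; run them as a long-lived daemon instead.
        // Scheduled entries should only dispatch work onto the queue.
        $schedule->call(function () {
            dispatch(new \App\Jobs\SendDepositSMSAlertJob(/* ... */));
        })->everyMinute();
    }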
"so then do you suggest I create a second cron to handle queue:work?"
No, you don't use cron to run queue:work at all. You need to create a daemon process that has nothing to do with cron.
You'll have to ask on the forums; this isn't the right place to discuss it.
Edit: this is a starting point: https://laracasts.com/discuss/channels/forge/create-a-single-queue-worker-processing-multiple-queues
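As a rough illustration of that "daemon process" idea (and what the thread linked above describes), many setups run the worker under Supervisor with something along these lines; the program name, paths, user, and worker flags are assumptions to adapt to your own server:

[program:lumen-queue-worker]
command=php /var/www/your-app/artisan queue:work redis --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/your-app/storage/logs/worker.log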
I am using Lumen 5.3 and followed the docs on how to schedule a job. I did everything as instructed and it worked, but then I noticed that my server kept running out of memory and crashing. I investigated the issue and found that MySQL was using too much memory and crashing. I traced it to the job I was running, since I was using the database driver for my queues.
So I switched to Redis, which eliminated the issue of MySQL eating up my memory, but my server still crashed and I had to reboot.
Support says it was because the queue:work command had over 40 processes in memory, a lot of them defunct, so I checked my process list and saw this was true. Somehow, for one simple job, the queue:work command creates more and more processes until eventually my server runs out of memory and crashes.
Dispatching job
$this->dispatch(new SendDepositSMSAlertJob($bank . ' - ' . $acNumber, $depositType, $depositAmount, $reportDate, $telNumbers));
SendDepositSMSAlertJob
I have used this same SMS script/job in another Laravel app for over a year now and never had an issue like this. That app was v5.2, so I am totally confused here. How on earth can this be a code issue? If it is, then it's certainly not MY code.
Screenshots
http://prnt.sc/cx0e2s http://prnt.sc/cx0e8b http://prnt.sc/cx0efp