orobogenius / sansdaemon

Batch process Laravel Queue without a daemon; Processes queue jobs and kills the process
https://medium.com/@orobogenius/laravel-queue-processing-on-shared-hosting-dedd82d0267a
MIT License

Feature request - Don't run next job to prevent max execution time exceeded #3

Closed: arxeiss closed this issue 6 years ago

arxeiss commented 6 years ago

On some web hosts (like mine) I'm not able to run artisan from the command line, so I have to run it via a route. But there is a limit on how long a script can run; after that it is killed and a Maximum execution time of XX seconds exceeded error is shown.

See pull request #4

orobogenius commented 6 years ago

Yeah, this is an error you're most likely to run into when running the command from a route or from a controller.

This would only work in cases where the system can quickly detect that the execution time is almost up. In cases where it has already scheduled a particularly time-consuming task, the script will still get killed and the Maximum execution time error will still be shown.

But I guess it's still useful: the worker won't schedule a task when the time is almost up, and thus avoids abrupt script termination in those cases.

How often did you encounter the Maximum execution time error?

arxeiss commented 6 years ago

As I said, my hosting has no command-line access, so I have to run it via a controller/route, as you said. That's why I added all this logic.

Of course, if you have a super time-consuming task, the error will still be shown. This is just an attempt to prevent it: before running each new task, the worker checks whether the execution time is approaching its limit. That's why I added a 5s gap, assuming that most tasks will need less than 5s to finish.

If you have more time-consuming tasks (let's say 10s each), then it is better to set --max_exec_time to something like ini_get('max_execution_time') - 10. If your script can run for at most 30s, then after 2 processed tasks the whole script has been executing for about 20s plus some Laravel overhead, so the worker will not run the next job because you are already over your set limit. Hope that explains it better.
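For illustration, a minimal sketch of what I mean, assuming the worker is triggered with Artisan::call as shown later in this thread (the 10s margin matches the 10s-per-task example above):

Artisan::call('queue:work', [
    '--sansdaemon'    => true,
    // leave a 10s safety margin below PHP's own execution limit
    '--max_exec_time' => (int) ini_get('max_execution_time') - 10,
]);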

I encountered that error while importing payments. After an import, the system can create up to 100 notifications to tell users we accepted their bank transactions. And as we grow, in the future we may import even more payments, which will produce even more notifications.

orobogenius commented 6 years ago

Yeah, I get the full picture and it certainly would be a nice addition to the worker.

What do you think about restricting the check for whether max_execution_time has been exceeded to when the command is run over HTTP?

protected function jobShouldProcess($connectionName, $queue, $options)
{
    // We don't want to consider execution time when app is running in
    // console. Running from console isn't limited by execution time and
    // we shouldn't bother executing extra checks. This would allow the
    // worker to focus on executing jobs instead.
    if (! app()->runningInConsole()) {
        if ($this->isOverMaxExecutionTime($options)) {
            return false;
        }
    }

    //...
}
orobogenius commented 6 years ago

Also, I can't seem to replicate the Maximum execution time error in my local environment. When I run the command from a route, I get an InvalidOptionException telling me The "--sansdaemon" option does not exist. You say you've been able to execute the command with the --sansdaemon option from your application route; would you mind providing a snippet of the code so I can compare?

arxeiss commented 6 years ago

That could be another option as well, but there are 2 things to consider.

  1. In console max_execution_time is already 0, and the condition below takes care of it, so I'm not sure one extra condition is really needed. If you run it from the CLI, one condition is always checked, either the one below or yours; it probably doesn't matter which. Only during HTTP would 2 conditions have to be checked.

     protected function isOverMaxExecutionTime($options)
     {
         // If set to 0 or less -> the worker can run forever
         if ($options->maxExecutionTime <= 0) {
             return false;
         }

         // ...
     }

  2. If you set --max_exec_time even in console, for whatever reason, this option will stop working. So either leave your condition out, or update the description to say that this option only works during HTTP requests.


Yes, it works in a route as expected. You have to pass them as extra parameters:

Artisan::call('queue:work', ['--tries' => 5, '--sansdaemon' => true]);
orobogenius commented 6 years ago

Yes, that option takes care of it, but when you set --max_exec_time in console (for whatever reason), it still checks whether or not the execution time has been exceeded, and does so each time.

Maybe this might do:

// If set to 0 (or less), or the app is running in console -> worker can run forever
if ($options->maxExecutionTime <= 0 || app()->runningInConsole()) {
    return false;
}

That's what I'm doing; it still doesn't work, I wonder why.


Artisan::call('queue:work', [
    '--sansdaemon' => true, '--jobs' => 1, '--queue' => 'high,default'
]);
arxeiss commented 6 years ago

Not sure what you mean by:

it still checks whether or not the execution time has been exceeded, and does so each time.

In your snippet it will do the same. And because $options->maxExecutionTime <= 0 will always be true in console, the second condition after || will never be checked.

I also meant that you can set --max_exec_time=60 even if you are running from console, for example if you don't want your worker to run too long even in console. That's why the check should be there. But with your extra code the worker will not respect this option and will run forever even when you want to limit it. That's why I think this extra condition is useless.
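To make the point concrete, here is a hypothetical console run (illustrative only, not from the package docs) showing how the extra condition would defeat a deliberate limit:

// php artisan queue:work --sansdaemon --max_exec_time=60
//
// Inside isOverMaxExecutionTime(), with the proposed extra check:
if ($options->maxExecutionTime <= 0 || app()->runningInConsole()) {
    // runningInConsole() is true here, so this always reports "not over
    // the limit" and the 60s cap above is never enforced.
    return false;
}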


Not sure what your problem is with running it from a route. Here is my full example. I'm running Laravel 5.6, so I just installed your package and did not register the service provider, as that is done automatically.

Route::get('/artisan/queue', function ()
{
    $output = new \Symfony\Component\Console\Output\BufferedOutput;

    Artisan::call('queue:work', [
        '--tries'   => 5,
        '--sansdaemon' => true,
        '--jobs'    => 1,
    ], $output);

    return response($output->fetch(), 200)
                  ->header('Content-Type', 'text/plain');
});

And the output:

[2018-09-05 12:56:17][7] Processing: SunApp\Notifications\OrderCreatedNotification
[2018-09-05 12:56:23][7] Processed:  SunApp\Notifications\OrderCreatedNotification
orobogenius commented 6 years ago

I guess I was just considering the fact that if you're running it from console, you shouldn't really care about max_exec_time, but enforcing that in the package (disabling the max_exec_time option in console) might be too rigid, so I get your point.

That aside, all looks good. I'll merge as soon as I'm able to test running it over HTTP.

arxeiss commented 6 years ago

Yeah, it is up to you. If you don't want to limit this in console, just keep your condition there. But I think that would overcomplicate the already complicated description of this option.

Maybe a stupid question, but do you have your package installed? Because I really did nothing special and it worked the first time I tried it...

orobogenius commented 6 years ago

😅 Yes, I've installed my package. Don't worry about that, I'll sort it out, thanks for the contribution.

arxeiss commented 6 years ago

@orobogenius do you remember how you got it running via routes? Because now I find it a little tricky. It sometimes works and sometimes doesn't, and I can't figure out why the behaviour changes...

orobogenius commented 6 years ago

I didn't do much else @arxeiss; surprisingly, it just started working by itself. Weird!

arxeiss commented 6 years ago

Yeah. When I find more time, I will try to dig deeper. This is really weird.

orobogenius commented 6 years ago

Alright, let me know what you find.

arxeiss commented 5 years ago

I don't know why, but if APP_DEBUG=false is set in the .env file, it works as expected. And if debug is true, it doesn't work.

I found on StackOverflow that the binding in the ServiceProvider should be done with extend, not singleton.

When I changed it as follows, it worked, but I haven't tested it thoroughly. I will try to test it more and make a PR, maybe next week.

$this->app->extend('command.queue.work', function ($command, $app) {
    return new WorkCommand($app['queue.worker']);
});
orobogenius commented 5 years ago

I just tried it again today in a new environment, everything is in place, and it works. I tried the .env configuration you mentioned and it didn't have any effect; whether APP_DEBUG was true or false, it worked. I upgraded to the latest stable release of Laravel, and it still worked.

Using extend instead of singleton also does not have any effect for the most part, and it isn't necessarily true that bindings in ServiceProviders should be done with extend; you use different methods based on the kind of service you're registering. In the end, they both provide the same result, which is returning our custom service when we resolve the queue work command.

I ran a bunch of other tests, and the Service Container does actually return the custom service when resolved, because its Service Provider is also deferred and it replaces the original service.

Would be nice if I could replicate the issue exactly as it is on your box.

orobogenius commented 5 years ago

@arxeiss The issue has now been fixed. The problem you were having was most likely related to indirectly resolving the queue service from the container before the custom work command was called. This caused the original work command to replace the previous binding to the Sansdaemon work command with Laravel's WorkCommand.

As you mentioned earlier, the fix was extending the original command instead of trying to replace it. This allows the custom command to act as a decorated instance of the original command.
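A rough sketch of the difference; the singleton version below is an assumption based on this thread, not the package's exact original code:

// Before (assumed): replacing the binding outright. If the queue service
// is resolved first, the framework re-binds Laravel's own WorkCommand and
// this replacement is lost.
$this->app->singleton('command.queue.work', function ($app) {
    return new WorkCommand($app['queue.worker']);
});

// After: extending the binding. The closure runs whenever the service is
// resolved, so the Sansdaemon WorkCommand decorates/replaces whatever
// instance the container would otherwise return.
$this->app->extend('command.queue.work', function ($command, $app) {
    return new WorkCommand($app['queue.worker']);
});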

arxeiss commented 5 years ago

Good news :) I'm still using it and it works perfectly. Now it will be even better 👍