Closed: nowak-ninja closed this issue 7 years ago.
I don't think it's a good idea to use `__destruct()` in a daemon script like `queue:work`. I'd just create a method and call it once in `handle()` and again in `failed()` if you want it to run on both success and failure.
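That suggestion can be sketched framework-free; `ExampleJob`, `cleanup()`, and the `$log` array below are hypothetical names invented for this illustration, not code from the thread:

```php
<?php
// Framework-free sketch of the suggestion above: shared teardown lives in
// one cleanup() method, called from handle() on success and from failed()
// on failure, instead of relying on __destruct().
class ExampleJob
{
    public array $log = [];

    public function handle(): void
    {
        $this->log[] = 'work done';
        $this->cleanup();              // success path
    }

    public function failed(Throwable $e): void
    {
        $this->cleanup();              // failure path
    }

    private function cleanup(): void
    {
        $this->log[] = 'cleaned up';   // shared teardown logic
    }
}

$job = new ExampleJob();
$job->handle();
// $job->log is now ['work done', 'cleaned up']
```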
@themsaid I needed to run some logic after the Job is complete, and I would not put this in a global listener as I prefer to keep such logic in the Job class itself. I decided to build a Trait to handle this, using `__destruct()` and an abstract protected method fired when the Job object is unset. In my case I have many Jobs dispatched on the queue, so in the end my solution should work fine, but there will still be a "hole" which could lead to unpredictable behaviour for some users. The best would be to have `start()`, `failure()`, and `complete()` fired at certain moments of the Job's lifecycle :)
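The trait described above might look roughly like this (a hypothetical reconstruction; `FiresOnDestruct`, `BatchedJob`, and `onComplete()` are made-up names). It also shows why the pattern hinges on when PHP actually frees the object, which is exactly what `queue:work` delays:

```php
<?php
// Hypothetical reconstruction of the trait described above: __destruct()
// forwards to an abstract hook that each Job implements. The hook fires
// only when the object is actually freed.
trait FiresOnDestruct
{
    abstract protected function onComplete(): void;

    public function __destruct()
    {
        $this->onComplete();
    }
}

class BatchedJob
{
    use FiresOnDestruct;

    public static array $events = [];

    protected function onComplete(): void
    {
        self::$events[] = 'complete';
    }
}

$job = new BatchedJob();
unset($job);   // dropping the last reference triggers __destruct() immediately
// BatchedJob::$events is now ['complete']
```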
@nowak-ninja Why don't you put that logic at the end of the `handle()` method then? That means the job is complete by that point.
@themsaid I am trying to implement a simple way to process batch jobs and track each batch's progress. The Trait I wrote provides access to a batch, which consists of two related Models: Batch and BatchItems. On the first dispatch I inject the Batch model into the Job, and I pass it on to each subsequently dispatched Job. Some jobs, during the flow, make changes to a BatchItem's metadata attribute, status attribute, etc., and this must happen after the Job is "done". The thing is, all these Jobs are not bound to always be used with Batches; they could run separately without doing anything to Batches. So, imagine all the `if` conditions needed to handle this. Yep, I guess my code is an edge case, but I found something which blocks me from using `__destruct`.
@nowak-ninja - I think what @themsaid is saying is that what you have now is:

```php
public function handle()
{
    // your current job code
}

public function __destruct()
{
    // your current destruct code
}
```

You could just do:

```php
public function handle()
{
    // your current job code
    // your current destruct code
}
```

and that should give you the same result, because once `handle()` is finished, the class should be destructed, as its work is done?
@laurencei There are a couple of ways of doing this, but I would like to keep it separate from the `handle()` method and without lots of `if`s, hence I thought about using `__destruct()` and ran into this issue. Nevertheless, to get back to the point of this issue: can `queue:work` unset the Job object and then wait for the next one?
You can't use `__destruct()`, because the job object is destructed even before it's processed by the queue: since you use `dispatch(new Job())`, that `Job` instance will be freed when your script finishes execution, before the worker even starts working on it.
As an alternative, I've submitted this PR: https://github.com/laravel/framework/pull/18605
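That timing can be demonstrated without Laravel at all; `TracedJob` and `fakeDispatch()` below are stand-ins invented for this demo, with serialization playing the role of pushing the job onto a queue:

```php
<?php
// Framework-free demonstration of the timing problem: the instance passed
// to the dispatcher is serialized, and the original object is destructed
// as soon as its references drop - before any worker ever runs it.
class TracedJob
{
    public static array $trace = [];

    public function __destruct()
    {
        self::$trace[] = 'destructed';
    }
}

function fakeDispatch(object $job): string
{
    // A real dispatcher serializes the job for the queue store.
    return serialize($job);
}

$payload = fakeDispatch(new TracedJob());
// The temporary instance has no remaining references, so __destruct()
// has already run by this point:
// TracedJob::$trace is now ['destructed']

$worker = unserialize($payload);   // the "worker" gets a different instance
```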
@themsaid Such a feature would be awesome! Thank You.
@nowak-ninja quoting from Taylor's comment, you can do:

```php
try {
    // job logic...
} finally {
    // this will always run after everything else has finished...
}
```

https://github.com/laravel/framework/pull/18605#issuecomment-290920457
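As a runnable illustration of that pattern (the `runJob()` helper and `$log` array are invented for this demo), the `finally` block runs on both the success and the failure path:

```php
<?php
// finally runs whether the job logic succeeds or throws.
function runJob(bool $shouldFail, array &$log): void
{
    try {
        if ($shouldFail) {
            throw new RuntimeException('job blew up');
        }
        $log[] = 'handled';
    } finally {
        $log[] = 'cleanup';   // always runs, even when an exception is thrown
    }
}

$log = [];
runJob(false, $log);          // success path

try {
    runJob(true, $log);       // failure path: finally runs, then rethrows
} catch (RuntimeException $e) {
    // exception still propagates after the finally block has run
}
// $log is now ['handled', 'cleanup', 'cleanup']
```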
Guys, I really appreciate your effort, but my goal was to not put additional logic into `handle()`. That's why I used `__destruct()`. I made a Trait to allow some of the jobs to do more than usual. Anyway, thank you for your commitment 🥇
Out of curiosity, is this `__destruct()` not being run when the job has finished still an issue (Laravel 7+)? Was the queue worker effectively holding onto the job instance until it gets the next job to handle?
I'm wanting to do some cleaning up after a job ends - no matter how it ends - and also would like to be able to add a trait to the jobs that need the cleanup, rather than mess with the logic within the job itself.
Update: just tested on Laravel 7. Yes, it is still happening.
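One hedged alternative for trait-free, framework-level cleanup: Laravel's documented queue job events fire after success and on failure without touching the job's own logic. This is a registration fragment (typically placed in a service provider's `boot()` method), a sketch only and not tested against a specific Laravel version:

```php
use Illuminate\Support\Facades\Queue;
use Illuminate\Queue\Events\JobProcessed;
use Illuminate\Queue\Events\JobFailed;

Queue::after(function (JobProcessed $event) {
    // runs after each successfully processed job
});

Queue::failing(function (JobFailed $event) {
    // runs whenever a job fails
});
```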
Oh my, almost 4 years after the issue and a new comment :) I no longer write PHP, but I do remember that this behaviour was a surprise, since I had assumed it would work with `__destruct`. I also remember that I found a way to run the queue differently to fix this behaviour. I can't check right now, but I think it was related to https://laravel.com/docs/8.x/queues#the-queue-work-command (`queue:work` vs `queue:listen`). One of them ran without touching the destructor and the other ran the full cycle.
I'm switching to `queue:listen`, but that is a bit of a brutal solution. I think a simple `unset($job)` in the right place in the framework would fix it for `queue:work` too. The framework basically holds onto the last job until it gets a new job to run. Anyway - sorry for digging up ancient history, and good luck in all your future endeavours :-) This issue is a good place to launch a possible PR.
Good luck :)
@judgej: you can add the `failed()` method.
Description:
When I run `queue:work`, the `__destruct()` method on the Job itself isn't called until the next job is processed on the queue; `queue:listen` works fine. Is this by design?

Steps To Reproduce:
Just dispatch any job containing a `__destruct()` method which writes something to a log file, and see that it does nothing until the next job lands on the queue and is processed.
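A minimal reproduction along those lines might look like this hypothetical job class (the class name and log messages are illustrative, not from the original report):

```php
namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Log;

class DestructProbe implements ShouldQueue
{
    public function handle(): void
    {
        Log::info('DestructProbe handled');
    }

    public function __destruct()
    {
        // Under queue:work this line is expected to appear in the log only
        // once the *next* job is picked up; under queue:listen it appears
        // right after processing.
        Log::info('DestructProbe destructed');
    }
}
```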