dusterio / laravel-aws-worker

Run Laravel (or Lumen) tasks and queue listeners inside of AWS Elastic Beanstalk workers
MIT License

Laravel Job `failed` and `Queue::failing()` support? #43

Closed zellkz closed 11 months ago

zellkz commented 6 years ago

How does this package handle Laravel Job's failed() method and Queue::failing()? https://laravel.com/docs/5.5/queues#dealing-with-failed-jobs

Is this not supported? Do I need to implement my own failure handling instead of using Laravel's built-in failed-job logic?

aromka commented 5 years ago

Any update on this? Wondering about the same thing - can the jobs be sent into Laravel's failed jobs instead of moved to "dead" queue on AWS?

d3radicated commented 4 years ago

The README should also be updated since there is no FailedJobException and it was removed at some point (because it was unused).

mtx-z commented 4 years ago

When I used the queues auto-generated by Beanstalk, a dead letter SQS queue was created and failed jobs were pushed there. But when I use an already-created SQS queue, I don't have any dead letter queue. So I can't debug failed jobs: I only get "Job timeout" errors in my Laravel logs...

So with custom SQS queues, the only way to manage this is to implement our own failure-handling logic on each of our jobs?

Thanks
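
For anyone hitting the same wall: a dead letter queue can be attached to an already-created SQS queue after the fact by setting a redrive policy on it. A minimal sketch using boto3 (the queue URL and DLQ ARN are placeholders, not values from this thread):

```python
import json


def build_redrive_policy(dlq_arn: str, max_receive_count: int = 3) -> str:
    """Build the RedrivePolicy attribute value SQS expects (a JSON string)."""
    return json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": str(max_receive_count),
    })


def attach_dead_letter_queue(queue_url: str, dlq_arn: str,
                             max_receive_count: int = 3) -> None:
    """Point an existing SQS queue at a dead letter queue."""
    import boto3  # imported here so the pure helper above has no AWS dependency
    sqs = boto3.client("sqs")
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"RedrivePolicy": build_redrive_policy(dlq_arn, max_receive_count)},
    )


# Usage (placeholder URL/ARN -- substitute your own):
# attach_dead_letter_queue(
#     "https://sqs.us-east-1.amazonaws.com/123456789012/my-worker-queue",
#     "arn:aws:sqs:us-east-1:123456789012:my-worker-queue-dead",
#     max_receive_count=3,
# )
```

After `maxReceiveCount` failed deliveries, SQS itself moves the message to the dead letter queue, so failed jobs become inspectable there even without Beanstalk's auto-generated setup.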

dusterio commented 4 years ago

@mtx-z this is controlled by AWS EB, not this package. The AWS daemon will hit our endpoint with the payload, and if it doesn't get a valid (200) response, it's going to move the job to the dead letter queue. There is no way we can control this, I think. Can't you just manually select a dead letter queue?

dusterio commented 4 years ago

@zellkz @aromka failed jobs are already handled by Elastic Beanstalk: if a request returns a non-OK (non-200) code, EB is going to put the job in the dead letter queue.

Moving jobs to a local, Laravel-specific dead queue sounds a bit like a hack. Wouldn't it be better to make "php artisan queue:retry" work with Amazon's dead letter queue instead and fully utilise what they are providing? The dead letter queue is just an analog of Laravel's "failed_jobs" table, after all.
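
To sketch what "retry against Amazon's dead letter queue" could look like: drain the DLQ and push each message back onto the source queue, roughly what `php artisan queue:retry all` does for the `failed_jobs` table. A hedged sketch with boto3 (the SQS client is passed in explicitly, and queue URLs are whatever your setup uses):

```python
def redrive_dead_letter_queue(sqs_client, dlq_url: str, source_queue_url: str) -> int:
    """Move every message from the dead letter queue back to the source queue.
    Returns the number of messages moved."""
    moved = 0
    while True:
        resp = sqs_client.receive_message(
            QueueUrl=dlq_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=1,
        )
        messages = resp.get("Messages", [])
        if not messages:
            return moved
        for msg in messages:
            # Re-queue first, then delete from the DLQ, so a crash between the
            # two calls duplicates a job rather than losing it.
            sqs_client.send_message(QueueUrl=source_queue_url, MessageBody=msg["Body"])
            sqs_client.delete_message(QueueUrl=dlq_url, ReceiptHandle=msg["ReceiptHandle"])
            moved += 1


# Usage (run once you've fixed whatever made the jobs fail):
# import boto3
# redrive_dead_letter_queue(boto3.client("sqs"), dlq_url, source_queue_url)
```

Like the `failed_jobs` workflow, this leaves the retry moment under your control: messages sit in the DLQ until you run the redrive.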

mtx-z commented 4 years ago

@dusterio with more tests, I was able to:

But with this configuration, I'm not able to run a php artisan queue:failed to work on the failed messages, because it's SQS that POSTs messages to us. So I would need to set my queue 2 to POST to the same endpoint on my application. But in that case, it will immediately fail again (message goes to queue 1, fails, is pushed to queue 2, which POSTs to our app and fails again for the same reason, theoretically). What I would need is:

When working with the classic "failed_jobs" table, I decide when Laravel should retry the failed job: once I've fixed the issue ^^.

Maybe I'm missing something? Thx

canast02 commented 3 years ago

Any update or suggestions on this?

un-code-dev commented 2 years ago

Any updates or suggestions?

mcandylab commented 1 year ago

Any updates or suggestions?

aromka commented 1 year ago

I would suggest switching to Laravel Horizon + Redis (ElastiCache). You can actually monitor what's going on with your queues, you don't need separate worker servers (just scale your regular AWS fleet if needed), and you don't need any third-party libs. It works like magic on AWS EB with no issues, and if you're already using Redis for caching, switching is even easier.