Closed: zellkz closed this issue 11 months ago.
Any update on this? Wondering about the same thing: can the jobs be sent to Laravel's failed jobs table instead of being moved to the "dead" queue on AWS?
The README should also be updated, since there is no `FailedJobException` anymore; it was removed at some point because it was unused.
When I used the queues auto-generated by Beanstalk, a dead letter SQS queue was created and failed jobs were pushed there. But when I use an already-created SQS queue, I don't have any dead letter queue, so I can't debug failed jobs: I only get "Job timeout" entries in my Laravel logs...
So with custom SQS queues, the only way to manage this is to handle failures ourselves with custom logic in each of our jobs?
Thanks
@mtx-z this is controlled by AWS EB, not this package? The AWS daemon will hit our endpoint with the payload, and if it doesn't get a valid (200) response it's going to move the job to the dead letter queue. There is no way we can control this, I think. Can't you just manually select a dead letter queue?
@zellkz @aromka failed jobs are already handled by Elastic Beanstalk: if a request returns a non-200 (non-OK) code, EB is going to put the job in the dead letter queue.
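For what it's worth, if the problem is that a hand-made queue has no dead letter queue at all, here is a rough sketch of attaching one yourself via a redrive policy using the AWS SDK for PHP. The region, queue names, URLs, and receive count are placeholders, and none of this is something this package sets up:

```php
<?php

// Sketch only (not part of this package): attach a dead letter queue to a
// manually created SQS queue by setting a RedrivePolicy, so EB has somewhere
// to move jobs that keep failing.

use Aws\Sqs\SqsClient;

require __DIR__ . '/vendor/autoload.php';

$sqs = new SqsClient(['region' => 'us-east-1', 'version' => 'latest']);

// Create (or look up) the dead letter queue and grab its ARN.
$deadLetterUrl = $sqs->createQueue(['QueueName' => 'my-app-dead-letter'])->get('QueueUrl');
$deadLetterArn = $sqs->getQueueAttributes([
    'QueueUrl'       => $deadLetterUrl,
    'AttributeNames' => ['QueueArn'],
])->get('Attributes')['QueueArn'];

// Point the main queue at it: after 5 failed receives, SQS moves the message there.
$sqs->setQueueAttributes([
    'QueueUrl'   => 'https://sqs.us-east-1.amazonaws.com/123456789012/my-app-queue',
    'Attributes' => [
        'RedrivePolicy' => json_encode([
            'deadLetterTargetArn' => $deadLetterArn,
            'maxReceiveCount'     => '5',
        ]),
    ],
]);
```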
Moving jobs to a local, Laravel-specific dead queue sounds a bit like a hack? Wouldn't it be better to make `php artisan queue:retry` work with Amazon's dead letter queue instead and fully utilise what they are providing? The dead letter queue is just an analog of Laravel's `failed_jobs` table, after all.
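As a sketch of what a DLQ-aware retry could look like, assuming the AWS SDK for PHP and a made-up `dead_letter_url` config entry for the second queue's URL (this command does not exist in Laravel or in this package):

```php
<?php

// Sketch only: a custom Artisan command that drains Amazon's dead letter queue
// back onto the main queue, mimicking "php artisan queue:retry" for SQS. The
// command name and the "dead_letter_url" config key are made up for this example.

namespace App\Console\Commands;

use Aws\Sqs\SqsClient;
use Illuminate\Console\Command;

class RetryDeadLetterQueue extends Command
{
    protected $signature = 'queue:retry-dead-letter';
    protected $description = 'Move all messages from the SQS dead letter queue back to the main queue';

    public function handle()
    {
        $sqs = new SqsClient([
            'region'  => config('queue.connections.sqs.region'),
            'version' => 'latest',
        ]);

        $deadLetterUrl = config('queue.connections.sqs.dead_letter_url'); // hypothetical key
        $mainQueueUrl  = config('queue.connections.sqs.prefix') . '/' . config('queue.connections.sqs.queue');

        do {
            $messages = $sqs->receiveMessage([
                'QueueUrl'            => $deadLetterUrl,
                'MaxNumberOfMessages' => 10,
                'WaitTimeSeconds'     => 1,
            ])->get('Messages') ?? [];

            foreach ($messages as $message) {
                // Re-queue the original payload first, then remove it from the DLQ.
                $sqs->sendMessage([
                    'QueueUrl'    => $mainQueueUrl,
                    'MessageBody' => $message['Body'],
                ]);

                $sqs->deleteMessage([
                    'QueueUrl'      => $deadLetterUrl,
                    'ReceiptHandle' => $message['ReceiptHandle'],
                ]);
            }
        } while (count($messages) > 0);

        $this->info('Dead letter queue drained.');
    }
}
```

Messages are deleted from the dead letter queue only after they have been re-sent, so a crash mid-run can duplicate a job but never lose one.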
@dusterio with more tests, I was able to:
But with this configuration, I'm not able to run `php artisan queue:failed` to work on failed messages, because it's SQS that POSTs the messages to us.
So I would need to set my queue 2 to POST to the same endpoint on my application. But in that case it will immediately fail again (message goes to queue 1, fails, gets pushed to queue 2, which POSTs to our app and fails again for the same reason, theoretically).
What I would need is:
When working with the classic failed_jobs table, I decide when Laravel should retry the failed job: once I've fixed the issue ^^.
Maybe I'm missing something? Thx
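In case it helps while waiting for a proper answer, one workaround is to leave queue 2 unattached to any worker environment and just peek at it on demand, roughly what `php artisan queue:failed` does for the failed_jobs table. A sketch, again with a hypothetical `dead_letter_url` config key and a made-up command name:

```php
<?php

// Rough idea only: read the dead letter queue on demand instead of letting a
// second worker environment POST its messages back at the app.

namespace App\Console\Commands;

use Aws\Sqs\SqsClient;
use Illuminate\Console\Command;

class ListDeadLetterJobs extends Command
{
    protected $signature = 'queue:dead-letter';
    protected $description = 'List the messages currently sitting in the SQS dead letter queue';

    public function handle()
    {
        $sqs = new SqsClient([
            'region'  => config('queue.connections.sqs.region'),
            'version' => 'latest',
        ]);

        $result = $sqs->receiveMessage([
            'QueueUrl'            => config('queue.connections.sqs.dead_letter_url'),
            'MaxNumberOfMessages' => 10,
            'WaitTimeSeconds'     => 1,
            // Make the messages visible again quickly, since we only want to peek.
            'VisibilityTimeout'   => 5,
        ]);

        foreach ($result->get('Messages') ?? [] as $message) {
            $this->line($message['MessageId'] . ': ' . $message['Body']);
        }
    }
}
```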
Any update or suggestions on this?
Any updates or suggestions?
I would suggest switching to Laravel Horizon + Redis (ElastiCache). You can actually monitor what's going on with your queues, you don't need separate worker servers (just scale your regular AWS fleet if needed), and you don't need any third-party libs. It works like magic on AWS EB with no issues, and if you're already using Redis for caching, the switch is even easier.
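For reference, a minimal sketch of the queue config for that switch, assuming a stock Laravel app whose default Redis connection already points at the ElastiCache endpoint (values are placeholders):

```php
<?php

// config/queue.php (excerpt): switch the default connection to Laravel's
// built-in Redis queue driver. Point your Redis connection at the ElastiCache
// endpoint in config/database.php / .env.

return [

    'default' => 'redis',

    'connections' => [

        'redis' => [
            'driver'      => 'redis',
            'connection'  => 'default',
            'queue'       => 'default',
            'retry_after' => 90,
        ],

    ],

];
```

Horizon itself is added with `composer require laravel/horizon` and kept running on the instances with `php artisan horizon`.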
How does this package handle Laravel Job's `failed()` method and `Queue::failing()`? https://laravel.com/docs/5.5/queues#dealing-with-failed-jobs
Is this not supported? Do I need to create my own `failed` handling, and not use Laravel's built-in `failed` logic?
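For context, these are the standard hooks from the linked 5.5 docs (the job class name here is just a placeholder); whether this package fires them for jobs posted by the Elastic Beanstalk daemon is exactly the open question:

```php
<?php

// Illustrative only: the stock Laravel 5.5 failure hooks from the docs linked above.

namespace App\Jobs;

use Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class ProcessPodcast implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    public function handle()
    {
        // Normal job logic here.
    }

    // Laravel calls this when the job has permanently failed.
    public function failed(Exception $exception)
    {
        // Notify the team, roll back partial work, etc.
    }
}

// The global hook, typically registered in a service provider's boot() method:
//
// use Illuminate\Support\Facades\Queue;
// use Illuminate\Queue\Events\JobFailed;
//
// Queue::failing(function (JobFailed $event) {
//     // $event->connectionName, $event->job, $event->exception
// });
```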