Closed tommerrett closed 3 years ago
Hi, this is a nice idea and I didn't realize it was missing. I have started working on a first pass for this feature, and I'll let you know once it's nearing completion.
Thanks, let me know if you would like any support
Hey, I have tagged a v2.1.0-beta1 release that includes this new feature. Is there a chance you can help test this and verify that it matches your needs?
Next week I will be able to test it on one of my own projects.
It now respects the max attempts per task, so once a task has reached its max attempts, the failed job is logged (to the database, if so configured).
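To illustrate, here is a simplified sketch of that check (illustrative only, not the package's actual code): a job is logged as failed once it has used up its configured attempts.

```php
<?php
// Simplified sketch of the max-attempts behaviour described above;
// this is illustrative only, not the package's implementation.
function shouldLogAsFailed(int $attempts, int $maxTries): bool
{
    // Attempts are counted from 1; once the configured maximum is
    // reached, the job is logged as failed instead of being retried.
    return $attempts >= $maxTries;
}

var_dump(shouldLogAsFailed(3, 3)); // bool(true)  - third of three tries
var_dump(shouldLogAsFailed(1, 3)); // bool(false) - still retryable
```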
Great thanks @marickvantuil I will test it and let you know.
On a side note, do you experience high latency from the task queues API when creating tasks through this package? I am experiencing > 350ms latency and would expect this to be <100ms.
Hi @marickvantuil, I have tested this and it is working as expected in my application. I had to set the `failed.driver` value in `config/queue.php` to `database-uuids` as below, because my `failed_jobs` table has the `uuid` column as mandatory.

```php
'failed' => [
    'database' => env('DB_CONNECTION', 'mysql'),
    'table' => 'failed_jobs',
    'driver' => 'database-uuids',
],
```
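For reference, the `uuid` column comes from the default `failed_jobs` migration shipped with recent Laravel versions, which looks roughly like this (reproduced from memory, so double-check against your own migration):

```php
Schema::create('failed_jobs', function (Blueprint $table) {
    $table->id();
    $table->string('uuid')->unique(); // mandatory, hence the database-uuids driver
    $table->text('connection');
    $table->text('queue');
    $table->longText('payload');
    $table->longText('exception');
    $table->timestamp('failed_at')->useCurrent();
});
```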
You may want to add this to the readme for newer versions of Laravel. From my perspective, the branch is ready to be released out of beta. Thank you for all your work.
Thank you. I have tagged 2.1.0. Apparently 2.1.0 already existed as a version (it must have been a typo), so I had to remove it first and re-create it with these changes.
> On a side note, do you experience high latency from the task queues API when creating tasks through this package? I am experiencing > 350ms latency and would expect this to be <100ms.
Getting around 250ms. I found this in their documentation:

> Explicitly specifying a task ID enables task de-duplication. If a task's ID is identical to that of an existing task or a task that was deleted or executed recently then the call will fail with google.rpc.Code.ALREADY_EXISTS. If the task's queue was created using Cloud Tasks, then another task with the same name can't be created for ~1 hour after the original task was deleted or executed. If the task's queue was created using queue.yaml or queue.xml, then another task with the same name can't be created for ~9 days after the original task was deleted or executed.
>
> Because there is an extra lookup cost to identify duplicate task names, these tasks.create calls have significantly increased latency. Using hashed strings for the task id or for the prefix of the task id is recommended. Choosing task ids that are sequential or have sequential prefixes, for example using a timestamp, causes an increase in latency and error rates in all task commands. The infrastructure relies on an approximately uniform distribution of task ids to store and serve tasks efficiently.
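To make the recommendation concrete, here is a hedged sketch of deriving a uniformly distributed task ID from a hash instead of a timestamp (the job key below is made up; this is not code from the package):

```php
<?php
// Sketch of the documented recommendation: hash a stable job key so
// task IDs are approximately uniformly distributed, instead of using a
// sequential prefix such as a timestamp. Illustrative only.
function hashedTaskId(string $jobKey): string
{
    // sha256 output is 64 hex characters, which are valid in a
    // Cloud Tasks task ID and uniformly distributed.
    return hash('sha256', $jobKey);
}

$discouraged = 'job-' . time();           // sequential prefix: slower lookups
$recommended = hashedTaskId('job-12345'); // hashed: recommended by the docs
echo $recommended, PHP_EOL;
```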
Though this package isn't using explicitly defined task IDs, and the IDs I am seeing are already random (and, I think, without de-duplication).
Maybe it just is that slow? 😬
**Problem**

Currently jobs fail and are not written to the `failed_jobs` table.

**Enhancement request**

Log failed jobs to the `failed_jobs` table.
**Description of how it is handled by Laravel in the Queue Worker**

With the native Laravel queue worker command, failed jobs are caught and logged to the `failed_jobs` table. Details of the Laravel implementation are in the `Illuminate\Queue\Console\WorkCommand` class. Specifically, it listens for the `JobFailed` event and then calls `$this->logFailedJob()`, which writes the job to the failed-job store. This functionality is not currently provided by the laravel-google-tasks-queue package, and it would be a great addition to track failed jobs.
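For context, here is a rough, framework-free approximation of that flow (names simplified; see `Illuminate\Queue\Console\WorkCommand` and the configured queue failer for the real implementation): the `JobFailed` event hands the job to a failer, which inserts a row into `failed_jobs`.

```php
<?php
// Rough, framework-free sketch of Laravel's failed-job logging flow.
// In the real framework the failer implements
// Illuminate\Queue\Failed\FailedJobProviderInterface and inserts into
// the failed_jobs table; here we just collect the rows in memory.
final class FailedJobLogger
{
    /** @var array<int, array<string, string>> rows that would go into failed_jobs */
    public array $rows = [];

    public function log(string $connection, string $queue, string $payload, string $exception): void
    {
        // In Laravel this is an INSERT into the failed_jobs table.
        $this->rows[] = compact('connection', 'queue', 'payload', 'exception') + [
            'failed_at' => date('Y-m-d H:i:s'),
        ];
    }
}

// Simulated JobFailed event data:
$failer = new FailedJobLogger();
$failer->log('cloudtasks', 'default', '{"job":"App\\Jobs\\Example"}', 'RuntimeException: boom');
echo count($failer->rows), PHP_EOL; // 1
```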
Please let me know if you would consider adding this functionality. Thanks