stackkit / laravel-google-cloud-tasks-queue

Use Google Cloud Tasks as the queue driver for Laravel
MIT License

Enhancement - Failed jobs are written to the failed_jobs table (if configured) #12

Closed: tommerrett closed this issue 3 years ago

tommerrett commented 3 years ago

Problem

Currently, jobs fail and are not written to the failed_jobs table.

Enhancement request

Log failed jobs to the failed_jobs table.

How it is handled by Laravel's queue worker

With the native Laravel queue worker command, failed jobs are caught and logged to the failed_jobs table. Details of the Laravel implementation are in the Illuminate\Queue\Console\WorkCommand class.

Specifically, it listens for the JobFailed event and then calls $this->logFailedJob(), which performs the following:

$this->laravel['queue.failer']->log(
    $event->connectionName,
    $event->job->getQueue(),
    $event->job->getRawBody(),
    $event->exception
);

This functionality is not currently provided by the laravel-google-cloud-tasks-queue package, and it would be a great addition to track failed jobs.
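For illustration, here is a minimal sketch of how the same behaviour could be wired up via the JobFailed event; this is hypothetical, not the package's actual code:

    use Illuminate\Queue\Events\JobFailed;
    use Illuminate\Support\Facades\Event;

    // Listen for failed jobs and record them through Laravel's failer,
    // mirroring what WorkCommand::logFailedJob() does internally.
    Event::listen(JobFailed::class, function (JobFailed $event) {
        app('queue.failer')->log(
            $event->connectionName,
            $event->job->getQueue(),
            $event->job->getRawBody(),
            $event->exception
        );
    });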

Please let me know if you would consider adding this functionality. Thanks

marickvantuil commented 3 years ago

Hi, this is a nice idea and I didn't realize it was missing. I have started working on a first pass for this feature, and I'll let you know once it's nearing completion.

tommerrett commented 3 years ago

Thanks, let me know if you would like any support.

marickvantuil commented 3 years ago

Hey, I have tagged a v2.1.0-beta1 release that includes this new feature. Is there a chance you can help test this and verify that it matches your needs?

Next week I will be able to test it on one of my own projects.

marickvantuil commented 3 years ago

It now respects the max attempts per task, so once a task has reached its max attempts, the failed job should be logged (to the database, if configured).
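For context, a rough sketch of what such a check could look like in an HTTP task handler. Cloud Tasks sends the X-CloudTasks-TaskRetryCount header with each attempt; the function and its parameters here are placeholders, not the package's actual code:

    use Illuminate\Http\Request;

    // Hypothetical helper: has this task exhausted its allowed attempts?
    function shouldLogAsFailed(Request $request, int $maxAttempts): bool
    {
        // Cloud Tasks reports how many times the task has been retried.
        $attempts = (int) $request->header('X-CloudTasks-TaskRetryCount', 0);

        return $attempts >= $maxAttempts;
    }

When this returns true, the job can be recorded through app('queue.failer')->log(...) as in the earlier snippet.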

(screenshot attached, 2021-03-28)

tommerrett commented 3 years ago

Great, thanks @marickvantuil. I will test it and let you know.

On a side note, do you experience high latency from the Cloud Tasks API when creating tasks through this package? I am seeing >350ms latency and would expect it to be <100ms.
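One way to measure this, as a rough sketch (the connection name 'cloudtasks' and ExampleJob are placeholders):

    use Illuminate\Support\Facades\Queue;

    // Time a single task creation against the Cloud Tasks connection.
    $start = microtime(true);
    Queue::connection('cloudtasks')->push(new ExampleJob());
    $elapsedMs = (microtime(true) - $start) * 1000;

    logger()->info("Task creation took {$elapsedMs} ms");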

tommerrett commented 3 years ago

Hi @marickvantuil, I have tested this and it is working as expected in my application. I had to set the failed.driver value in config/queue.php to database-uuids, as shown below, because my failed_jobs table has a mandatory uuid column.

'failed' => [
    'database' => env('DB_CONNECTION', 'mysql'),
    'table' => 'failed_jobs',
    'driver' => 'database-uuids',
],
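For context, the failed_jobs migration that ships with recent Laravel versions declares the uuid column as unique and non-nullable, which is why the database-uuids driver is needed; roughly:

    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;

    Schema::create('failed_jobs', function (Blueprint $table) {
        $table->id();
        $table->string('uuid')->unique();   // the mandatory uuid column
        $table->text('connection');
        $table->text('queue');
        $table->longText('payload');
        $table->longText('exception');
        $table->timestamp('failed_at')->useCurrent();
    });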

You may want to add this to the README for later versions of Laravel. From my perspective, the branch is ready to be released out of beta. Thank you for all your work.

marickvantuil commented 3 years ago

Thank you. I have tagged 2.1.0. Apparently a 2.1.0 tag already existed (must have been a typo), so I had to remove it first and recreate it with these changes.

marickvantuil commented 3 years ago

> On a side note, do you experience high latency from the Cloud Tasks API when creating tasks through this package? I am seeing >350ms latency and would expect it to be <100ms.

Getting around 250ms. I found this in their documentation:

> Explicitly specifying a task ID enables task de-duplication. If a task's ID is identical to that of an existing task or a task that was deleted or executed recently then the call will fail with google.rpc.Code.ALREADY_EXISTS. If the task's queue was created using Cloud Tasks, then another task with the same name can't be created for ~1 hour after the original task was deleted or executed. If the task's queue was created using queue.yaml or queue.xml, then another task with the same name can't be created for ~9 days after the original task was deleted or executed.
>
> Because there is an extra lookup cost to identify duplicate task names, these tasks.create calls have significantly increased latency. Using hashed strings for the task id or for the prefix of the task id is recommended. Choosing task ids that are sequential or have sequential prefixes, for example using a timestamp, causes an increase in latency and error rates in all task commands. The infrastructure relies on an approximately uniform distribution of task ids to store and serve tasks efficiently.
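Purely to illustrate the docs' advice (and only relevant when naming tasks explicitly), a well-distributed ID versus the sequential anti-pattern might look like:

    use Illuminate\Support\Str;

    // Hashed, uniformly distributed ID: cheap for Cloud Tasks to store and serve.
    $goodTaskId = hash('sha256', (string) Str::uuid());

    // Sequential/timestamp prefix: the docs warn this raises latency and error rates.
    $badTaskId = 'task-' . now()->timestamp;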

However, this package isn't using explicitly defined task IDs, and the IDs I am seeing are already random (and, I think, without de-duplication):

(screenshot attached, 2021-05-11)

Maybe it is just that slow? 😬