Closed: 0xNekr closed this issue 2 months ago
I forgot, it's probably important: my job imports in package.json and tsconfig.json are set up like this:
“imports": {
“#jobs/*": ‘./app/jobs/*.js’,
“paths": {
“#jobs/*“: [”./app/jobs/*.js"],
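In full context, those fragments sit in the usual places (assuming the standard Node subpath-imports and TypeScript compilerOptions.paths layout around them):

package.json:

```json
{
  "imports": {
    "#jobs/*": "./app/jobs/*.js"
  }
}
```

tsconfig.json:

```json
{
  "compilerOptions": {
    "paths": {
      "#jobs/*": ["./app/jobs/*.js"]
    }
  }
}
```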
It looks like the issue is related to our $$filepath() method.
In simple terms, when a job is dispatched, the file path is stored in Redis for the worker to use when instantiating the job. However, it appears that the stored path or the one computed later is incorrect.
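Roughly, the flow looks like this (a simplified sketch only, not the package's actual implementation; the queue name, payload shape, and handle() call are assumptions):

```ts
import { Queue, Worker } from 'bullmq'

const connection = { host: '127.0.0.1', port: 6379 } // placeholder Redis connection

// Dispatch side: the job's module URL (derived from import.meta.url)
// is stored in Redis next to the payload.
const queue = new Queue('jobs', { connection })

export async function dispatch(jobFileUrl: string, data: unknown) {
  await queue.add('run', { filepath: jobFileUrl, data })
}

// Worker side: the stored path is imported again to instantiate the job.
// If the worker's filesystem layout differs from the dispatcher's
// (e.g. /app/build inside Docker vs. ~/breeve/adonis-backend on the host),
// this dynamic import fails.
new Worker(
  'jobs',
  async (job) => {
    const { filepath, data } = job.data
    const JobClass = (await import(filepath)).default
    await new JobClass().handle(data)
  },
  { connection }
)
```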
Could you please log the value of import.meta.url
in your job class and share what it looks like in your production setup?
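Something like this at the top of the job file would do (file and class names here are placeholders):

```ts
// app/jobs/some_job.ts (placeholder name)
// A top-level log runs whenever the module is loaded, so it prints the
// module URL as seen by the process that loads it.
console.log('job module URL:', import.meta.url)

export default class SomeJob {
  // ...existing job implementation
}
```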
Thanks for the quick reply!
I tried to log it directly in the $$filepath() function but I don't see anything. What is the best way to log it? And should I log it on the server side that manages the queue, or on the Adonis instance side that manages the API?
The logs will be shown from the instance that scheduled the job, not from the queue worker.
Ok cool, here's the log:
file:///app/build/app/jobs/send_auto_message_job.js
Could you confirm if both the worker and server are running on the same system, or is one running inside a Docker container while the other runs directly on the machine?
Because the ~/breeve/adonis-backend path doesn't match the /app/build path that the running application uses.
There's one running in Docker (the Adonis instance) and the queue is running directly on the server.
I hadn't thought of that, I can run the queue in Docker too!
Yeah, this is a known issue with how our system retrieves jobs. I'll look into a possible workaround for cases like this.
Ok great, thanks for the quick help, I'll be able to solve this problem easily now!
Small question related to running queues on Docker: if I scale this queue to several instances to handle volume, is there any risk of duplicated work, or will the load be distributed correctly?
This is managed directly by BullMQ, so there should be no risk of duplicated tasks.
📚 https://docs.bullmq.io/guide/workers/concurrency#multiple-workers
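Each instance simply runs the same worker code against the same queue name and Redis connection, and BullMQ guarantees a given job is handed to exactly one worker. A rough sketch (queue name and connection are placeholders):

```ts
import { Worker } from 'bullmq'

const connection = { host: 'your-redis-host', port: 6379 } // placeholder

// Every scaled-out container runs this same worker. Redis-backed locking in
// BullMQ ensures each job is processed by only one of them, so adding
// instances increases throughput without duplicating work.
new Worker(
  'jobs',
  async (job) => {
    // process job.data here
  },
  { connection, concurrency: 5 } // per-instance concurrency is also tunable
)
```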
After launching the workers via Docker, everything works perfectly. Thanks again for the help, I've saved precious time ahah.
I'm closing the issue!
Hi,
First of all, everything works perfectly on my local machine.
But once deployed on an EC2 Ubuntu instance, I get this error when an incoming message is processed:
My job is as follows
and is in /app/jobs/
I don't know if this error has ever happened to anyone. I also tested with a build, but the error is the same.
I have the same version of node on the server and on my local machine.
I'm a bit stuck on this problem. I don't know if this is the best way to deploy this in production.
My queue.ts config:
And I'm using ElastiCache from AWS as a Redis cluster.
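(Roughly, the connection part of it points at the ElastiCache endpoint, along these lines; the endpoint and options below are placeholders rather than my real values.)

```ts
import { Queue } from 'bullmq'

// Placeholder connection settings; the real ElastiCache endpoint is not shown.
export const queue = new Queue('jobs', {
  connection: {
    host: 'my-cluster.xxxxxx.cache.amazonaws.com', // ElastiCache endpoint (placeholder)
    port: 6379,
    // tls: {}, // needed if in-transit encryption is enabled on the cluster
  },
})
```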
Thanks in advance for your help!