RomainLanz / adonis-bull-queue

Queue system based on BullMQ for AdonisJS
MIT License
147 stars 26 forks

Cannot find module in "production" #54

Closed 0xNekr closed 2 months ago

0xNekr commented 2 months ago

Hi,

First of all, everything works perfectly on my local machine.

But once deployed on an EC2 Ubuntu instance, I get this error when an incoming message is processed:

~/breeve/adonis-backend$ npm run jobs:auto

> adonis-backend@0.0.1 jobs:auto
> node ace queue:listen --queue=autoMessages

(node:4498) [DEP0180] DeprecationWarning: fs.Stats constructor is deprecated.
(Use `node --trace-deprecation ...` to show where the warning was created)
(node:4498) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
{"level":30,"time":1726053405322,"pid":4498,"hostname":"ip-10-0-20-136","msg":"Queue [autoMessages] processing started..."}
{"level":50,"time":1726053416354,"pid":4498,"hostname":"ip-10-0-20-136","msg":"Job file:///app/build/app/jobs/send_auto_message_job.js was not able to be created"}
{"level":50,"time":1726053416354,"pid":4498,"hostname":"ip-10-0-20-136","err":{"type":"Error","message":"Cannot find module '/app/build/app/jobs/send_auto_message_job.js' imported from /home/ubuntu/breeve/adonis-backend/node_modules/@rlanz/bull-queue/build/src/queue.js","stack":"Error: Cannot find module '/app/build/app/jobs/send_auto_message_job.js' imported from /home/ubuntu/breeve/adonis-backend/node_modules/@rlanz/bull-queue/build/src/queue.js\n    at finalizeResolution (/home/ubuntu/breeve/adonis-backend/node_modules/ts-node/dist-raw/node-internal-modules-esm-resolve.js:366:11)\n    at moduleResolve (/home/ubuntu/breeve/adonis-backend/node_modules/ts-node/dist-raw/node-internal-modules-esm-resolve.js:801:10)\n    at Object.defaultResolve (/home/ubuntu/breeve/adonis-backend/node_modules/ts-node/dist-raw/node-internal-modules-esm-resolve.js:912:11)\n    at /home/ubuntu/breeve/adonis-backend/node_modules/ts-node/src/esm.ts:218:35\n    at entrypointFallback (/home/ubuntu/breeve/adonis-backend/node_modules/ts-node/src/esm.ts:168:34)\n    at /home/ubuntu/breeve/adonis-backend/node_modules/ts-node/src/esm.ts:217:14\n    at addShortCircuitFlag (/home/ubuntu/breeve/adonis-backend/node_modules/ts-node/src/esm.ts:409:21)\n    at resolve (/home/ubuntu/breeve/adonis-backend/node_modules/ts-node/src/esm.ts:197:12)\n    at nextResolve (node:internal/modules/esm/hooks:746:28)\n    at Hooks.resolve (node:internal/modules/esm/hooks:238:30)"},"msg":"Cannot find module '/app/build/app/jobs/send_auto_message_job.js' imported from /home/ubuntu/breeve/adonis-backend/node_modules/@rlanz/bull-queue/build/src/queue.js"}

My job is as follows:

import { inject } from '@adonisjs/core'
import { Job } from '@rlanz/bull-queue'
import User from '#models/user'
import ChatService from '#services/chat_service'
import Creator from '#models/creator'
import { AutoMessageType } from '#interfaces/message'

interface SendAutoMessageJobPayload {
  user: User
  creator: Creator
  messageType: AutoMessageType
}

@inject()
export default class SendAutoMessageJob extends Job {
  constructor(protected chatService: ChatService) {
    super()
  }

  static get $$filepath() {
    return import.meta.url
  }

  async handle(payload: SendAutoMessageJobPayload) {
    const { user, creator, messageType } = payload

    await this.chatService.sendAutoMessage(user, creator, messageType)
  }

  /**
   * Optional hook, called once the number of attempts has been
   * exceeded and the job is marked as failed.
   */
  async rescue(_payload: SendAutoMessageJobPayload) {}
}

and it lives in /app/jobs/.

I don't know if anyone has run into this error before. I also tested with a build, but the error is the same.

I have the same version of node on the server and on my local machine.

I'm a bit stuck on this problem. I don't know if this is the best way to deploy this in production.

My queue.ts config:

import env from '#start/env'
import { defineConfig } from '@rlanz/bull-queue'

export default defineConfig({
  defaultConnection: {
    host: env.get('QUEUE_REDIS_HOST'),
    port: env.get('QUEUE_REDIS_PORT'),
    password: env.get('QUEUE_REDIS_PASSWORD'),
  },

  queue: {},

  worker: {},

  jobs: {
    /*
    |--------------------------------------------------------------------------
    | Default Job Attempts
    |--------------------------------------------------------------------------
    |
    | The default number of attempts after which the job will be marked as
    | failed. You can also set the number of attempts on individual jobs
    | as well.
    |
    | @see https://docs.bullmq.io/guide/retrying-failing-jobs
    |
    */
    attempts: 3,

    /*
    |--------------------------------------------------------------------------
    | Auto-Removal of Jobs
    |--------------------------------------------------------------------------
    |
    | Numbers of jobs to keep in the completed and failed queues before they
    | are removed. This is important to keep the size of these queues in
    | control. Set the value to false to disable auto-removal.
    |
    | @see https://docs.bullmq.io/guide/queues/auto-removal-of-jobs
    |
    */
    removeOnComplete: 100,
    removeOnFail: 100,
  },
})

And I'm using AWS ElastiCache as my Redis cluster.

Thanks in advance for your help!

0xNekr commented 2 months ago

I forgot something that's probably important:

my job import mappings in package.json and tsconfig.json:

"imports": {
    "#jobs/*": "./app/jobs/*.js",

"paths": {
    "#jobs/*": ["./app/jobs/*.js"],
RomainLanz commented 2 months ago

It looks like the issue is related to our $$filepath() method.

In simple terms, when a job is dispatched, the file path is stored in Redis for the worker to use when instantiating the job. However, it appears that the stored path or the one computed later is incorrect.

Could you please log the value of import.meta.url in your job class and share what it looks like in your production setup?

0xNekr commented 2 months ago

Thanks for the quick reply!

I tried logging it directly in the $$filepath() getter, but I don't see anything. What is the best way to log it? And should I log it on the server that runs the queue, or on the Adonis instance that serves the API?

RomainLanz commented 2 months ago

The logs will be shown from the instance that scheduled the job, not from the queue worker.

0xNekr commented 2 months ago

Ok cool, here's the log:

file:///app/build/app/jobs/send_auto_message_job.js

RomainLanz commented 2 months ago

Could you confirm if both the worker and server are running on the same system, or is one running inside a Docker container while the other runs directly on the machine?

Because the path ~/breeve/adonis-backend doesn't match the /app/build prefix that the running application reports.
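To make the mismatch concrete, a small sketch using the path from the log above: the worker converts the stored URL back into a filesystem path and looks it up locally.

```typescript
import { existsSync } from 'node:fs'
import { fileURLToPath } from 'node:url'

// Path stored in Redis by the dispatcher running inside the container:
const storedUrl = 'file:///app/build/app/jobs/send_auto_message_job.js'

// The worker resolves the URL back to a plain filesystem path...
const workerPath = fileURLToPath(storedUrl)
console.log(workerPath) // /app/build/app/jobs/send_auto_message_job.js

// ...but on the bare EC2 host the build lives under
// /home/ubuntu/breeve/adonis-backend/build instead, so the lookup fails
// and the dynamic import throws "Cannot find module".
console.log(existsSync(workerPath) ? 'found' : 'missing on this host')
```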

0xNekr commented 2 months ago

One runs in Docker (the Adonis instance) and the queue runs directly on the server.

I hadn't thought of that; I can run the queue in Docker too!

RomainLanz commented 2 months ago

Yeah, this is a known issue with how our system retrieves jobs. I'll look into a possible workaround for this situation.

0xNekr commented 2 months ago

Ok great, thanks for the quick help, I'll be able to solve this problem easily now!

0xNekr commented 2 months ago

A small question related to running queues on Docker: if I scale this queue to several instances to handle volume, is there any risk of duplicated work, or will the load be distributed correctly?

RomainLanz commented 2 months ago

A small question related to running queues on Docker: if I scale this queue to several instances to handle volume, is there any risk of duplicated work, or will the load be distributed correctly?

This is managed directly by BullMQ, so there should be no risk of duplicated tasks.

📚 https://docs.bullmq.io/guide/workers/concurrency#multiple-workers
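Illustratively (plain TypeScript, no BullMQ dependency): each BullMQ job is atomically claimed by at most one worker via a lock held in Redis, so scaling out distributes jobs rather than duplicating them. Here a Map stands in for the Redis lock.

```typescript
// Simplified model of job locking: Redis hands each job to at most one
// worker; a Map plays the role of Redis in this sketch.
const locks = new Map<string, string>()

// Claim a job for a worker; returns false if another worker got it first.
function claim(jobId: string, workerId: string): boolean {
  if (locks.has(jobId)) return false
  locks.set(jobId, workerId)
  return true
}

const jobs = ['job-1', 'job-2', 'job-3', 'job-4']
const processedBy: Record<string, string> = {}

// Two workers race over the same queue...
for (const worker of ['worker-a', 'worker-b']) {
  for (const job of jobs) {
    if (claim(job, worker)) processedBy[job] = worker
  }
}

// ...and every job ends up processed exactly once, never twice.
console.log(processedBy)
```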

0xNekr commented 2 months ago

After launching the workers via Docker, everything works perfectly. Thanks again for the help, you've saved me precious time, ahah

I'm closing the issue!