novuhq / novu

Open-Source Notification Platform. Embeddable Notification Center, E-mail, Push and Slack Integrations.
https://novu.co

Oversized Docker Images and Inefficient Monorepo Construction in Novu Project #3897

Open player0x3 opened 1 year ago

player0x3 commented 1 year ago

Description

The Docker images for several of the Novu services are very large, which adds storage and runtime costs. In addition, the monorepo's npm build has noticeable inefficiencies that introduce unnecessary redundancy and slow the build down.

Suggestion

I propose that we optimize the Docker image sizes across the Novu Project and improve the monorepo build.
For the image sizes, the first step is to analyze which factors drive the size and find appropriate solutions.
For the monorepo build, we could take advantage of tree shaking in the NestJS build process. That would slim down the build output, improve build efficiency, and in turn reduce the Docker image size.
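To make the direction concrete, a multi-stage build is the usual way to keep runtime images small. The following is only a rough sketch: it assumes a single service with a standard `npm run build` script emitting `dist/`, and it does not reflect Novu's actual monorepo layout or tooling.

```dockerfile
# ---- build stage: full toolchain, dev dependencies, TypeScript compile ----
FROM node:16-alpine AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# ---- runtime stage: only production dependencies and compiled output ----
FROM node:16-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --omit=dev   # npm >= 8; use --production on older npm
COPY --from=builder /usr/src/app/dist ./dist
CMD ["node", "dist/main.js"]
```

The effect is that dev dependencies, sources, and build caches never reach the final image, which is typically where most of the size comes from in Node.js images.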

scopsy commented 1 year ago

Hi @luz33c, thank you for your submission! One of the main problems is the passing of redundant context: we have added smarter context-passing logic to the API Docker image, but haven't yet applied it to the rest of the services.
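For readers following along: the "context" here is the Docker build context sent to the daemon. One common way to keep it small is a `.dockerignore`; the entries below are purely illustrative and not taken from Novu's repository, and the thread does not say whether this is the mechanism the "smarter context passing logic" refers to.

```
# .dockerignore - keep the build context small (illustrative entries only)
node_modules
**/node_modules
dist
**/dist
.git
docs
*.md
```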

player0x3 commented 1 year ago

Then there's the runtime memory consumption, which is a bit too high: roughly 200 MB of RAM for the api service, another 200 MB for the worker service, plus ws and the others. That is unacceptable for a message notification app.

L-U-C-K-Y commented 1 year ago

Then there's the runtime memory consumption, which is a bit too high: roughly 200 MB of RAM for the api service, another 200 MB for the worker service, plus ws and the others. That is unacceptable for a message notification app.

IMO this sounds fine for NestJS services. We run many NestJS microservices and most of them consume similar amounts of RAM.

scopsy commented 1 year ago

@L-U-C-K-Y that makes sense. We can try to investigate what the biggest memory consumers are, but I'm not sure when I will be able to look at this due to time constraints.
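A starting point for such an investigation could be a small probe like the one below, which logs Node.js memory statistics at an interval so the api, worker, and ws processes can be compared under load. This is an illustrative sketch, not code from the Novu repository.

```ts
// memory-probe.ts - periodically log Node.js memory usage so the different
// services (api, worker, ws, ...) can be compared under load.
function logMemory(label: string): void {
  const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
  const toMb = (bytes: number) => (bytes / 1024 / 1024).toFixed(1);
  console.log(
    `[${label}] rss=${toMb(rss)}MB heapTotal=${toMb(heapTotal)}MB ` +
      `heapUsed=${toMb(heapUsed)}MB external=${toMb(external)}MB`,
  );
}

// Log every 30 seconds; comparing rss with heapUsed shows whether memory sits
// in the V8 heap (application objects) or outside it (buffers, native modules).
setInterval(() => logMemory('api'), 30_000);
```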