prisma / prisma

Next-generation ORM for Node.js & TypeScript | PostgreSQL, MySQL, MariaDB, SQL Server, SQLite, MongoDB and CockroachDB
https://www.prisma.io
Apache License 2.0

High memory usage on Prisma 5.11.0 #23661

Open clemclem93 opened 7 months ago

clemclem93 commented 7 months ago

Bug description

I was trying to migrate from Prisma 4.2.1 to Prisma 5.11.0, but the new version seems to use more memory than before, possibly with a memory leak.

My application is deployed on Heroku, and its dashboard shows that with version 4.2.1 it used about 700 MB on average, while with version 5.11.0 it used about 1.5 GB on average and kept increasing continuously until all memory was consumed and a new instance had to be started.

Here is the memory usage before the migration (31fb79a7 is the commit with version 5.11.0):

[Screenshot 2024-03-28: Heroku memory usage graph]

Here we can see the difference after reverting to version 4.2.1:

[Screenshot 2024-03-29: Heroku memory usage graph after the revert]

Do you have any idea why this version uses more memory and whether there is a memory leak?

How to reproduce

  1. Install Prisma 4.2.1: yarn add prisma@4.2.1
  2. Observe the memory usage
  3. Migrate to Prisma 5.11.0 (typical upgrade commands are sketched below this list)
  4. Observe the memory usage
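
For context, a typical upgrade to 5.11.0 bumps both the CLI and the client and regenerates the client. The commands below are the usual ones, not taken from the reporter's setup:

yarn add prisma@5.11.0 @prisma/client@5.11.0
yarn prisma generate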

Environment & setup

Prisma Version

prisma                  : 5.11.0
@prisma/client          : 5.11.0
Computed binaryTarget   : darwin-arm64
Operating System        : darwin
Architecture            : arm64
Node.js                 : v16.20.2
Query Engine (Node-API) : libquery-engine efd2449663b3d73d637ea1fd226bafbcf45b3102 (at node_modules/@prisma/engines/libquery_engine-darwin-arm64.dylib.node)
Schema Engine           : schema-engine-cli efd2449663b3d73d637ea1fd226bafbcf45b3102 (at node_modules/@prisma/engines/schema-engine-darwin-arm64)
Schema Wasm             : @prisma/prisma-schema-wasm 5.11.0-15.efd2449663b3d73d637ea1fd226bafbcf45b3102
Default Engines Hash    : efd2449663b3d73d637ea1fd226bafbcf45b3102
Studio                  : 0.499.0
jkomyno commented 7 months ago

Hi @clemclem93, can you please share your schema.prisma file, and show us how you're instantiating PrismaClient from @prisma/client? Thanks.

Also, have you tried any other Prisma version in between? We usually recommend upgrading from Prisma 4 -> 5 by trying out prisma@5.0.0 first. Trying more Prisma versions would potentially help us identify the changes that led to the memory increase observed in your Heroku graph.

clemclem93 commented 7 months ago

Hi @jkomyno, the schema.prisma is a bit complex: it has 79 models and 42 enums, and uses one-to-one and one-to-many relations, @@index, @unique, @default(now()), @default(uuid()), @default(false), @default(0), and @updatedAt.

Here is the start:

generator client {
  provider = "prisma-client-js"
}

I don't use any preview features anymore. I was using orderByNulls with version 4.2.1.

and here is how I instantiate it:

import { PrismaClient } from "@prisma/client";

import activity from "./prismaCustomParams/activity";
import comment from "./prismaCustomParams/comment";
import recommendation from "./prismaCustomParams/recommendation";
import tag from "./prismaCustomParams/tag";
import tagCategory from "./prismaCustomParams/tagCategory";
import user from "./prismaCustomParams/user";

const prisma = new PrismaClient();

prisma.$use(async (params, next) => {
  switch (params.model) {
    case "User": {
      return next({ ...params, ...user(params) });
    }
    case "TagCategory": {
      return next({ ...params, ...tagCategory(params) });
    }
    case "Tag": {
      return next({ ...params, ...tag(params) });
    }
    case "Comment": {
      return next({ ...params, ...comment(params) });
    }
    case "Recommendation": {
      return next({ ...params, ...recommendation(params) });
    }
    case "Activity": {
      return next({ ...params, ...activity(params) });
    }
  }

  return next(params);
});

export default prisma;

All prismaCustomParams files handle soft-deleted rows (they only fetch rows where deletedAt is null, and instead of doing a delete, I use an update to set deletedAt). I followed this documentation to handle soft delete: https://www.prisma.io/docs/orm/prisma-client/client-extensions/middleware/soft-delete-middleware (with option 2 for step 3).
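
For readers following the same docs, a minimal sketch of what one of these prismaCustomParams modules might look like (the contents of the user module below are an assumption based on the linked soft-delete middleware page, not the reporter's actual code):

import { Prisma } from "@prisma/client";

// Hypothetical sketch of prismaCustomParams/user: reads only see rows where
// deletedAt is null, and deletes become updates that set deletedAt.
export default function user(params: Prisma.MiddlewareParams): Partial<Prisma.MiddlewareParams> {
  if (params.action === "findUnique" || params.action === "findFirst") {
    return {
      action: "findFirst",
      args: { ...params.args, where: { ...params.args?.where, deletedAt: null } },
    };
  }
  if (params.action === "findMany") {
    return {
      args: { ...params.args, where: { ...params.args?.where, deletedAt: null } },
    };
  }
  if (params.action === "delete") {
    return {
      action: "update",
      args: { where: params.args?.where, data: { deletedAt: new Date() } },
    };
  }
  return {};
}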

To give you more context, I use Prisma for a GraphQL API.

I tested version 5.0.0 and I don't see the high memory usage of version 5.11.0. I will try other intermediate versions and keep you posted.

clemclem93 commented 7 months ago

Hello @jkomyno

I reproduced the memory issue with Prisma version 5.3.1.

[Screenshot: memory usage graph on Prisma 5.3.1]

I also tested version 5.1.0 and the issue does not appear as suddenly as it did with 5.11.0 or 5.3.1. I have another issue on that version (query related, not a memory issue: it triggers a PrismaClientValidationError that I didn't reproduce in the other versions; I will create a separate issue). I'm not sure yet, but it seems like 5.1.0 is progressively using more and more memory too. I prefer to keep running version 5.0.0 for the whole week to observe whether I see similar behavior.

But I hope this gives you more clues.

jkomyno commented 7 months ago

Thanks for the progress @clemclem93!

clemclem93 commented 6 months ago

Hi @jkomyno, FYI we don't have the memory issue with Prisma 5.2.0 either.

DevangMstryls commented 5 months ago

I am also experiencing high memory usage after upgrading from v4.4.0 to v5.11.0 in production. We found out about this when all our Node.js apps running different services started to restart sporadically on all servers.

Questions:

  1. Is this primarily because of read replicas, or would it happen even without using a read replica?
  2. Does v5.x.x require higher-memory systems in general? We are running an AWS
  3. Is this happening with the latest v5.14.0 as well?
janpio commented 5 months ago

We are not aware of a general increase in memory usage for most of our users, except for the report here in this issue. Optimally we would have a reproduction that clearly shows the increase, so we can start digging in and figure out what is going on.

@DevangMstryls It would be interesting for you to also try other 5.x versions as clemclem above did.

GDTNguyen commented 3 months ago

I'm having this same issue in production too. It happens when all connections to the database are used up and the server has to wait for open connections before executing the Prisma queries.
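
As a side note for anyone hitting pool exhaustion, Prisma's connection pool can be tuned through the connection string; the sketch below uses placeholder values, not recommendations for this case:

# connection_limit caps the pool size; pool_timeout (seconds) is how long a
# query waits for a free connection before failing
DATABASE_URL="postgresql://user:password@host:5432/db?connection_limit=10&pool_timeout=20"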

clemclem93 commented 3 months ago

Hope this can help: I didn't have the issue with Prisma 5.17.0.

GDTNguyen commented 3 months ago

5.17.0

Thanks for the info, I've just upgraded from 4.**; hopefully it solves the issue. I'll monitor memory usage over a few days.

mohammedpatla commented 2 months ago

I am having a similar issue. I am still debugging it, but we are running NestJS and GraphQL with Node 20.15.0 and Prisma v5.19.0 (we recently bumped our deps to make sure old deps were not causing a leak). I run our service inside a Bullseye container (we used to run on Alpine, but those containers usually crashed because of memory spillage that the garbage collector never caught), and I notice that our RSS grows by a big margin; the issue is that it usually just increases and never decreases (hence I suspect a memory issue). I am still assessing whether this warrants a new issue and trying to find a way to reproduce it.

Update: After a bunch of troubleshooting, I can't seem to reproduce it outside our prod env, likely because far more data is processed on prod than on staging or locally. One thing I can say for sure is that heap usage does not go above 2-3 GB at most and clears out afterwards, which means the active process (in our case NestJS with Prisma) is not the issue, at least on the heap. It could be some secondary allocation: since we have so many async operations (both Prisma and non-Prisma), maybe one of those lives in RSS. The way I temporarily mitigated this is to make sure we use Bookworm or Bullseye container images and to set --max-old-space-size on our Node.js process to 50% of the available system RAM, so that other processes have enough space to run. So even though RSS blows up, the garbage collector kicks in and clears that memory.
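
For anyone wanting to apply the same mitigation, the flag can be passed directly or via NODE_OPTIONS; a sketch assuming a 4 GB container and a NestJS-style dist/main.js entry point (both assumptions):

# cap V8's old space at ~50% of an assumed 4 GB container
node --max-old-space-size=2048 dist/main.js

# or, without changing the start command
NODE_OPTIONS="--max-old-space-size=2048" node dist/main.js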

leppaott commented 1 month ago

> We are not aware of a general increase in memory usage for most of our users, except for the report here in this issue. Optimally we would have a reproduction that clearly shows the increase, so we can start digging in and figure out what is going on.
>
> @DevangMstryls It would be interesting for you to also try other 5.x versions as clemclem above did.

For a reproduction, would a timer loop (setInterval) that runs findMany() and a mapped array of update()/upsert() calls be enough? RSS should keep growing over time. I haven't tried it. Other than that, #25371 seems to be the same issue.
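
A minimal version of that idea could look something like the sketch below; the Post model, its id/title fields, the batch size, and the one-second interval are all assumptions, not taken from anyone's schema in this thread:

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Naive load loop: read a batch, write it back, and log RSS so growth over time is visible.
setInterval(async () => {
  const posts = await prisma.post.findMany({ take: 100 });
  await Promise.all(
    posts.map((post) =>
      prisma.post.update({
        where: { id: post.id },
        data: { title: post.title },
      })
    )
  );
  const rssMb = process.memoryUsage().rss / 1024 / 1024;
  console.log(`rss: ${rssMb.toFixed(1)} MB`);
}, 1000);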