Description
When the crud-service starts, it may end up opening lots of MongoDB connections that will remain unused (this also happens when the service receives lots of requests and needs more connections to the database).
There are two reasons why this happens:
1. No maxIdleTimeMs

The MongoDB connection configuration is missing the maxIdleTimeMs option, which defaults to 0 and makes connections remain alive forever.

2. createIndexes
At boot, the routine that creates indexes spawns one promise per collection. The promises start immediately, meaning that on a project with lots of collections (say 20) there are 20 concurrent calls to collection.indexes()
https://github.com/mia-platform/crud-service/blob/8bdef5c2313b1577ed18babfde9d79aa998ca431/index.js#L334-L433
I guess the Promise.all + splice is meant to cap concurrent promises, but that's not how Node works: these lines only wait for the promises to settle, they don't start them 5 at a time
https://github.com/mia-platform/crud-service/blob/8bdef5c2313b1577ed18babfde9d79aa998ca431/index.js#L431-L433
I think something like p-limit should do the trick
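To illustrate the idea, here is a minimal hand-rolled sketch of what p-limit provides (a cap on how many async tasks run at once); the function name and shape are illustrative, not the actual crud-service code:

```javascript
// Minimal concurrency limiter, sketching what p-limit provides:
// run async task factories with at most `max` of them in flight at once.
function runLimited(tasks, max) {
  let active = 0
  let nextIndex = 0
  const results = new Array(tasks.length)
  return new Promise((resolve, reject) => {
    const launch = () => {
      if (nextIndex === tasks.length && active === 0) {
        return resolve(results)
      }
      // start tasks until the cap is reached or none are left
      while (active < max && nextIndex < tasks.length) {
        const i = nextIndex++
        active++
        tasks[i]()
          .then(result => { results[i] = result })
          .catch(reject)
          .finally(() => {
            active--
            launch() // a slot freed up, start the next pending task
          })
      }
    }
    launch()
  })
}
```

With something like this, the boot routine could wrap each collection's call in a factory, e.g. `runLimited(collections.map(c => () => c.indexes()), 5)`, so only 5 requests hit MongoDB at a time instead of one per collection all at once.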
This command is the one responsible for the connection creation: since there is a request spike due to the uncontrolled promises, the connection pool grows, and the lack of maxIdleTimeMs causes those connections to remain open forever
https://github.com/mia-platform/crud-service/blob/8bdef5c2313b1577ed18babfde9d79aa998ca431/lib/createIndexes.js#L29
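For reference, a sketch of the option the fix would introduce; note the MongoDB Node.js driver spells it maxIdleTimeMS (value in milliseconds), and the 1000 here is just the value used in the measurements below, not a recommended default:

```javascript
// Illustrative pool options for the MongoDB Node.js driver.
// maxIdleTimeMS = 0 (the driver default) never closes idle connections,
// so any pool growth caused by a spike becomes permanent.
const mongoPoolOptions = {
  maxIdleTimeMS: 1000, // close connections that sit idle for more than 1s
}

// These options would be passed to the driver, e.g.:
// new MongoClient(connectionString, mongoPoolOptions)
```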
Environment
I'm using CRUD Service version 6.9.2
Minimal Reproduction
Run MongoDB locally
Connect and watch for opened connections
Check the connection count before the service starts:
Starting the service with 50 collections ends up with:
Verify that the proposed solutions bring the current connections to much lower values, specifically:
- a simple createIndexes fix (using for/await instead of map + Promise.all) brings connections down to ~11
- with maxIdleTimeMs set to 1000 (1s), the opened connections (even with the spike caused by the map + Promise.all implementation) get closed as soon as a Mongo operation is performed

Note: to run request benchmarks I've used vegeta (tuned with different rates, from 1/1s to 5000/1s) to verify connection increase behaviours

Proposed solution
I'd go with both changes: add a configurable maxIdleTimeMs option and change the createIndexes routine.
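For the createIndexes side, the simplest variant measured above (sequential for/await instead of map + Promise.all) could look roughly like this; the collection shape and the createIndexes call are placeholders, not the actual crud-service code:

```javascript
// Sketch: create indexes one collection at a time instead of firing
// every collection's request in parallel, keeping the pool small.
async function createIndexesSequentially(collections) {
  const created = []
  for (const collection of collections) {
    // awaiting inside the loop means at most one in-flight request
    const result = await collection.createIndexes()
    created.push(result)
  }
  return created
}
```

This trades boot-time parallelism for a bounded pool; the p-limit approach mentioned above sits between the two extremes.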