oracle / node-oracledb

Oracle Database driver for Node.js maintained by Oracle Corp.
http://oracle.github.io/node-oracledb/

Loop request has increased execution time #1529

Closed agungsv closed 1 year ago

agungsv commented 1 year ago
  1. What versions are you using?

Database: Oracle DB 11.2.0.3.0
process.platform: linux
process.version: v16.14.0
process.arch: x64
versionString: 5.2.0
oracleClientVersionString: 18.5.0.0.0

  2. Describe the problem

I want to know how to run parallel query executions correctly. I am currently using a connection pool, and the requests come from an async for-loop in another API server using axios. I confirmed that the caller issues the requests in parallel, but the results appear to come back in sequence: the execution time increases with each request (see the first three and last three requests below).

I expected the execution times to be similar across requests, since they are all issued in parallel; when I run the requests individually, each returns in under 300 ms.

GET /inventory-items/352331/price?pricelist_id=9036&uom_code=PCS 200 262.090 ms - 247
GET /inventory-items/348414/price?pricelist_id=556764&uom_code=SET 200 270.480 ms - 251
GET /inventory-items/348411/price?pricelist_id=556764&uom_code=SET 200 274.747 ms - 251
.........
GET /inventory-items/363343/price?pricelist_id=9036&uom_code=BOX 200 860.603 ms - 244
GET /inventory-items/363341/price?pricelist_id=9036&uom_code=BOX 200 914.519 ms - 249
GET /inventory-items/342674/price?pricelist_id=9036&uom_code=PCS 200 1798.504 ms - 250
  3. Include a runnable Node.js script that shows the problem.

Pool creation: pretty much the same as https://github.com/oracle/node-oracledb/blob/main/examples/connectionpool.js

Loop request in the other API:

const getInvItemPrice = async (orderLine) => {
  const queryParamsPrice = {
    pricelist_id: orderLine.pricelist_id,
    uom_code: orderLine.uom_code
  };

  const getInvItemPriceRes = await axios.get('/item-price/' + orderLine.item_id, { params: queryParamsPrice });

  if (getInvItemPriceRes.data.data && getInvItemPriceRes.data.data.length) {
    const invItemPrice = getInvItemPriceRes.data.data[0];

    linePrice = invItemPrice.OPERAND_INCL_TAX || 0;
    linePriceExclTax = invItemPrice.OPERAND || 0;
  }
};

const promises = [];
for (const orderLine of getOrderLinesRes) {
  promises.push(getInvItemPrice(orderLine));
}
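For reference, the start-everything-then-wait pattern can be sketched self-contained; `fetchOne` below is a hypothetical stand-in for the axios call:

```javascript
// Sketch: start all requests at once, then await them together with
// Promise.all. fetchOne is a hypothetical stand-in for axios.get.
const fetchOne = (id) =>
  new Promise((resolve) => setTimeout(() => resolve({ id, price: id * 10 }), 20));

async function fetchAllPrices(ids) {
  const promises = ids.map((id) => fetchOne(id)); // all started here, in parallel
  return Promise.all(promises);                   // resolves when the slowest finishes
}

fetchAllPrices([1, 2, 3]).then((rows) =>
  console.log(rows.map((r) => r.price))); // prints [ 10, 20, 30 ]
```

Without the final `await Promise.all(promises)` (or equivalent), the loop only starts the requests; nothing observes when they complete.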

Oracle model:

findItemPriceByPricelist: async (inventoryItemId, pricelistId, uomCode) => {
    let connection;
    try{
      connection = await oracledb.getConnection();

      const sql = "[SELECT SQL]";

      const bind = {
        inventoryItemId: inventoryItemId,
        pricelistId: pricelistId,
        itemUomCode: uomCode
      };

      const query = await connection.execute(sql, bind);

      return query.rows;
    } catch (err){
      throw err;
    } finally{
      if (connection){
        await connection.close();
      }
    }
  }
cjbj commented 1 year ago

How big's the pool? How big is UV_THREADPOOL_SIZE?

agungsv commented 1 year ago

Hi @cjbj , I set poolMin to 0, poolMax to 50, and poolIncrement to 1, with UV_THREADPOOL_SIZE set to 54. The example above has a total of 48 requests.

anthony-tuininga commented 1 year ago

A few comments/questions:

agungsv commented 1 year ago

Hi @anthony-tuininga ,

agungsv commented 1 year ago

Hi @anthony-tuininga , I've tested with an increased poolMin and the results are still the same: the execution time still increases with each loop. There are some odd results though; some requests have a lower execution time but are printed in the terminal after slower ones. See the log below.

GET /inventory-items/348411/price?pricelist_id=556764&uom_code=SET 200 23.261 ms - 251
GET /inventory-items/359842/price?pricelist_id=556764&uom_code=SET 200 23.671 ms - 250
GET /inventory-items/348414/price?pricelist_id=556764&uom_code=SET 200 25.488 ms - 251
.........
GET /inventory-items/352331/price?pricelist_id=9036&uom_code=PCS 200 194.997 ms - 247
GET /inventory-items/363355/price?pricelist_id=9036&uom_code=BOX 200 169.819 ms - 247
GET /inventory-items/363344/price?pricelist_id=9036&uom_code=SET 200 179.821 ms - 243
.........
GET /inventory-items/357155/price?pricelist_id=9036&uom_code=BOX 200 253.766 ms - 252
GET /inventory-items/363341/price?pricelist_id=9036&uom_code=BOX 200 293.007 ms - 249
GET /inventory-items/357187/price?pricelist_id=9036&uom_code=BOX 200 336.574 ms - 248

Don't mind the lower execution times; I simplified the query so it returns much faster for testing.

Edit: I increased poolMin to 20 and to 40 with the same result. The only difference is that the first request runs faster.

cjbj commented 1 year ago

There's not a lot of detail for us to comment on. Are you running Node.js as both client and server? (Is that what you mean by 'another API'?) We can't check that you have set UV_THREADPOOL_SIZE correctly, and it's easy to get wrong. Nor can we see your timing code. It looks like you are including the GET times, but you should start by checking the times the driver itself took for the DB operations.

agungsv commented 1 year ago

Hi @cjbj , yes, I run two separate Node.js projects: one that does the loop of requests using axios, and one that receives the requests and connects to Oracle DB. I believe I set UV_THREADPOOL_SIZE correctly; at least console.log(process.env.UV_THREADPOOL_SIZE) does print 54.

For the timing results, those come from Morgan; I set it up with app.use(logger('dev')).

I'm sorry for the lack of detail; I'm still new to parallel handling and a bit confused. So from the node-oracledb side, using the connection pool settings should allow requests to run in parallel, one per pool.getConnection() call, right? If so, I'll take that as the answer and look for other probable causes.

cjbj commented 1 year ago

I do set UV_THREADPOOL_SIZE correctly, at least it does print the value when I console.log(process.env.UV_THREADPOOL_SIZE), which return 54.

If you check our doc, that's no guarantee it was set correctly! Node.js doesn't make it easy for us, so it's better that we see how you set it.
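For the record, the usual safe approach is to have the variable in the environment before the Node.js process starts, so libuv sees it when it creates its worker thread pool; a sketch (the server path is illustrative):

```shell
# Export UV_THREADPOOL_SIZE before Node.js starts so libuv picks it up
# when the worker thread pool is created.
export UV_THREADPOOL_SIZE=54
# node ./bin/www    # then start the server (path illustrative)
echo "UV_THREADPOOL_SIZE=$UV_THREADPOOL_SIZE"
```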

One connection can only execute one SQL statement at a time. But if you open multiple connections, you can use them in parallel. (But see @anthony-tuininga's last bullet point).
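That serialization can be illustrated with a toy model (simulated connections; each "statement" just sleeps 50 ms, and all names are illustrative):

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// A simulated connection that can only run one statement at a time:
// each execute() queues behind the previous one on the same connection.
function makeConnection() {
  let busy = Promise.resolve();
  return {
    execute() {
      const run = busy.then(() => sleep(50));
      busy = run;
      return run;
    },
  };
}

async function timeIt(fn) {
  const t0 = Date.now();
  await fn();
  return Date.now() - t0;
}

(async () => {
  // Three statements on ONE connection run back to back (~150 ms).
  const one = makeConnection();
  const serial = await timeIt(() =>
    Promise.all([one.execute(), one.execute(), one.execute()]));

  // Three statements on THREE connections overlap (~50 ms).
  const three = [makeConnection(), makeConnection(), makeConnection()];
  const parallel = await timeIt(() => Promise.all(three.map((c) => c.execute())));

  console.log(`one connection: ~${serial} ms, three connections: ~${parallel} ms`);
})();
```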

agungsv commented 1 year ago

Hi @cjbj ,

For the way I set UV_THREADPOOL_SIZE: I set it in a .env file and load it with nodemon -r dotenv/config ./bin/www. My .env file is below:

#App variables
NODE_ENV=Development
APP_PORT=3001
STR_LIMIT=10mb
UV_THREADPOOL_SIZE=54

#Redis variables
REDIS_HOST=localhost
REDIS_PORT=7001
REDIS_EXPIRE=86400
REDIS_PREFIX=TestDev

#Database variables (all required)
DB_CONN_STRING=XXXXXX
DB_USER=XXXXX
DB_PASSWORD=XXXX
DB_POOL_MAX=50
DB_POOL_INCREMENT=1
DB_POOL_MIN=0
DB_POOL_TIMEOUT=60

I do follow the doc, and if I check the processes using ps -ef | grep node, I can see two main processes (both from the Oracle API project). Below is the pstack output for the process node -r dotenv/config ./bin/www:

Thread 62 (Thread 0x7fe978026600 (LWP 28257) "node"):
#0  0x00007fe977ed8635 in clock_nanosleep@GLIBC_2.2.5 () from /lib64/libc.so.6
#1  0x00007fe977edcea7 in nanosleep () from /lib64/libc.so.6
#2  0x00007fe94d345cc3 in kpucpincrtime () from /usr/lib/oracle/18.5/client64/lib/libclntsh.so
#3  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#4  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 61 (Thread 0x7fe92dbd2640 (LWP 28256) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e8c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007fe97b8f836d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007fe97b8ebb6e in worker () from /lib64/libuv.so.1
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
...........
Thread 10 (Thread 0x7fe9577fe640 (LWP 28205) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e8c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007fe97b8f836d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007fe97b8ebb6e in worker () from /lib64/libuv.so.1
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 9 (Thread 0x7fe957fff640 (LWP 28204) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e8c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007fe97b8f836d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007fe97b8ebb6e in worker () from /lib64/libuv.so.1
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 8 (Thread 0x7fe964bff640 (LWP 28203) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e8c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007fe97b8f836d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007fe97b8ebb6e in worker () from /lib64/libuv.so.1
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 7 (Thread 0x7fe97b93e640 (LWP 28201) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e94c50 in __new_sem_wait_slow64.constprop.0 () from /lib64/libc.so.6
#2  0x00007fe97b8f83f2 in uv_sem_wait () from /lib64/libuv.so.1
#3  0x00007fe979034cd1 in node::inspector::(anonymous namespace)::StartIoThreadMain(void*) () from /lib64/libnode.so.93
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 6 (Thread 0x7fe96effd640 (LWP 28200) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e8c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007fe97b8f836d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007fe978f8d083 in node::(anonymous namespace)::PlatformWorkerThread(void*) () from /lib64/libnode.so.93
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 5 (Thread 0x7fe96f7fe640 (LWP 28199) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e8c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007fe97b8f836d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007fe978f8d083 in node::(anonymous namespace)::PlatformWorkerThread(void*) () from /lib64/libnode.so.93
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 4 (Thread 0x7fe96ffff640 (LWP 28198) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e8c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007fe97b8f836d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007fe978f8d083 in node::(anonymous namespace)::PlatformWorkerThread(void*) () from /lib64/libnode.so.93
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 3 (Thread 0x7fe974dfe640 (LWP 28197) "node"):
#0  0x00007fe977e899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007fe977e8c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007fe97b8f836d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007fe978f8d083 in node::(anonymous namespace)::PlatformWorkerThread(void*) () from /lib64/libnode.so.93
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 2 (Thread 0x7fe9755ff640 (LWP 28196) "node"):
#0  0x00007fe977f1194e in epoll_wait () from /lib64/libc.so.6
#1  0x00007fe97b8feb0e in uv.io_poll.part () from /lib64/libuv.so.1
#2  0x00007fe97b8e872a in uv_run () from /lib64/libuv.so.1
#3  0x00007fe978f8fd07 in node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Run() () from /lib64/libnode.so.93
#4  0x00007fe977e8cded in start_thread () from /lib64/libc.so.6
#5  0x00007fe977f12370 in clone3 () from /lib64/libc.so.6
Thread 1 (Thread 0x7fe97b74a0c0 (LWP 28195) "node"):
#0  0x00007fe977f1194e in epoll_wait () from /lib64/libc.so.6
#1  0x00007fe97b8feb0e in uv.io_poll.part () from /lib64/libuv.so.1
#2  0x00007fe97b8e872a in uv_run () from /lib64/libuv.so.1
#3  0x00007fe978e2d232 in node::SpinEventLoop(node::Environment*) () from /lib64/libnode.so.93
#4  0x00007fe978f591b1 in node::NodeMainInstance::Run(int*, node::Environment*) () from /lib64/libnode.so.93
#5  0x00007fe978f59720 in node::NodeMainInstance::Run(node::EnvSerializeInfo const*) () from /lib64/libnode.so.93
#6  0x00007fe978ec70ad in node::Start(int, char**) () from /lib64/libnode.so.93
#7  0x00007fe977e29510 in __libc_start_call_main () from /lib64/libc.so.6
#8  0x00007fe977e295c9 in __libc_start_main_impl () from /lib64/libc.so.6
#9  0x0000555b22a050f5 in _start ()

And below is the pstack output for the process node /usr/local/bin/nodemon -r dotenv/config ./bin/www:

Thread 11 (Thread 0x7f573dffb640 (LWP 28194) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f575108c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007f5754a2a36d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007f5754a1db6e in worker () from /lib64/libuv.so.1
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 10 (Thread 0x7f573e7fc640 (LWP 28193) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f575108c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007f5754a2a36d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007f5754a1db6e in worker () from /lib64/libuv.so.1
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 9 (Thread 0x7f573effd640 (LWP 28192) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f575108c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007f5754a2a36d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007f5754a1db6e in worker () from /lib64/libuv.so.1
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 8 (Thread 0x7f573f7fe640 (LWP 28191) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f575108c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007f5754a2a36d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007f5754a1db6e in worker () from /lib64/libuv.so.1
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 7 (Thread 0x7f5754a70640 (LWP 28189) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f5751094c50 in __new_sem_wait_slow64.constprop.0 () from /lib64/libc.so.6
#2  0x00007f5754a2a3f2 in uv_sem_wait () from /lib64/libuv.so.1
#3  0x00007f5752234cd1 in node::inspector::(anonymous namespace)::StartIoThreadMain(void*) () from /lib64/libnode.so.93
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 6 (Thread 0x7f5747fff640 (LWP 28188) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f575108c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007f5754a2a36d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007f575218d083 in node::(anonymous namespace)::PlatformWorkerThread(void*) () from /lib64/libnode.so.93
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 5 (Thread 0x7f574cffc640 (LWP 28187) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f575108c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007f5754a2a36d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007f575218d083 in node::(anonymous namespace)::PlatformWorkerThread(void*) () from /lib64/libnode.so.93
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 4 (Thread 0x7f574d7fd640 (LWP 28186) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f575108c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007f5754a2a36d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007f575218d083 in node::(anonymous namespace)::PlatformWorkerThread(void*) () from /lib64/libnode.so.93
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 3 (Thread 0x7f574dffe640 (LWP 28185) "node"):
#0  0x00007f57510899d9 in __futex_abstimed_wait_common () from /lib64/libc.so.6
#1  0x00007f575108c1d0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6
#2  0x00007f5754a2a36d in uv_cond_wait () from /lib64/libuv.so.1
#3  0x00007f575218d083 in node::(anonymous namespace)::PlatformWorkerThread(void*) () from /lib64/libnode.so.93
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 2 (Thread 0x7f574e7ff640 (LWP 28184) "node"):
#0  0x00007f575111194e in epoll_wait () from /lib64/libc.so.6
#1  0x00007f5754a30b0e in uv.io_poll.part () from /lib64/libuv.so.1
#2  0x00007f5754a1a72a in uv_run () from /lib64/libuv.so.1
#3  0x00007f575218fd07 in node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Run() () from /lib64/libnode.so.93
#4  0x00007f575108cded in start_thread () from /lib64/libc.so.6
#5  0x00007f5751112370 in clone3 () from /lib64/libc.so.6
Thread 1 (Thread 0x7f575493f0c0 (LWP 28183) "node"):
#0  0x00007f575111194e in epoll_wait () from /lib64/libc.so.6
#1  0x00007f5754a30b0e in uv.io_poll.part () from /lib64/libuv.so.1
#2  0x00007f5754a1a72a in uv_run () from /lib64/libuv.so.1
#3  0x00007f575202d232 in node::SpinEventLoop(node::Environment*) () from /lib64/libnode.so.93
#4  0x00007f57521591b1 in node::NodeMainInstance::Run(int*, node::Environment*) () from /lib64/libnode.so.93
#5  0x00007f5752159720 in node::NodeMainInstance::Run(node::EnvSerializeInfo const*) () from /lib64/libnode.so.93
#6  0x00007f57520c70ad in node::Start(int, char**) () from /lib64/libnode.so.93
#7  0x00007f5751029510 in __libc_start_call_main () from /lib64/libc.so.6
#8  0x00007f57510295c9 in __libc_start_main_impl () from /lib64/libc.so.6
#9  0x000055af462660f5 in _start ()

Yes, I understand @anthony-tuininga's point, which is why I mentioned that if my current node-oracledb settings are correct, I can accept that as the answer and assume something is wrong either on my machine or in what I expect from a single Node process.

anthony-tuininga commented 1 year ago

Did you run your test multiple times (without restarting the process) or increase poolMin? I suspect that is the source of most of your performance woes!

agungsv commented 1 year ago

Hi @anthony-tuininga ,

Yes, I ran it multiple times, both without restarting and running once then restarting. All the results above are from the 5th test run after a restart. I also tested with an increased poolMin and with the default pool settings; the default settings gave slightly worse results, and a higher poolMin doesn't seem to do much except make the first loop of requests faster.

I'll try to create a runnable repo later.

agungsv commented 1 year ago

Sorry for the late reply; here's the repo for the Oracle API:

https://github.com/agungsv/oracle-api-test

pvenkatraman commented 1 year ago

@agungsv I tried examples/webapp.js and the ab utility, which issues a specified number of concurrent requests. I also added console.time() & console.timeEnd() around just the connection.execute() call, to measure the duration of the DB activity (SQL execution). In my run each execution takes a different duration, ranging from 0.39 ms to 18.305 ms. This is normal, as each execution can take a different amount of time. Also, the ab output shows:

              min  mean[+/-sd] median   max
Processing:    16   37  121.2      28  1984
Waiting:       12   31  120.7      22  1979
Total:         16   38  121.2      28  1985

This implies the requests are waiting (possibly for a free thread or connection to become available).

Observations (your settings):

UV_THREADPOOL_SIZE = 54
DB_POOL_MAX = 4
DB_POOL_MIN = 0

You can increase DB_POOL_MAX to closer to 50, since that many threads are available, and increase DB_POOL_MIN as well.

In my run I used poolMin = 10, poolMax = 100, and poolIncrement = 10.
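The queuing effect of a small pool can be sketched without the driver at all: a counting semaphore plays the role of poolMax, and each "query" holds a slot for 20 ms (all names here are illustrative):

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Toy pool: at most `poolMax` "queries" run at once; the rest wait.
function makePool(poolMax) {
  let free = poolMax;
  const waiters = [];
  return {
    async acquire() {
      if (free > 0) { free--; return; }
      await new Promise((r) => waiters.push(r)); // queue until a slot is released
    },
    release() {
      const next = waiters.shift();
      if (next) next(); else free++; // hand the slot straight to the next waiter
    },
  };
}

async function runAll(poolMax, requests) {
  const pool = makePool(poolMax);
  const t0 = Date.now();
  await Promise.all(
    Array.from({ length: requests }, async () => {
      await pool.acquire();
      try { await sleep(20); } finally { pool.release(); }
    })
  );
  return Date.now() - t0;
}

(async () => {
  // 12 requests in waves of 4 vs all at once: the small pool takes ~3x longer.
  console.log('poolMax=4 :', await runAll(4, 12), 'ms');
  console.log('poolMax=12:', await runAll(12, 12), 'ms');
})();
```

This matches the ab "Waiting" numbers: with 48 concurrent requests and poolMax = 4, most requests spend their time queued for a connection, and later requests report ever-longer totals.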

agungsv commented 1 year ago

Hi @pvenkatraman ,

Sorry for the late reply, and thank you for the suggestion. I will implement it and re-test later.

As for my original question, I also found that the limitation comes from Node's thread handling, so I can take your test result as the answer. I'll probably have to run Node in cluster mode with PM2 or try another approach.
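For completeness, a hypothetical PM2 invocation for cluster mode (entry point path is illustrative; see the PM2 docs for details):

```shell
# Hypothetical: start the API in PM2 cluster mode with one worker per
# CPU core (entry point path is illustrative):
#
#   pm2 start ./bin/www -i max
#
# Each worker is a separate Node.js process with its own libuv thread
# pool and its own node-oracledb connection pool.
```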

Since the issue turns out not to be in the node-oracledb package, this question can be closed.