IBM / node-odbc

ODBC bindings for node
MIT License

[BUG] Long running request memory issue #409

Closed kansasturnpike closed 2 weeks ago

kansasturnpike commented 2 weeks ago

Describe your system

Describe the bug Long-running requests cause the container's memory usage to grow until no more memory can be allocated and Node throws an out-of-memory error.

Expected behavior Memory usage should remain consistent and not grow while waiting for the query to return from the DB.

To Reproduce Steps to reproduce the behavior:

  1. Connect to database with options:
    db2Config: {
        connectionString: `DRIVER=IBM i Access ODBC Driver;SYSTEM=${process.env.DB2_HOST};UID=${process.env.DB2_USER};PWD=${process.env.DB2_PASS};CCSID=1208;naming=1;dbq=,${process.env.DB2_DATA},${process.env.DB2_OBJECTS},${process.env.JAVA};CMT=0;CONNTYPE=1`,
        connectionTimeout: 90,
        initialSize: 6,
        incrementSize: 6,
        maxSize: 48,
        shrink: false,
        reuseConnections: false
    }
  2. Run a query that takes > 140 seconds to return (requires a large number of records); a usage sketch follows the service code below.

Code ODBC service file:

import {
    Injectable,
    InternalServerErrorException,
    OnModuleInit
} from '@nestjs/common'
import { ConfigService } from '@nestjs/config'
import { Pool, PoolParameters, Result, pool } from 'odbc'
import { LoggerService } from '../logger/logger.service'

@Injectable()
export class OdbcConnectService implements OnModuleInit {
    db2: Pool
    constructor(private readonly configService: ConfigService) {}
    async onModuleInit() {  // <--- called on system init and mounted in IoC container
        try {
            this.db2 = await this.connect()
        } catch (error) {
            console.log('ODBC Connection Init', error)
        }
    }

    async connect(): Promise<Pool> {
        const config = this.configService.get<PoolParameters>('db2Config') // <----- reference to connection config above
        return await pool(config)
    }

    async query(
        sql: string,
        params: Array<number | string>
    ): Promise<Result<unknown>> {
        try {
            const connection = await this.db2.connect()
            const result = await connection.query(sql, params)
            await connection.close()
            return result
        } catch (error) {
            LoggerService.error('query error', error)
            throw new InternalServerErrorException()
        }
    }
}
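
For illustration, a call like the one below is the kind of usage that triggers the memory growth. The instance name, table, and predicate are placeholders, not from the original report; the point is that the full result set is built in memory before query() resolves.

// Hypothetical repro call against the service above; table and filter are placeholders.
const rows = await odbcConnectService.query(
    'SELECT * FROM SOME_LARGE_TABLE WHERE SOME_TIMESTAMP >= ?',
    ['2020-01-01-00.00.00.000000']
)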

Memory Error from console:

<--- Last few GCs --->
[1:0x555558935990]   215466 ms: Mark-Compact (reduce) 2046.7 (2084.1) -> 2046.2 (2084.3) MB, 1531.08 / 0.00 ms  (average mu = 0.721, current mu = 0.005) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
 1: 0x555555e412f3  [node]
 2: 0x55555621b1b4 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
 3: 0x55555621b473 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
 4: 0x555556411d7b  [node]
 5: 0x55555642b813 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 6: 0x555556405e33 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 7: 0x5555564074c9 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 8: 0x5555563e93ae v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [node]
 9: 0x5555563ddd90 v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawArray(int, v8::internal::AllocationType) [node]
10: 0x5555563ddf08 v8::internal::FactoryBase<v8::internal::Factory>::NewFixedArrayWithFiller(v8::internal::Handle<v8::internal::Map>, int, v8::internal::Handle<v8::internal::Oddball>, v8::internal::AllocationType) [node]
11: 0x55555669555a v8::internal::Handle<v8::internal::NameDictionary> v8::internal::HashTable<v8::internal::NameDictionary, v8::internal::NameDictionaryShape>::New<v8::internal::Isolate>(v8::internal::Isolate*, int, v8::internal::AllocationType, v8::internal::MinimumCapacity) [node]
12: 0x55555669c9f7 v8::internal::Handle<v8::internal::NameDictionary> v8::internal::HashTable<v8::internal::NameDictionary, v8::internal::NameDictionaryShape>::EnsureCapacity<v8::internal::Isolate>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::NameDictionary>, int, v8::internal::AllocationType) [node]
13: 0x55555669e83e v8::internal::Handle<v8::internal::NameDictionary> v8::internal::Dictionary<v8::internal::NameDictionary, v8::internal::NameDictionaryShape>::Add<v8::internal::Isolate>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::NameDictionary>, v8::internal::Handle<v8::internal::Name>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyDetails, v8::internal::InternalIndex*) [node]
14: 0x5555566b75be v8::internal::BaseNameDictionary<v8::internal::NameDictionary, v8::internal::NameDictionaryShape>::Add(v8::internal::Isolate*, v8::internal::Handle<v8::internal::NameDictionary>, v8::internal::Handle<v8::internal::Name>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyDetails, v8::internal::InternalIndex*) [node]
15: 0x55555667bbd1 v8::internal::LookupIterator::ApplyTransitionToDataProperty(v8::internal::Handle<v8::internal::JSReceiver>) [node]
16: 0x5555566a8045 v8::internal::Object::TransitionAndWriteDataProperty(v8::internal::LookupIterator*, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::Maybe<v8::internal::ShouldThrow>, v8::internal::StoreOrigin) [node]
17: 0x5555567dc008 v8::internal::Runtime::SetObjectProperty(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, v8::internal::StoreOrigin, v8::Maybe<v8::internal::ShouldThrow>) [node]
18: 0x555556238547 v8::Object::Set(v8::Local<v8::Context>, v8::Local<v8::Value>, v8::Local<v8::Value>) [node]
19: 0x555555f72350 napi_set_property [node]
20: 0x7ffff59da9c8 process_data_for_napi(Napi::Env, StatementData*, Napi::Array) [/app/node_modules/odbc/lib/bindings/napi-v6/odbc.node]
21: 0x7ffff59dfb91 QueryAsyncWorker::OnOK() [/app/node_modules/odbc/lib/bindings/napi-v6/odbc.node]
22: 0x7ffff59d0117 Napi::AsyncWorker::OnAsyncWorkComplete(napi_env__*, napi_status, void*) [/app/node_modules/odbc/lib/bindings/napi-v6/odbc.node]
23: 0x555555f98c77  [node]
24: 0x555555f9eefa node::ThreadPoolWork::ScheduleWork()::{lambda(uv_work_s*, int)#2}::_FUN(uv_work_s*, int) [node]
25: 0x555556b2eab6  [node]
26: 0x555556b32c13  [node]
27: 0x555556b47aab  [node]
28: 0x555556b33993 uv_run [node]
29: 0x555555ee3c1a node::SpinEventLoopInternal(node::Environment*) [node]
30: 0x55555603b1ae node::NodeMainInstance::Run(node::ExitCode*, node::Environment*) [node]
31: 0x55555603b794 node::NodeMainInstance::Run() [node]
32: 0x555555f93c70 node::Start(int, char**) [node]
33: 0x7ffffec0a5d0  [/lib64/libc.so.6]
34: 0x7ffffec0a680 __libc_start_main [/lib64/libc.so.6]
35: 0x555555ee09c5 _start [node]

Additional context I have tried adding the timeout option to .query, but that doesn't seem to work as expected either. When adding { timeout: 90 } for a 90-second timeout, it kills the request almost immediately with an error message of:

"state": "HY000",
"code": -666,
"message": "[IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0666 - SQL query exceeds specified limit or threshold."

unixODBC info:

unixODBC 2.3.9
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /opt/app-root/src/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8

Docker Image Config:

FROM registry.access.redhat.com/ubi9/nodejs-20:9.5-1730498564 as default
ENV TZ="America/Chicago"

USER 0

RUN curl https://public.dhe.ibm.com/software/ibmi/products/odbc/rpms/ibmi-acs.repo | tee /etc/yum.repos.d/ibmi-acs.repo

RUN dnf install --refresh ibm-iaccess -y
RUN dnf install unixODBC unixODBC-devel -y

RUN npm i -g yarn

RUN mkdir /app
RUN chown 1001:0 /app

USER 1001

# Project files
WORKDIR /app
COPY . .
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
RUN yarn build

CMD [ "node", "dist/main" ]
kansasturnpike commented 2 weeks ago

Here are the ODBC Trace logs if you want to review them 2024-11-13_odbc.log.gz

kansasturnpike commented 2 weeks ago

I believe I understand what is going on now. The heap memory issue was caused by the response payload being so large that it filled up the available heap. The immediate failure when the timeout was set happens because DB2 estimates the response time of a query up front, and if that estimate is likely to exceed the configured limit it throws the SQL0666 error immediately rather than running the query.
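
A possible mitigation for the memory growth is to fetch the result set in batches rather than letting query() build the entire payload at once. This is a minimal sketch, assuming the installed node-odbc version supports the cursor and fetchSize query options; processBatch is a hypothetical handler, not part of the report.

// Minimal sketch, assuming cursor/fetchSize support in the installed
// node-odbc version; processBatch is a placeholder for per-batch handling.
const connection = await this.db2.connect()
try {
    const cursor = await connection.query(sql, params, { cursor: true, fetchSize: 1000 })
    while (!cursor.noData) {
        const batch = await cursor.fetch() // at most fetchSize rows per call
        processBatch(batch)                // handle each batch, then let it be GC'd
    }
    await cursor.close()
} finally {
    await connection.close()
}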