sagiegurari / simple-oracledb

Extend capabilities of oracledb with simplified API for quicker development.
Apache License 2.0

delay in fetching data issues #19

Closed: behroozshafiabadi closed this issue 7 years ago

behroozshafiabadi commented 7 years ago

Hi @sagiegurari, I need your help. I work on an API server that is responsible for fetching data from the DB and sending it out. Assume my table has 1,500,000 rows: it takes about 50 seconds for the server to receive all the data from the DB. Is that normal? In apps like dbForge the execution time is about 1 second. Is it dependent on the network infrastructure? Or is there any method in simple-oracledb to make it faster? Thank you.

sagiegurari commented 7 years ago

Are you streaming it (option streamResults=true)?

behroozshafiabadi commented 7 years ago

No, I get the data with the resultSet method.

sagiegurari commented 7 years ago

Can you show me a code snippet? Because if you are not using a stream, it basically means I'm fetching 1.5 million rows into memory, which is really bad. That is what streaming is for.
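
For reference, a minimal sketch of the streaming API, matching the usage shown later in this thread (the SQL here is a hypothetical placeholder):

// With streamResults=true, query() returns a readable stream that emits
// one row object at a time instead of buffering all rows in memory.
var stream = connection.query('SELECT * FROM my_big_table', [], {streamResults: true});

stream.on('data', function (row) {
    // process a single row object
    console.log(row);
});

stream.on('end', function () {
    // all rows fetched; safe to release the connection here
});

stream.on('error', function (err) {
    // handle query/stream errors
});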

behroozshafiabadi commented 7 years ago

If I use streaming, another branch of issues comes up: the Node.js response.write ends suddenly! But if we reject streaming, 50-60 seconds is horrible. In normal mode:

dbi.getConnection(c.dbConfig, function (err, connection) {
    if (err) {
        done(err);
    } else {
        connection.query(input.query, [], {resultSet: true}, function (err, data) {
            if (err) {
                done(err);
            } else {
                job.set('data', JSON.stringify(data), done);
            }
            dbi.releaseConnection(connection, function (err) {
                if (err) {
                    done(err);
                }
            });
        });
    }
});

In streaming mode:

dbi.getConnection(c.dbConfig, function (err, connection) {
    if (err) {
        done(err);
        log.error(config.e[1], err);
    } else {
        var checkForDataSend = true;
        if (input.amount == 'all') {
            response.writeHead(200, {'Content-Type': 'text/plain'});
            //response.writeContinue();
            var stream = connection.query(input.query, [], {streamResults: true});
            stream.on('data', function (row) {
                // note: the return value of response.write is stored but
                // never acted on, so backpressure is effectively ignored
                checkForDataSend = response.write(JSON.stringify(row) + ",");
            });
            stream.on('end', function () {
                //job.set('data', '', done);
                console.log("now fetch data is ended");
                response.end();
                dbi.releaseConnection(connection, function (err) {
                    if (err) {
                        done(err);
                    }
                });
            });
            stream.on('error', function (err) {
                dbi.releaseConnection(connection, function (releaseError) {
                    if (releaseError) {
                        done(releaseError);
                    }
                });
            });
            response.on('close', function () {
                console.log("response close event sent");
            });
        }
    }
});

sagiegurari commented 7 years ago

From a quick glance, your streaming code looks correct from the DB point of view, but not from the response.write point of view. See the Node.js docs for more info: stream.write. Basically you are doing a 'write' before the response can handle more write operations. Specifically, from the Node.js docs:

The return value indicates whether the written chunk was buffered internally and the buffer has exceeded the highWaterMark configured when the stream was created. If false is returned, further attempts to write data to the stream should be paused until the 'drain' event is emitted.
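
For illustration, the standard backpressure pattern looks like the sketch below: stop reading rows when write() returns false, and continue on 'drain' (this assumes the query stream is a regular Node.js readable that supports pause()/resume()):

stream.on('data', function (row) {
    var ok = response.write(JSON.stringify(row) + ',');
    if (!ok) {
        // the response buffer is full; stop emitting rows until it drains
        stream.pause();
        response.once('drain', function () {
            stream.resume();
        });
    }
});

stream.on('end', function () {
    response.end();
});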

By the way, not part of this issue, but if you are using a pool you can do pool.run, and if you are not (as I see above, which is less recommended, but anyway...) you can do oracledb.run(connAttrs, action, callback) so you don't need to release the connection on your own.

https://github.com/sagiegurari/simple-oracledb#OracleDB+run
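
A sketch of that run API based on the signature above; the connection attributes are placeholders, and the exact action/callback contract should be verified against the linked README:

// run() acquires a connection, invokes the action, and releases the
// connection automatically once the action completes.
oracledb.run({
    user: 'myuser',            // placeholder credentials
    password: 'mypassword',
    connectString: 'localhost/orcl'
}, function (connection, callback) {
    // use the managed connection; report results via the callback
    connection.query('SELECT * FROM my_big_table', [], callback);
}, function (err, results) {
    // the connection has already been released at this point
    if (err) {
        console.error(err);
    }
});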

behroozshafiabadi commented 7 years ago

Thanks for the guidance, you are amazing.

behroozshafiabadi commented 7 years ago

@sagiegurari can you help me in this case? Of course it's not about this issue, or even oracledb, but I need your knowledge... Even with the response 'drain' handling, my response closes early, before all the data is fetched! This is so dreadful. Do you have any idea how this is happening?

// checkForDataSend, tmpData, selectCounter and stopFlagForInsert
// are declared earlier in the handler
var stream = connection.query(input.query, [], {streamResults: true});
stream.on('data', function (row) {
    if (checkForDataSend) {
        checkForDataSend = response.write(JSON.stringify(row) + ",", 'utf-8');
    } else {
        // response buffer is full; hold rows until 'drain' fires
        tmpData += JSON.stringify(row) + ",";
    }
});
response.on('drain', function () {
    checkForDataSend = true;
    response.write(tmpData, 'utf-8');
    tmpData = '';
});
stream.on('end', function () {
    //job.set('data', '', done);
    console.log("now fetch data is ended");
    selectCounter--;
    if (selectCounter === 0)
        stopFlagForInsert = false;
    // note: this ends the response even if tmpData has not been
    // flushed by a 'drain' yet (see the reply below)
    response.end();
    dbi.releaseConnection(connection, function (err) {
        if (err) {
            done(err);
        }
    });
});

sagiegurari commented 7 years ago

On 'drain' you are supposed to write the temp data you have (you commented that out). On stream 'end' you just need to close the connection; if you can't yet write to the response at that point, you need to end the response in the 'drain' that comes after the close. That is why pipe is a better solution than all those event listeners. Here is a good example of how to transform data using pipe: https://www.sandersdenardi.com/readable-writable-transform-streams-node/
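
For illustration, a sketch of the pipe approach suggested above, using a Node.js Transform in object mode to turn row objects into JSON text (this assumes the query stream is a standard object-mode readable; the names are placeholders):

var Transform = require('stream').Transform;

// Object-mode transform: reads row objects, emits JSON strings.
var rowToJson = new Transform({
    objectMode: true,
    transform: function (row, encoding, callback) {
        callback(null, JSON.stringify(row) + ',');
    }
});

connection.query(input.query, [], {streamResults: true})
    .pipe(rowToJson)
    .pipe(response); // pipe handles backpressure and ends the response on completion

This also addresses the object-mode question below: an object-mode stream can be piped, as long as the transform in the middle emits strings or buffers that the byte-oriented response can accept.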

behroozshafiabadi commented 7 years ago

But the stream type of oracledb is in object mode, which I think can't be piped, if I'm not wrong!

sagiegurari commented 7 years ago

I'm not sure I've heard of that, but try to Google it a bit.