kadler / db2sock-test


DB2Sock Toolkit #10

Open kadler opened 6 years ago

kadler commented 6 years ago

Original report by Steven Scott (Bitbucket: fresche_sscott, GitHub: Unknown).


Hey guys,

I've been toying around with the start of a replacement toolkit that utilizes db2sock's REST interface. What I have so far is basic CMD, QSH and PGM calling.

I've written it in TypeScript, as a fully-JS module, no native code compilation required. I'm currently aiming for a more "functional" approach, compared to the existing toolkit's "array"-based approach to passing parameters around.

At the moment, I'm working on getting authorization to post it online under the MIT license, and I'm hoping to have it up within the next few weeks.

I wanted to open this task so that there can be public discussion and awareness that this is being worked on.

kadler commented 5 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@aaronbartell It seems using a data structure as a host variable is not possible (according to this page). Would this even be the right approach for what we need?

We'd like to avoid mapping the program params (as a CLOB xml or CHAR json) since it could get complicated for the legacy stuff we need to support (and db2sock has had all of those issues with it too). I'm unsure of the best approach for passing array DS as IN/OUT/INOUT like @danny-hcs mentioned earlier. Have you given it more thought?

kadler commented 5 years ago

Original comment by Danny Roessner (Bitbucket: droessner, GitHub: danny-hcs).


@aaronbartell Thanks, I'll look into it a little further, but that could potentially work.

kadler commented 5 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


Take a look at this article.

kadler commented 5 years ago

Original comment by Danny Roessner (Bitbucket: droessner, GitHub: danny-hcs).


@aaronbartell I had not considered that. This is mostly calling legacy RPG that contains multiple inputs including array data structures as well as multiple outputs also with array data structures. Is a stored procedure an option for that?

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


@danny-hcs Have you considered putting a stored procedure directly over the RPG vs. going through XMLSERVICE? Both calls end up going through the database, but a direct call to the stored procedure will have significantly less overhead (no XML, no dynamic call composition, etc).
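
For readers who have not tried this approach, a minimal sketch of the idea (not anyone's production code): wrap the RPG in an SQL external procedure once, then CALL it like any other procedure. The program name, parameters, DSN, and the use of the generic odbc npm here are all assumptions.

#!typescript

// One-time SQL to wrap the RPG program (hypothetical names):
//
//   CREATE OR REPLACE PROCEDURE MYLIB.GETRATE
//     (IN CUSTNO CHAR(7), OUT RATE DECIMAL(7,2))
//     LANGUAGE RPGLE
//     EXTERNAL NAME 'MYLIB/GETRATE'
//     PARAMETER STYLE GENERAL;
//
// Then the call is a plain procedure call: no XML payload, no dynamic call
// composition. The odbc npm's callProcedure is used as a stand-in driver API.
const odbc = require('odbc');

async function getRate(custno: string): Promise<void> {
  const conn = await odbc.connect('DSN=*LOCAL');
  try {
    // output parms ride back on the result object (per the odbc npm docs)
    const result = await conn.callProcedure(null, 'MYLIB', 'GETRATE', [custno, 0]);
    console.log(result.parameters); // e.g. [ '1234567', 12.5 ]
  } finally {
    await conn.close();
  }
}

getRate('1234567').catch(console.error);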

kadler commented 6 years ago

Original comment by Danny Roessner (Bitbucket: droessner, GitHub: danny-hcs).


@ThePrez The remote implementations we have tried so far have not been great in terms of performance (especially for RPG programs that have a large footprint), so we are looking for an option that gets us better performance. (We are using IWS in production now and using that as a comparison.) I'm not too familiar with the ODBC transport. Is it something that is production ready on the IBM i, or at least close to it?

kadler commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


@danny-hcs, an ODBC transport would get you relatively close to what you seek. An in-process call (I assume that's what you mean by "native" in this context) has numerous risks spanning security and stability. I would recommend an ODBC transport instead, to give tasks isolation. I would expect performance to remain reasonable. If you would like to perform some performance testing with ODBC, I'd like to work with you.

kadler commented 6 years ago

Original comment by Danny Roessner (Bitbucket: droessner, GitHub: danny-hcs).


@ThePrez Thank you for clarifying. @aaronbartell Our main interest in db2sock is being able to make RPG calls using JSON (via node.js). It would be nice to have a native implementation for this (rather than a remote procedure call).

kadler commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


@aaronbartell, the ODBC support does not negate the need for db2sock. ODBC gives database access, but not program calls, shell calls, etc. It would simply provide another transport. For instance, as this half-baked PR attempts: https://bitbucket.org/litmis/db2sock/issues?status=new&status=open

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


At PowerUp18 I had the honor of doing one of @ThePrez 's presentations, and in it was foreshadowing of things to come: namely, an ODBC driver that runs on IBM i that all PASE langs would be able to make use of. ODBC has been around a long time and is very stable/performant. This would essentially allow us to use the LUW odbc npm on IBM i and could potentially negate the need for db2sock (n1). And then a Node.js iToolkit interface would eventually be written over the ODBC driver APIs (it could really just be a separate data provider like the one @fresche_sscott authored in his project).

n1 - I hesitate to say this because there are things Tony accomplished with db2sock that most likely still hold relevance to certain projects.
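
To make the odbc idea concrete, a minimal sketch assuming the generic odbc npm running locally on IBM i (the DSN is an assumption; QIWS/QCUSTCDT is the sample file used elsewhere in this thread):

#!typescript

// The same LUW odbc npm code that runs on Linux/Unix/Windows, pointed at a
// local IBM i DSN. Rows come back as plain JS objects.
const odbc = require('odbc');

async function main(): Promise<void> {
  const conn = await odbc.connect('DSN=*LOCAL');
  const rows = await conn.query(
    'SELECT LSTNAM, CITY FROM QIWS.QCUSTCDT WHERE LSTNAM = ?',
    ['Jones']
  );
  console.log(rows);
  await conn.close();
}

main().catch(console.error);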

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@aaronbartell Curious as to what it would take for just @danny-hcs and I to request a higher priority :-)

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


@danny-hcs

Any updates on this? We're currently using nodejs-idb-pconnector with db2sock remote to make RPG calls. Would be nice to have the option for native for better performance.

I haven't gotten to it yet. Hoping a customer requests it so I can move it up in the priority list.

kadler commented 6 years ago

Original comment by Danny Roessner (Bitbucket: droessner, GitHub: danny-hcs).


@aaronbartell Any updates on this? We're currently using nodejs-idb-pconnector with db2sock remote to make RPG calls. Would be nice to have the option for native for better performance.

kadler commented 6 years ago

Original comment by Danny Roessner (Bitbucket: droessner, GitHub: danny-hcs).


I will put something together (eventually) to test the idea.

Yes please. We are very excited about these tools and are looking to get something into production in the very near future. Development tests are looking better and better, just need to iron out these connector decisions.

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


In fact, you froze me to inaction.

Oops. Didn't mean to do that. Carry on. I will put something together (eventually) to test the idea.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Thoughts?

None helpful from me. In fact, you froze me to inaction. "Nobody move! I dropped me brain!" – Jack Sparrow (Pirates of the Caribbean)

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


I am not in charge of litmis/nodejs-idb-connector decisions. Maybe ask Jesse G., IBM Rochester Open Source architect, about 'plans'.

The efforts of db2sock are being watched closely (with excitement) by a variety of the repo core-committers. It needs to be out of beta mode before we can start replacing things (I have a number of customers with Node.js in production).

Might need to consider using OO interface concepts to allow for implementation selection, like we did for ruby-itoolkit. We have three Node.js DB2 implementations at play: the current nodejs-idb-connector, the nodejs-idb-pconnector (n1), and eventually nodejs-idb-db2sock.

n1 - same as nodejs-idb-connector except it uses Promises and async/await.

I've been waiting for the API interfaces to settle down before I dove into this db2sock project to see if doing OO interfaces is even a reality, since it depends on each implementation exposing essentially the same interface.

Thoughts?

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


I was able to test the toolkitasync.js and it looks pretty good.

First, some clarity for other readers. You are using my personally compiled 'experimental' async node db2ia test driver from the Yips Super Driver page (below). Yes, at present, this new Yips experimental node db2 driver only works with the new db2sock (new libdb400.a).

Are there more plans with this?

I am not in charge of litmis/nodejs-idb-connector decisions. Maybe ask Jesse G., IBM Rochester Open Source architect, about 'plans'.

Would updates just consist of replacing the db2ia.node?

Maybe not 'replace' per se. That is, it would be easy enough to modify my personal db2ia source copy to run on either the old libdb400.a (PASE shipped) and/or the new db2sock libdb400.a (Yips download). Simply use the PASE dynamic load functions to check whether 'true' async toolkitasync.js support is available (aka, the SQL400JsonAsync function is present).
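
The real check would live inside db2ia.node via those PASE dynamic load functions, but the fallback idea can be sketched at the JS layer (dbconn.toolkit is the experimental API shown in this thread; the fallback branch is hypothetical):

#!typescript

// Sketch: detect whether the loaded db2a exposes the 'true' async toolkit
// call (backed by SQL400JsonAsync in the new db2sock libdb400.a).
const db = require('/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a');
const dbconn = new db.dbconn();
dbconn.conn('*LOCAL');

const json = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},' +
  '{"s":{"name":"char", "type":"128a", "value":"Hi there"}}]}';

if (typeof dbconn.toolkit === 'function') {
  // new db2sock libdb400.a: real async path
  dbconn.toolkit(json, (result: string) => console.log(result));
} else {
  // old PASE-shipped libdb400.a: no SQL400JsonAsync underneath
  console.log('async toolkit call not available; use the classic db2a APIs');
}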

"If you were waiting for the opportune moment, that was it" (Jack Sparrow, Pirates of Caribbean).

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@rangercairns I was able to test the toolkitasync.js and it looks pretty good. Are there more plans with this? Would updates just consist of replacing the db2ia.node?

The /QOpenSys/usr/lib/libdb400.a in this project is modified to work with this, so litmis/nodejs-idb-connector will no longer work (it also uses libdb400.a). I have to keep switching between the two libdb400.a files for my dev projects (still testing both, in development of course). I'd like to keep using this one instead.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


... /opt/freeware/lib/libstdc++.a is not an archive or the file could not be read properly.

Repeat, before losing the forest for the trees (below): this indicates your libstdc++.a is corrupted. Aka, most often it was not FTP'd to IBM i in binary.

.. licensed program or PTFs ... yum install libstdcplusplus ...

To be clear, only 'test drivers' are placed on Yips. They are not PTFs or officially released OPS software yum-installed via RPMs (beta).

In the case of node db2a, per node development convention for 'extensions', the wizard developers used C++ (g++). Nothing wrong per se with C++. However, C++ creates a strong (unbreakably strong) relationship between the matching compiler (g++) and runtime (libstdc++.a). Basically, you get OO with C++ (name-mangled method names, etc.), but you are locked into the matching runtime, maybe even version dependent (an offer you can't refuse).

You may need to install per Tony's instructions from YiPs, depending on how the binary was built.

I am using g++/gcc 4.8.3 and matching-version binaries like libstdc++.a. You can check your versions of gcc, etc., like below.

#!bash

bash-4.3$ ls -l /opt/freeware/lib/*std*
lrwxrwxrwx    1 db2sock  0                44 May 23 2017  /opt/freeware/lib/libstdc++.a -> gcc/powerpc-ibm-aix6.1.0.0/4.8.3/libstdc++.a

bash-4.3$ g++ --version
g++ (GCC) 4.8.3
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

bash-4.3$ gcc --version
gcc (GCC) 4.8.3
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Someday when yum is top shelf PASE on IBM i, all this will be in the past. Go Kevin go (IBM)!!!

Thanks for testing in today's world of PASE.

kadler commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


@brianmjerome, 'yum install libstdcplusplus' will be the "correct way" (though currently in beta, of course). You may need to install per Tony's instructions from YiPs, depending on how the binary was built.

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@rangercairns Do you have a reference for the correct way to load libstdc++ onto the IBM i for OS 7.2 or 7.3? It seems it was not included with the licensed program or PTFs.

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@rangercairns Thanks Tony. Something must be corrupt on that partition running AIX 6.1. We installed the tools on a partition running AIX 7.1 instead and it's working now.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


File /opt/freeware/lib/libstdc++.a is not an archive or the file could not be read properly

This nearly always means you did not FTP files in binary. Aka, libstdc++.a was not FTP'd in binary to your IBM i.

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@rangercairns We have reinstalled all the open source tools. The gcc binaries are now in /opt/freeware/lib but it's saying there is a format error.

#!bash

-bash-4.3$ node toolkitasync.js
module.js:597
  return process.dlopen(module, path._makeLong(filename));
                 ^

Error: Could not load module /QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/bin/db2ia.node.
        Dependent module /opt/freeware/lib/libstdc++.a(libstdc++.so.6) could not be loaded.
        File /opt/freeware/lib/libstdc++.a is not an
          archive or the file could not be read properly.
System error: Exec format error
Could not load module /QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/bin/db2ia.node.
        Dependent module /QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/bin/db2ia.node could not be loaded.
    at Error (native)
    at Object.Module._extensions..node (module.js:597:18)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.require (module.js:497:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a.js:5:18)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
-bash-4.3$ 
kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Trial db2sock, db2a and libstdc++ ... toolkit

I compile my test db2a with gcc, so you need the gcc runtime. I posted a zip to Yips for download.

Yips Super Driver - test w/db2sock - db2ia.node, (libgcc_s.a libstdc++.a).

#!bash

db2sock_toolkit1.zip - yips db2 module for db2sock (w/ toolkit):
> cd /QOpenSys/QIBM/ProdData/OPS/Node6/./os400/db2i/bin/
> cp db2ia.node db2ia.node-save
> cp /path/new/db2ia.node db2ia.node

yips libstdc++.zip - additional gcc binaries
/opt/freeware/lib/libgcc_s.a
/opt/freeware/lib/libstdc++.a
kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@ThePrez Thanks for looking into this and getting back so quickly! We're trying on another OS version at the moment so I will try this out later. :-)

kadler commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


Was able to recreate and resolve via 'yum install libstdcplusplus6', so it should also work for you.

kadler commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


@brianmjerome, I'm not sure, but can you try doing a 'yum install libstdcplusplus6'? That object naming notation implies it was built with our new (beta) set of shared libraries. Again, not sure, but it's worth a try.

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@rangercairns It looks like our OS 7.1 (AIX6.1) is missing some modules (like libstdc++.a & libc.a) when I try using that new db2ia.node. Also using Node v6.12.2. What versions are you using?

(screenshot attachment: 2-1-2018 7-34-33 AM.png)

kadler commented 6 years ago

Original comment by Danny Roessner (Bitbucket: droessner, GitHub: danny-hcs).


@rangercairns This looks great! Very clean and simple. Just wanted to comment and say that I appreciate all the work you've been doing with this. Will hopefully get a chance to thoroughly test this over the next couple weeks, but this already looks very promising.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Ok, I built a new node6 db2ia.node binary for db2sock SQL400Json(Async). See the new db2a functions toolkitSync(json) and toolkit(json, callback) below. You can find the full C source code and binary on Yips below (if you want to try it).

Install.

#!bash

** Install
1) Must use with db2sock (follow install links)
... then ...
2) unzip everything db2sock_toolkit(n).zip
3) save the original node6 db2ia.node
> cp /QOpenSys/QIBM/ProdData/OPS/Node6/./os400/db2i/bin/db2ia.node /QOpenSys/QIBM/ProdData/OPS/Node6/./os400/db2i/bin/db2ia.node-save
4) copy the new db2ia.node
> cp db2ia.node /QOpenSys/QIBM/ProdData/OPS/Node6/./os400/db2i/bin/db2ia.node

Example run.

#!bash

bash-4.3$ node --version
v6.9.1

== sync json toolkit call ===

bash-4.3$ cat toolkitsync.js 
var db = require('/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a')
var dbconn = new db.dbconn();  // Create a connection object.
dbconn.conn("*LOCAL");  // Connect to a database.
json = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},{"s":{"name":"char", "type":"128a", "value":"Hi there"}}]}';
console.log(dbconn.toolkitSync(json));
bash-4.3$ node toolkitsync.js 
{"script":[{"pgm":["HELLOSRV","DB2JSON","HELLO",{"char":"Hello World"}]}]}

== async json toolkit call ==

bash-4.3$ cat toolkitasync.js 
var db = require('/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a')
var dbconn = new db.dbconn();  // Create a connection object.
dbconn.conn("*LOCAL");  // Connect to a database.
json = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},{"s":{"name":"char", "type":"128a", "value":"Hi there"}}]}';
dbconn.toolkit(json,
  function (result) {
    console.log(result);
  }
);
setTimeout(function() {
}, 800);

bash-4.3$ node toolkitasync.js 
{"script":[{"pgm":["HELLOSRV","DB2JSON","HELLO",{"char":"Hello World"}]}]}
bash-4.3$

BTW -- I just used basic technology already in node db2a. Someday we will need to replace it with something better and faster (I think). Maybe that will happen with the Jesse G initiative (which also gives our co-op something to copy for the toolkit call to SQL400Json(Async) ... just to help out).

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Remote SQL400JsonAsync ... not following the pack

All is not lost to those that see the glass half empty ...

So, we all know that Linux, Unix, Windows (LUW) will never implement a non-standard API like SQL400JsonAsync. Of course! You can never lead, only follow.

Option 1 (LUW npm, whatever):

However, we can build a simple binary wrapper and call all the same ODBC/CLI functions that make up SQL400JsonAsync. In fact, the source code is right here in the db2sock project. Aka, we just have to add new makefile(s) for LUW and take the db2sock chunks we need to get the whole thing running on Linux, Unix, Windows.

For you technical geeks, we simply need to 'async'/thread-wrapper a normal ODBC stored procedure call to the existing db2json.db2procj (parm i/o) or db2json.db2procjr (result set), and ... bingo ... instant SQL400JsonAsync running on Linux, Unix, Windows.

Option 2 (pure node , whatever):

I suspect there are 'async' work APIs built right into node (and other languages). So, you could create your own SQL400JsonAsync. That is, simply use the script language's 'async work' framework, then call the normal driver DB2 ODBC APIs (client access ODBC, DB2 Connect, etc.) the same way SQL400JsonAsync does (see db2sock code).

Note: The technical mechanics are exactly the same as the C code version ... we simply need to 'async'/thread-wrapper/framework a normal ODBC stored procedure call to the existing db2json.db2procj (parm i/o) or db2json.db2procjr (result set), and ... bingo ... instant SQL400JsonAsync running on Linux, Unix, Windows.

Aka, for those not clear on the simple task: you can already make a pure node (or other language) SQL400JsonAsync by using any 'async work' framework, as described above, calling ODBC. Right now (today).
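
A minimal sketch of Option 2 in TypeScript, assuming db2json.db2procj takes a single in/out CLOB of json (per the 'parm i/o' description above); the odbc npm, DSN, and result shape are stand-ins, not a tested recipe:

#!typescript

// A "pure node SQL400JsonAsync": the driver's own async machinery wraps a
// normal ODBC stored procedure call to db2json.db2procj.
const odbc = require('odbc');

async function sql400JsonAsync(jsonIn: string): Promise<string> {
  const conn = await odbc.connect('DSN=IBMI');
  try {
    // resolves off the main thread; json out rides back on the INOUT parm
    const result = await conn.callProcedure(null, 'DB2JSON', 'DB2PROCJ', [jsonIn]);
    return result.parameters[0] as string;
  } finally {
    await conn.close();
  }
}

sql400JsonAsync(
  '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},' +
  '{"s":{"name":"char", "type":"128a", "value":"Hi there"}}]}'
).then(console.log);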

We are programmers. Our world ends only at imagination.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


node experiment db2 driver radically different ....

First, this is all open source, so there is no reason to hide these lab experiments ...

So, folks, inside the lab we plan to experiment with completely replacing the db2a driver with the toolkit call built into db2. We even have a guy signed up to give it a try (our new co-op). That is, in theory, for a high percentage of node script use cases, any db2 operation in node is simply converted to json (JSON.stringify). Why do we have a db2a middle man???

#!bash

stmt.execSync(sql, function(result) {
       console.log("Result: %s", JSON.stringify(result));
});

The experiment proposes that nearly 90% of node db2 operations follow the same pattern of connect, prepare, execute (w/parms), fetch, followed by JSON.stringify on each row. Therefore, in theory, we could simply use the toolkit's built-in json-based in/out interface with one 'async' call to handle the vast majority of use cases (SQL400JsonAsync in db2sock).

#!bash

$ cat j0601_query_qcustcdt.json
{"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
        {"parm":[{"value":"Jones"},{"value":"Vine"}]},
        {"fetch":[{"rec":"all"}]}
       ]}
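
Fed through the experimental toolkit call shown earlier in this thread, that whole connect/prepare/execute/fetch sequence becomes a single async round trip. A sketch (dbconn.toolkit is the experimental Yips db2ia API; the rest is assumption):

#!typescript

// The same query json as above, built with JSON.stringify instead of by hand.
const db = require('/QOpenSys/QIBM/ProdData/OPS/Node6/os400/db2i/lib/db2a');
const dbconn = new db.dbconn();
dbconn.conn('*LOCAL');

const json = JSON.stringify({
  query: [
    { stmt: 'select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?' },
    { parm: [{ value: 'Jones' }, { value: 'Vine' }] },
    { fetch: [{ rec: 'all' }] }
  ]
});

// one async call: connect/prepare/execute/fetch all happen inside db2sock
dbconn.toolkit(json, (result: string) => {
  console.log(result); // rows come back as json; no per-row JSON.stringify
});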

Of course, we have to test that performance is up to par. We think it may be ok because all the same ODBC/CLI operations you would do by hand in your script are handled in one 'async' call (no mess, no fuss).

fetch pagination (fetch all, fetch 1, fetch 2, fetch 7) ...

Of course, it would be a disaster if the json result set was thousands of records. So, yes, we will handle fetch block pagination. That is, you can fetch a little, a lot, or all. The 'statement' handle is returned on any json prepare/execute. The php sample below (a running sample) shows how the handle can be used to paginate fetches (fetch multiple json blocks). The result set statement will close on the last record fetched. Or ... you may close the result set early by handle.

ALL connection pooling and locking to avoid 'async' overrun of QSQSRVR jobs is built into the db2sock driver. In the sample below the default user profile is used with "qual":"fred", therefore the connection pool element/handle associated with "fred" will handle all the json requests. Important: it will also serialize the requests (mutex lock) to avoid overrunning the QSQSRVR job handling your result set.

BTW -- People don't understand that db2 operations are not all thread safe, so many times the current db2a 'async' interfaces appear to be broken via 'missing records', 'bad data', etc. (... nope, it's your node script and its lack of connection/statement pooling). Anyway, the db2sock 'async' interface takes care of the whole mess for you. You just write toolkit(ish) code.

#!bash

function myjson1() {
$clob =
'{"connect":[
  {"qual":"fred"},
  {"query":[{"stmt":"select * from QIWS/QCUSTCDT"}, 
            {"fetch":[{"rec":2}]},
            {"fetch":[{"rec":2}]}
           ]}
]}
';
return $clob;
}
function myjson2($handle) {
$clob =
'{"connect":[
  {"qual":"fred"},
  {"query":[{"handle":'.$handle.'},
            {"fetch":[{"rec":"all"}]}
           ]}
]}
';
return $clob;
}
function myjson3($handle) {
$clob =
'{"connect":[
  {"qual":"fred"},
  {"close":[{"handle":'.$handle.'}]}
]}
';
return $clob;
}

BUG -- I need to add a RECORD NOT FOUND marker to the back of every last record, so people do not have to worry about closing on the simple "fetch":"all" case and other end-of-records use cases.

Just want you to know what we are thinking here in the lab.

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


@brianmjerome Send an email to aaronbartell@gmail.com and I will point you to the beta.

kadler commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


Absolutely. He has earned it! :-)

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


@ThePrez, can @brianmjerome be given access to Node.js v8?

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@fresche_sscott Thanks for the explanation.

If we have Node v6.12.2 running on our IBM i, then the N-API interface wouldn't be supported, correct? According to this community request, Node 8.x isn't released yet but is planned for a future release.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


BTW -- Jesse G., Rochester IBM i Open Source Architect, opened an issue on litmis/nodejs-idb-connector for conformance to the N-API interface (node db2a driver). This 're-write' may be a good place to start adding other db2sock 'real async' fast calls like the toolkit's SQL400JsonAsync. We may not have to wait too long before we can try out the whole idea, including the new toolkit, to tune up the performance.

kadler commented 6 years ago

Original comment by Steven Scott (Bitbucket: fresche_sscott, GitHub: Unknown).


That is, if your toolkit becomes db2 dependent on 'sync' db2procj/db2procjr behaviour ... well ... you may get to do it all again (re-write).

That's another reason I'm starting with the REST interface: I can use it in an asynchronous manner from the start and (hopefully) won't accidentally implement things that end up breaking when used with the asynchronous DB2 node module. All of my work so far has actually been developed and tested on a Windows desktop, with only DB2Sock and the REST interface being set up on the IBM i.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


... should be pretty easy (so long as the expected JSON in and returned JSON out match what the REST interface provides)

I agree. Ultimately, there is no json difference between SQL400JsonAsync vs. REST vs. remote db2procj/db2procjr or any other json 'transport' (assuming ILE ccsid does not bite us too hard).

In fact, feel free to use the stored procs db2procj (i/o parm) and db2procjr (result set). These are designed for remote access, but they are not 'async'. However, you could build your whole node toolkit db2 transport from a laptop. Later, when db2a is ready (re-write), add a local IBM i call to SQL400JsonAsync. However, a warning, be careful: the evolution of db2a to 'async' has been bumpy. That is, if your toolkit becomes db2 dependent on 'sync' db2procj/db2procjr behaviour ... well ... you may get to do it all again (re-write).

kadler commented 6 years ago

Original comment by Steven Scott (Bitbucket: fresche_sscott, GitHub: Unknown).


@brianmjerome The db2json.db2 "file" is actually just a FastCGI interface that db2sock implements, which allows RESTful access to the async DB2 driver. It's implemented and documented in the "fastcgi" folder: https://bitbucket.org/litmis/db2sock/src/master/toolkit/fastcgi/?at=master

I initially implemented the REST interface because it made development easier (and I don't believe we currently have an async DB2 module that uses db2sock). I implemented the "provider" as an interface, though, so adding a secondary interface that uses a direct Node DB2 module (when a proper async one that uses db2sock is implemented) should be pretty easy (so long as the expected JSON in and returned JSON out match what the REST interface provides).
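
For readers following along, a hypothetical sketch of what such a provider abstraction can look like; the names here are invented, not taken from the actual repo:

#!typescript

// The toolkit only ever sees "json in, json out", so a native db2sock-backed
// provider is one new class behind the same interface.
interface JsonProvider {
  execute(jsonIn: string): Promise<string>;
}

class RestProvider implements JsonProvider {
  constructor(private readonly url: string) {}

  async execute(jsonIn: string): Promise<string> {
    // POST the toolkit json to db2json.db2 and return the json reply
    // (global fetch assumed: Node 18+ or a polyfill)
    const res = await fetch(this.url, { method: 'POST', body: jsonIn });
    return res.text();
  }
}

// Later: class Db2sockProvider implements JsonProvider { ... } wrapping a
// proper async DB2 module, with no change to the toolkit code above it.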

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


nodejs-idb-connector / src / db2ia /

Yes.

today db2a (hack async) ...

Today, async is done using uv_queue_work for 'fine grain' ODBC/CLI APIs like PrepareAsync, ExecuteAsync, FetchAllAsync. You use a worker thread, where uv_queue_work takes a while to 'acquire' (relatively long admin set-up/tear-down), and you end up back in the node interpreter many times to do a 'query'. Essentially, each uv_queue_work request is "emulating" CLI async APIs, because the old PASE libdb400.a has no async APIs (unlike db2sock).

db2sock better db2a (maybe rewrite) ...

A better way: db2sock supports real async CLI/ODBC APIs. In fact, we are building better 'wide grain' APIs that will connect, prepare, and fetch all 'async' in one single API call (SQL400DoALLQueryFetchAsync -- not written yet). Thereby, the db2a code can simply throw out all the 'fine grain' PrepareAsync, ExecuteAsync, FetchAllAsync code (monkey business) and run one call, SQL400DoALLQueryFetchAsync.

For example (new toolkit CLI API) ...

In the case of db2sock, for example the toolkit, the new CLI API SQL400JsonAsync runs the entire toolkit call with json in/out. There is no need to mess with a bunch of uv_queue_work requests. Aka, rewrite db2a to expose a simple API, ToolkitJsonAsync(jsonin, callback).

Does db2sock 'async' work?? ... yes ... but not all tested (or written) ...

I implemented a db2sock 'async' example in the php ibm_db2 driver. Not callback (node will use callback), but join/reap 'async', as PHP is not really threaded (db2_toolkit_json_async / db2_toolkit_json_reap). It all works with one call to SQL400JsonAsync and SQL400JsonJoin.

#!bash

-- sync (normal) --
<?php
require_once('connection.inc');
$jsonin = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},
        {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
       ]}';
$conn1 = db2_connect($database, $user, $password);
$jsonout = db2_toolkit_json( $conn1, $jsonin);
var_dump($jsonout);
?>

-- or async (new) --

<?php
require_once('connection.inc');
$jsonin = '{"pgm":[{"name":"HELLOSRV", "lib":"DB2JSON", "func":"HELLO"},
        {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
       ]}';
$conn1 = db2_connect($database, $user, $password);
echo getmypid() . " about to call db2_toolkit_json_async ...\n";
$stmt = db2_toolkit_json_async( $conn1, $jsonin );
do {
  echo getmypid() . " read paper, walk dog, get rss feeds, play stock market\n";
} while (!db2_toolkit_ready( $stmt ));
$jsonout = db2_toolkit_json_reap( $stmt );
var_dump($jsonout);
?>

the talk ...

You see, I suggest we need to stop thinking like previous-century ODBCosauruses/CLIosauruses and let the new db2sock driver do all the 'async' (thread) and callback work. Aka, work with this db2sock Open Source project and save huge time and effort (and run securely with locks).

Remote? No, the LUW driver will not implement SQL400JsonAsync. However, db2json.db2procj/r is shipped with db2sock, so we will still allow/enable the alternative remote call for the ODBCosaurus/CLIosaurus (was that a meteor I just saw?). Aka, a flexible remote alternative, which is exactly the reason we recommend a design separation of toolkit classes (call pgm, cmd, qsh, etc.) and transport classes (db2sock, db2procj, REST, etc.).

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@rangercairns I'm not familiar enough with the C code to totally rewrite the db2a driver. Is the source from litmis/nodejs-idb-connector?

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Abstract json "transport" away from the "toolkit" call (different classes)

Guys, you may already be doing this (I have not read the code), but an unsolicited suggestion ...

I was speaking with Alan Seiden today (php toolkit). One best-practice suggestion is to abstract the json 'transport' (http, async db2 SQL400JsonAsync, stored proc db2procj, ssl, web socket, etc.) away from the 'toolkit' proper (call pgm, cmd, qsh, etc.). That is, maintain a very loose class coupling between the "new toolkit class" and the "new transport class". Both Alan and I regret that the php toolkit was DB2-centric around its 'connection', which led to unnatural "connection" interfaces to support new transports like REST (http).

BTW -- For the node toolkit, you really need a change to the db2a driver to call SQL400JsonAsync(callback) to fix many async problems in the db2 driver/toolkit. This may be a total re-write of db2 node (c code). As a 'temporary' measure, you can use a non-async API like db2json.db2procj (or db2procjr) for a 'test' toolkit. I believe the db2sock toolkit proper json should stay the same, but new things added per demand (... I hope).

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@fresche_sscott I took a look at your implementation, and it looks like a good start. Though my back-end isn't using a REST API, so it would add an unnecessary HTTP call if I used it. I'm not too familiar with DB2, unfortunately -- what exactly is the db2json.db2 file to which the remote URL is pointing?

kadler commented 6 years ago

Original comment by Steven Scott (Bitbucket: fresche_sscott, GitHub: Unknown).


Hey guys,

Just wanted to give a heads up that I've published the current work in progress code for the toolkit I've been working on. You can find it at this repo: https://bitbucket.org/freschesolutions/db2sock-itoolkit/overview

Feedback and pull requests are welcome. Development may be a bit on the slow side due to the limited time I can commit to it, but I will be working on improving it and supporting more things.

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@fresche_sscott I'd be happy to offer feedback when it's posted. Looking forward to it!

kadler commented 6 years ago

Original comment by Steven Scott (Bitbucket: fresche_sscott, GitHub: Unknown).


I'm hoping to have something published by the end of next week. The interface I've initially come up with is along these lines (from one of my tests, written in TypeScript):

#!typescript

        const tk = new Toolkit('http://x.x.x.x/db2json.db2');

        let pgm = tk.program('HELLO', 'DB2JSON');
        pgm.addParam('char', '128a', 'Hi there');

        tk.add(pgm);
        let result = tk.run((result: any) => {
            let data = result['script'][0]['pgm'][2];
            expect(JSON.stringify(data)).to.equal('{"char":"Hello World"}');
            done();
        });

This is just the initial implementation, and once it's posted, feedback and contributions will definitely be welcome. My initial thought was to avoid just passing in plain objects and to aim for more of a functional approach to adding parameters. At the moment I don't have any response processing being done, which is why the result value is being accessed as a plain object.

kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


@rangercairns Thanks -- I think your comments helped. I've run a couple of preliminary tests with Option (2) and the buffer size will be a non-issue.