IBM / db2sock-ibmi

An asynchronous PASE Db2 and IBM i integration library
MIT License

DB2Sock Toolkit #10

Closed: kadler closed this issue 3 years ago

kadler commented 6 years ago

Original report by Steven Scott (Bitbucket: fresche_sscott, GitHub: Unknown).


Hey guys,

I've been toying around with the start of a replacement toolkit that utilizes db2sock's REST interface. What I have so far is basic CMD, QSH and PGM calling.

I've written it in TypeScript, as a fully-JS module, no native code compilation required. I'm currently aiming for a more "functional" approach, compared to the existing toolkit's "array"-based approach to passing parameters around.

At the moment, I'm working on getting authorization to post it online under the MIT license, and I'm hoping to have it up within the next few weeks.

I wanted to open this task so that there can be public discussion, and so it's known that this is something being worked on.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Buffer Size?

This topic is the toolkit JSON interface (the db2 driver is another topic). Buffer size depends on the send/receive transport you choose to use with the JSON toolkit (REST, db2, etc.).

db2sock/toolkit/proc/crtsql.cmd

0) The base in/out size of a single "program/cmd/qsh/etc." call is 15M. However, multiple calls per JSON request are possible, so this topic is really about output size by transport (a batching sketch follows the procedure below).

#!sql

CREATE PROCEDURE DB2JSON.DB2PROC(
INOUT PGMTMPL BLOB(15M)) 
LANGUAGE C NOT DETERMINISTIC 
MODIFIES SQL DATA 
EXTERNAL NAME 'DB2JSON/DB2PROC(iCall400)' 
PARAMETER STYLE GENERAL;
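Since multiple calls can share one request, here is a hedged TypeScript sketch of batching calls while staying under the 15M template. The pgm shape mirrors examples later in this thread; MAX_TEMPLATE and the size check are illustrative only.

#!typescript

// Sketch: batch several calls into one JSON request while staying under the
// 15M BLOB template above. MAX_TEMPLATE and the request shape are illustrative.
const MAX_TEMPLATE = 15 * 1024 * 1024; // DB2PROC INOUT BLOB(15M)

const request = {
  connect: [
    { qual: "fred" },
    { pgm: [{ name: "HELLO", lib: "DB2JSON" },
            { s: { name: "char", type: "128a", value: "Hi there" } }] },
    { pgm: [{ name: "HELLO", lib: "DB2JSON" },
            { s: { name: "char", type: "128a", value: "Hi again" } }] },
  ],
};

const payload = JSON.stringify(request);
if (Buffer.byteLength(payload, "utf8") > MAX_TEMPLATE) {
  throw new Error("request exceeds the 15M in/out template");
}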

db2sock/toolkit/cgi and db2sock/toolkit/fastcgi (both Apache and nginx)

1) REST -- Output buffer size is essentially infinite using the REST interface (Apache+fastcgi+db2sock, or nginx+db2sock). That is, there is no limit on REST return bytes beyond the administrator settings of the web server (and its time limits).
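For example, here is a minimal Node sketch that POSTs a toolkit request over REST. The endpoint path is hypothetical and depends on how Apache/nginx and db2sock are configured; requires Node 18+ for global fetch.

#!typescript

// Sketch: send a toolkit JSON request over the REST transport.
// The endpoint path is hypothetical; match it to your Apache/nginx setup.
const endpoint = "http://ibmi.example.com/db2json.db2"; // hypothetical path

async function callToolkit(request: object): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // size limited only by web server settings (and time limits)
}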

db2sock/db2/PaseCliAsync.h (the CLI API to json toolkit)

2) PASE db2 (libdb400.a) -- Output buffer size is essentially infinite using SQL400Json. It is completely up to the language toolkit provider to establish "how big" or "infinite" (a Node-side sketch follows the prototypes below).

#!c

/* traditional CLI-style interface (synchronous) */

SQLRETURN SQL400Json( SQLHDBC hdbc,
                      SQLCHAR * injson,  SQLINTEGER inlen,
                      SQLCHAR * outjson, SQLINTEGER outlen );

/* NEW non-traditional async interfaces (for the likes of node) */

typedef struct SQL400JsonStruct {
  SQLRETURN sqlrc;
  SQLHDBC hdbc;
  SQLCHAR * injson;  SQLINTEGER inlen;
  SQLCHAR * outjson; SQLINTEGER outlen;
  void * callback;
} SQL400JsonStruct;

/* join async (async for languages like php, python, etc.) */

SQL400JsonStruct * SQL400JsonJoin( pthread_t tid, SQLINTEGER flag );

/* callback async (async for languages like node) */

pthread_t SQL400JsonAsync( SQLHDBC hdbc,
                           SQLCHAR * injson,  SQLINTEGER inlen,
                           SQLCHAR * outjson, SQLINTEGER outlen,
                           void * callback );
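To show how a Node toolkit might consume the callback flavor, here is a hedged TypeScript sketch; the binding module and its JS names are hypothetical (db2sock ships the C entry points above, not a JS wrapper).

#!typescript

// Hypothetical native binding over SQL400JsonAsync; names are illustrative.
interface JsonResult { sqlrc: number; outjson: string; }
interface Db2SockBinding {
  sql400JsonAsync(hdbc: number, injson: string, outlen: number,
                  callback: (result: JsonResult) => void): void;
}
declare const binding: Db2SockBinding;

function sql400Json(injson: string, outlen = 15 * 1024 * 1024): Promise<string> {
  return new Promise((resolve, reject) => {
    // hdbc == 0 selects the built-in default connection/pooling (see note below)
    binding.sql400JsonAsync(0, injson, outlen, (r) =>
      r.sqlrc === 0 ? resolve(r.outjson) : reject(new Error(`sqlrc=${r.sqlrc}`)));
  });
}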

Note (hdbc): A default connection/hdbc is provided whenever a toolkit interface does not pass an hdbc (hdbc==0). This allows connection pooling to occur 'anywhere'. Aka, use the built-in connection pooling via JSON only; most everyone is likely to use this option. Or, you may build your own C-program connection pooling, passing the hdbc connection from your own C programs. (This is all about flexibility; you are only limited by your imagination in building your toolkit.)

Stored procedure convenience for local/remote Pase or ILE (db2sock/toolkit/procj)

2.1) DB2 result set (local/remote) -- Output buffer size is essentially infinite for db2 result set returns (DB2PROCJR - normal and DB2PROCJH - hex).

2.2) DB2 i/o parm (local/remote) -- Output buffer size is 15M for db2 i/o parm returns (DB2PROCJ). A calling sketch follows the procedures below.

#!sql

CREATE PROCEDURE DB2JSON.DB2PROCJ(
INOUT JSONBUF CLOB(15M)) 
LANGUAGE C NOT DETERMINISTIC 
MODIFIES SQL DATA 
EXTERNAL NAME 'DB2JSON/DB2PROCJ(iJson400)' 
PARAMETER STYLE GENERAL;

CREATE PROCEDURE DB2JSON.DB2PROCJR(
IN JSONBUF CLOB(15M)) 
LANGUAGE RPGLE NOT DETERMINISTIC 
MODIFIES SQL DATA 
Result Sets 1 
EXTERNAL NAME 'DB2JSON/DB2PROCJR(iJson400R)' 
PARAMETER STYLE GENERAL;

CREATE PROCEDURE DB2JSON.DB2PROCJH(
IN JSONBUF CLOB(15M)) 
LANGUAGE RPGLE NOT DETERMINISTIC 
MODIFIES SQL DATA 
Result Sets 1 
EXTERNAL NAME 'DB2JSON/DB2PROCJR(iJson400H)' 
PARAMETER STYLE GENERAL;
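As a hedged sketch, calling the i/o parm flavor from Node might look like the following; the node-odbc package, the DSN, and the result handling are assumptions for illustration, not part of db2sock.

#!typescript

// Sketch: drive DB2JSON.DB2PROCJ over ODBC from Node. The odbc package,
// DSN name, and parameters handling are assumptions, not db2sock APIs.
import odbc from "odbc";

async function callDb2ProcJ(injson: string): Promise<string> {
  const connection = await odbc.connect("DSN=IBMI"); // hypothetical DSN
  try {
    // INOUT CLOB(15M): the toolkit writes its JSON reply back into the parm.
    const result = await connection.callProcedure(null, "DB2JSON", "DB2PROCJ", [injson]);
    return result.parameters[0] as string;
  } finally {
    await connection.close();
  }
}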

Anything you like ... get creative (you)

There really is no limit to buffer size. In fact, there is no limit to the transports for the JSON requests. You could choose to write an SSL socket of your own. Maybe make up a queue interface using IBM i. On and on and on ... you are only limited by your own imagination.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Connection Pooling?

Connection pooling is built into the new db2sock database driver (libdb400.a). Aka, you can 'choose' to let libdb400.a take care of connection pooling (also known as 'persistent' connections).

JSON toolkit: using a persistent connection

From a JSON interface perspective, you only need to surround your request with a 'qualified' connection. Every JSON packet/request you send should have the same 'qualified' key.

see source/toolkit/parser-json/README.md

Here is a 'persistent' db2 JSON request. Each request re-uses the connection "qual":"fred".

#!json

{"connect":[
  {"qual":"fred"},
  {"query":[{"stmt":"select * from QIWS/QCUSTCDT"},
            {"fetch":[{"rec":2}]},
            {"fetch":[{"rec":2}]}
           ]}
]}
Note: A statement handle is returned in the output of a "query". It can be passed back in a later request, e.g. {"script":[{"query":[{"handle":3}, ...

A fetch with "rec":"all" will close the statement at the last record.

{"connect":[
  {"qual":"fred"},
  {"query":[{"handle":3},
            {"fetch":[{"rec":"all"}]}
           ]}
]}

-- or close early --

A close with "handle":3 will close the statement.

{"connect":[
  {"qual":"fred"},
  {"close":[{"handle":3}]}
]}
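Tying those pieces together, here is a hedged TypeScript sketch of the flow; send() stands in for whichever transport you chose (REST, SQL400Json, etc.), and extractHandle() is a hypothetical helper, since the handle's exact location in the output is toolkit-defined.

#!typescript

// Sketch only: send() and extractHandle() are hypothetical stand-ins.
declare function send(request: object): Promise<any>;
declare function extractHandle(output: any): number;

async function pagedFetch(): Promise<void> {
  // First request: open a query under the pooled connection "fred".
  const first = await send({
    connect: [
      { qual: "fred" },
      { query: [{ stmt: "select * from QIWS/QCUSTCDT" },
                { fetch: [{ rec: 2 }] }] },
    ],
  });
  const handle = extractHandle(first); // statement handle from the "query" output

  // Later request on the same "qual": fetch the rest, closing the statement.
  await send({
    connect: [
      { qual: "fred" },
      { query: [{ handle }, { fetch: [{ rec: "all" }] }] },
    ],
  });
}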

Here is a sample 'persistent' pgm JSON request. Each request re-uses the connection "qual":"fred".

#!json

{"connect":[
  {"qual":"fred"},
  {"pgm":[{"name":"HELLO",  "lib":"DB2JSON"},
        {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
       ]}
]}

The next request to the same open connection ...

{"connect":[
  {"qual":"fred"},
  {"pgm":[{"name":"HELLO",  "lib":"DB2JSON"},
        {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
       ]}
]}

The samples above use a default persistent connection (current profile). You may 'choose' to provide a fully qualified connection to use alternative profiles (see PaseJson.c).

#!c

/* {"connect":{"db":"DB","uid":"UID","pwd":"PWD","qual":"QUAL","iso":"nc|uc|cs|rr|rs","libl":"mylib yourlib","curlib":"mylib"}}
 * Note: connection is provided toolkit services (if missing). Aka, connect(null, null, null)
 */
char * json_conn_attr [] = {"db","uid","pwd","qual","iso","libl","curlib", NULL};
int json_conn_tool [] = {TOOL400_CONN_DB,TOOL400_CONN_UID,TOOL400_CONN_PWD,TOOL400_CONN_QUAL,TOOL400_CONN_ISOLATION,TOOL400_CONN_LIBL, TOOL400_CONN_CURLIB};
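For illustration, here is a fully qualified connect element built in TypeScript; the attribute names mirror json_conn_attr above, and all profile values are placeholders.

#!typescript

// Sketch: a fully qualified connection, mirroring json_conn_attr above.
// All profile values below are placeholders.
const connectAttrs = {
  db: "*LOCAL",          // database
  uid: "MYUSER",         // alternative profile
  pwd: "MYPWD",
  qual: "fred",          // pooling key: the same qual re-uses the connection
  iso: "nc",             // isolation: nc|uc|cs|rr|rs
  libl: "MYLIB YOURLIB", // library list
  curlib: "MYLIB",       // current library
};

const request = {
  connect: [
    connectAttrs,
    { pgm: [{ name: "HELLO", lib: "DB2JSON" },
            { s: { name: "char", type: "128a", value: "Hi there" } }] },
  ],
};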
kadler commented 6 years ago

Original comment by Brian Jerome (Bitbucket: bjerome, GitHub: brianmjerome).


What I'm currently looking for is to make a simple PGM call with JSON input and receive JSON output.

Example Program Call:

#!javascript
...
let program;

...
// Node receives program name/lib/input

// Create Program object
program = new Program({  // Program class defined elsewhere
    name: programName, // These property values are assigned elsewhere
    lib: programLib, 
    data: jsonInput
});

...

db2sock.callProgram(program, (response) => {
    // Handle json output response
    ...
});

Essentially, Node will take in a program name and JSON input; running the async callProgram will then execute the program and return the response to the callback function.
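One hedged way to bridge that Program object to the pgm JSON format shown earlier in this thread (the names here are illustrative, not an existing API):

#!typescript

// Hypothetical bridge from the Program object above to the db2sock pgm
// request format shown earlier in this thread; all names are illustrative.
interface ProgramSpec {
  name: string;   // programName
  lib: string;    // programLib
  data: object[]; // parameter elements, e.g. {"s":{...}}
}

function toToolkitRequest(p: ProgramSpec, qual = "fred"): object {
  return {
    connect: [
      { qual },                                           // re-use pooled connection
      { pgm: [{ name: p.name, lib: p.lib }, ...p.data] }, // program call + parms
    ],
  };
}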

Buffer Size?

How is the I/O buffer size handled? I've seen a few tests that default to 512K. In some cases I would need 4K or 15MB, depending on output size.

Connection Pooling?

If db2sock is using a socket connection, is database pooling necessary? It'd be nice not to make a new connection every time a program is called. If no pooling is done, I can write it on the Node side.

Other Note

It'd be great, once Node v8 is supported on IBM i, to utilize async functions and write Promise-based code.
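In the meantime, here is a hedged sketch of wrapping the callback-style callProgram above in a Promise; db2sock.callProgram and Program are the hypothetical names from the example above.

#!typescript

// Sketch: Promise wrapper over the hypothetical callback-style callProgram
// from the example above, enabling async/await once Node v8 is available.
declare class Program { constructor(spec: object); }
declare const db2sock: {
  callProgram(program: Program, cb: (response: unknown) => void): void;
};

function callProgramAsync(program: Program): Promise<unknown> {
  return new Promise((resolve) => {
    db2sock.callProgram(program, (response) => resolve(response));
  });
}

// Usage once async functions are available:
// const response = await callProgramAsync(program);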