kadler / db2sock-test


Get Involved #2

Closed kadler closed 6 years ago

kadler commented 7 years ago

Original report by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Would like to contribute

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Thanks for your help supporting this project.

I am closing this issue because topics have become so varied that we will not be able to find information searching issues. That is, I suspect people may want to look through issue titles and have a look at resolutions and chat.

Please open new topical issues from now forward. Thanks for all the help to date.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


status

It is becoming apparent that not 'everything json' will be done by Holiday's end. So, you are welcome to use db2sock support "as is", post issues, etc. However, look to next year to 'see' more key functions. In fact, the request list grows as people in and out of the lab test the potential of db2sock Open Source.

Happy Holidays!

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


"Mulligan" -- (in informal golf) an extra stroke allowed after a poor shot, not counted on the scorecard.

Anyway ...

"Mulligan" ... pagination (next 80 records)

Most obvious answer: pagination (next 80 records). Simply use the natural 'path search' of web servers.

#!bash
===
./db2jsonngix -start -connect /tmp/alice.sock ./db2jsonfcgi
url=http://myibmi/goask.alice
(db2 REST traffic to /tmp/alice.sock)
===
location ~ \.alice$ {
    fastcgi_pass   unix:/tmp/alice.sock;
    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    include        fastcgi_params;
}

===
./db2jsonngix -start -connect /tmp/bert.sock ./db2jsonfcgi
url=http://myibmi/goask.bert
(db2 REST traffic to /tmp/bert.sock)
===
location ~ \.bert$ {
    fastcgi_pass   unix:/tmp/bert.sock;
    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    include        fastcgi_params;
}

===
./db2jsonngix -start -connect /tmp/ernie.sock ./db2jsonfcgi
url=http://myibmi/goask.ernie
(db2 REST traffic to /tmp/ernie.sock)
===
location ~ \.ernie$ {
    fastcgi_pass   unix:/tmp/ernie.sock;
    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    include        fastcgi_params;
}

In the nginx.conf above we have three different db2 'daemons'. The url request suffix (/goask.alice, /goask.bert, /goask.ernie) directs db2 rest json connect(null, null, null, 'qual') + 'stmt' to the correct waiting result set.

Missing today ... the 'stmt' handle "will be" in the return json, so the client has all the information for the hash connect('db','uid','pwd','qual') + 'stmt'. Of course, it is more likely to be connect(null, null, null, 'qual') + 'stmt' to the correct waiting result set (next 80 records).

Mulligan. Works. No mess.
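
A hypothetical sketch of such a 'return to result set' request. Nothing below is implemented yet; the 'handle' attribute and the overall shape are illustrative only, assuming the returned 'stmt' handle comes back as a number:

#!json

{"connect":[{"qual":"alice"}],
 "query":[{"handle":3},
          {"fetch":[{"rec":80}]}
         ]}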

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


... sleep better with daemons

This is all theory ... so ... (deleted see Mulligan).

FYI -- If you create a UNIX socket of type SOCK_STREAM (listenSock = socket(AF_UNIX, SOCK_STREAM, 0)), and accept connections on it, then each time you accept a connection, you get a new file descriptor (as the return value of the accept system call). This file descriptor reads data from and writes data to a file descriptor in the client process. Thus it works just like a TCP/IP connection.
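
A minimal sketch of that accept loop, using PHP's sockets extension (the /tmp/demo.sock path and echo behavior are just for illustration):

#!php

<?php
// create a UNIX domain socket of type SOCK_STREAM, as described above
$listenSock = socket_create(AF_UNIX, SOCK_STREAM, 0);
socket_bind($listenSock, '/tmp/demo.sock');
socket_listen($listenSock);
while ($conn = socket_accept($listenSock)) {
    // each accept returns a brand new descriptor, just like a TCP/IP connection
    $data = socket_read($conn, 1024);
    socket_write($conn, "echo: " . $data);
    socket_close($conn);
}
?>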

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


In general I am in favor of not including authentication in the tool (db2sock) and instead let the layer above (Nginx, Apache, Node.js, etc) accomplish authentication ... documentation

Ok.

The connection process should handle the authority etc. all of which can be stored in the connection information?

Yes. In fact, currently, for db2sock SQL400Connect (new aggregate), the persistent connection is key="db2"+db+uid+pwd+qual. Therefore, to return to the same 'connection', the key must match (with the same stmt handle ... of course). The theory is that you know your uid/password, so the other information is just which database and qualified key connection.

Simple. So it would seem.

However, json, dear json, especially REST json, default connect(null, null, null) seems practical over passing any password around (+/- ssl). Witness, all db2sock/tests/json tests simply use the implicit default connect(null, null, null). The implication, of course, is that any 'right to use' REST interface must be stopped/allowed at the web server front door, aka, Nginx, Apache (Basic Auth, etc.).

To the point, the db2sock ramification of connect(null,null,null) + qual is simply key="qual". Therefore, web server 'authentication' had best be working to hold back hordes of hackers running db2 queries. Mmm, well, I suppose you either trust web server authentication or not.
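
A minimal nginx sketch of that front-door authentication, reusing the location block style from this thread (the htpasswd path is hypothetical):

#!bash

location ~ \.db2$ {
    # Basic Auth at the web server front door, before any db2 REST work
    auth_basic           "db2 REST";
    auth_basic_user_file /path/to/htpasswd;
    fastcgi_pass         127.0.0.1:9002;
    fastcgi_param        SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    include        fastcgi_params;
}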

... new key attached for the next pre-fetch result set?

I suspect not. That is, key="db2"+db+uid+pwd+qual along with the stmt handle would return you to the correct waiting result set (next 80 records). There may be an additional + daemon_name to the key depending on the implementation of "persistent private connection" ... but ... too early to understand.

single db2sock 'web interface' instance running that all clients communicate with, or will each app have their own 'web interface' that listens on ...

The previous 'key' discussion is the root technology of the "be there daemons" related question (above). There are factors no matter which direction.

#!bash

http request json -> (nginx/Apache fastcgi) ->(*)

== fastcgi one db2sock all db2 web work (easy memory based) ==
(1)->key="_db2_"+db+uid+pwd+qual->stmt(s) (one  db2sock)

== fastcgi db2sock manager(s) route work many daemons (IPC based) ==
(n)->key="_db2_"+daemon+db+uid+pwd+qual->stmt(s)
-> daemon(1) db2sock->"_db2_"+db+uid+pwd+qual->stmt(s)
-> daemon(2) db2sock->"_db2_"+db+uid+pwd+qual->stmt(s)
:
->daemon(n) db2sock->"_db2_"+db+uid+pwd+qual->stmt(s)

Note: 
32,000 conn + stmts 'resources' allowed any given daemon
(db2 limit 32K 'handles' per job/process ... not due to db2sock)

Answer: Frankly, risking the great peril of an 'all answers will be revealed' narrative, I just don't know yet. I suspect your own thoughts are as good as mine.

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


I agree, there are lots of questions to be answered as to the best way to handle the aggregation.

Having a persistent connection should allow some management, as we will know the connection id, and maybe we have a key process that allows the data to be pre-fetched and delivered to the user on key presentation. Once delivered, it has a new key attached for the next pre-fetch result set? The key and data have to be tied to the connection id so if the connection is dropped all of the garbage is collected and cleaned up. The key will also be connection relevant, so a request for the same data over a different connection should be impossible?? The connection process should handle the authority etc. all of which can be stored in the connection information? I am with leaving the authorization to the web server side of the fence; if it decides you have authority, why should a lower process be required to re-interpret the authority.

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


Therein, db2sock can simply provide a 'keyed' means to return to the waiting result set for the next 80 records. Aka, pagination, not security.

I am not sure. What do you think?

Is there meant to be a single db2sock 'web interface' instance running that all clients communicate with, or will each app have their own 'web interface' that listens on a predefined specific-to-application port. I ask because I am curious how authorization will work (vs authentication). Am I correct to assume that the user that starts the db2sock 'web interface' is the user that will run the SQL statements; and that user will obviously need authority to the DB2 tables in the SQL statement?

My question might be a moot point in that the situation would cause one approach over the other (i.e. if you want to run db2sock under different profiles then you need to start multiple db2sock 'web interfaces').

In general I am in favor of not including authentication in the tool (db2sock) and instead let the layer above (Nginx, Apache, Node.js, etc) accomplish authentication. With that said, it would be good to document sample authentication mechanisms (i.e. Basic Auth) so people can be up and running quickly. The community can take care of creating that documentation.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


This immediately brings up the question of how the data would be handled should an aggregate call be made? Do we get all of the data in a single burst or is your "fetch" statement going to be the key to the aggregation? I have done some performance balancing previously by only fetching a set number of records and then filling in additional ones as the page down key is pressed (bit like UIM Manager).

Yes. Agree. I too have pondered 'pagination' of records by fetch. A balancing act between 'stateless' json requests vs. 'state full' json requests. Aka, the idea of collecting the next 80 records of a waiting result set.

I have some ideas for db2sock, mostly around 'persistent private connection'. Aka, have 'key', will find your connection and waiting result set. However, I do not have the 'IPC' code needed rendered into db2sock (yet). Of course, the security implications of a 'key' picking up where we left off ... mmm ... maybe a job for Kerberos (web server admin, not db2sock???). Not fully formed yet.

If you give it some thought ... I would like to understand if the 'web interface' (socket) leaves the authentication/security job to the web server (Basic Auth, Kerberos, etc.). Therein, db2sock can simply provide a 'keyed' means to return to the waiting result set for the next 80 records. Aka, pagination, not security.

I am not sure. What do you think?

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


No. I suffer from geek 'think, code, test, then document (I guess)'.

I don't want to slow down the good progress happening by asking for docs. I was mostly wanting to gain perspective as to the latest. I will stick to the commit history for now.

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Yes, I will be on board to test as required. A critical eye is a given. I will also need to go through the libdb400 source and get familiar, so the critical eye has some insight into the process.

I am now looking at the php build requirements for PASE so I can add php-fpm to nginx on the IBM i. Exciting times and lots to learn. I will probably know a lot more about the PASE environment and how it all fits together before I am done :-)

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Do you have the latest syntax documented anywhere? (db2sock json manual)

No. I suffer from geek 'think, code, test, then document (I guess)'. Many json tests exist (db2sock/tests/json). Ultimately, working json tests decide the manual content. Mmm ... maybe I could generate a user manual (says a geek).

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


If the question is 'should we offer ODBCosaurus rex for the sake of continuity?' My response is the dinosaurs became extinct for very good reason... I am with you on the need to move on.

Great. 1 + 2 = 3 vs. ODBCosaurus rex. I like our odds as 'meteor men'.

"Jt is ywrite that euery thing Hymself sheweth in the tastyng" (14th century proverb).

I suggest we finish a few aggregate extensions to the DB2 CLI architecture. In this case json based; test with a modified ibm_db2, and see if we actually like the result (proof of the pudding). I assume you will test with a critical eye (it is needed).

Thanks.

kadler commented 6 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


The following simple aggregate json API reduces 75 'traditional' calls to 1 call across (new db2sock API). I repeat '1 call', not 75 calls across REST DB2 'new' API.

{"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"}, {"parm":[{"value":"Jones"},{"value":"Vine"}]}, {"fetch":[{"rec":"all"}]} ]}

Do you have the latest syntax documented anywhere? I am able to glean a lot from the unit tests, though one I was particularly curious about was SQL stored procedures.

Also, in theory, the community, if they so chose, could create a Rails, Node.js, PHP, Python, etc. ORM over this json interface. It would take some ODBCosaurus-abstraction-layer massaging, but I believe it could be done.

I am very curious what performance will look like. Specifically, what if it was good enough that this same interface was used for both 1-tier and 2-tier apps? As long as this is developed in layers we will be able to switch out transport implementations as technology progresses (i.e. today WebSockets, tomorrow http/2, Thurs??). Said another way, both tier approaches should be able to work with a single interface.

Tony, I have a great deal of fun reading your responses...

Same. :-)

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony, another thought as I ponder the points you raised.

With the code you posted it needs to fetch each record in isolation, so the call to the server is made each time the db2_fetch_assoc() is called (I can only imagine the code required to manage the cursor etc in the file). This immediately brings up the question of how the data would be handled should an aggregate call be made? Do we get all of the data in a single burst or is your "fetch" statement going to be the key to the aggregation? I have done some performance balancing previously by only fetching a set number of records and then filling in additional ones as the page down key is pressed (bit like UIM Manager).

Chris...

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony, I have a great deal of fun reading your responses... My view is we need to look at what modern programmers need. I fully support the aggregate route. One of the issues we need to address is the performance of web based interactions to IBM i; they are only going to become more important as we move forward with the modernization of IBM i applications. Changing from 75 requests to 1 must be a good thing, even if it does upset the old one-step-at-a-time CLI process.

If the question is 'should we offer ODBCosaurus rex for the sake of continuity?' My response is the dinosaurs became extinct for very good reason... I am with you on the need to move on.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


I am definitely on the json over rest to ibm_db2 scripts path

Allow me to play unfair with so-called REST full db2 cli ambitions. Be assured, it is not personal, simply full db2 cli 'used car' truth in lending.

Ok, let's talk about the hypothetical performance of a REST full db2 cli interface.

tracing db2 cli ... “How much wood could a woodchuck chuck ... ” By Mother Goose.

The following program is typical ibm_db2 usage. Simple query, fetch results.

#!php

zzsimple.php:

<?php

require_once('connection.inc');

$conn = db2_connect($db, $username, $password);

$result = db2_exec($conn, "select * from staff");

while ($row = db2_fetch_assoc($result)) {
    printf("%5d  ", $row['ID']);
    printf("%-10s ", $row['NAME']);
    printf("%5d ", $row['DEPT']);
    printf("%-7s ", $row['JOB']);
    printf("%5d ", $row['YEARS']);
    printf("%15s ", $row['SALARY']);
    printf("%10s ", $row['COMM']);
    print "\n";
}

?>

output:
bash-4.3$ php zzsimple.php 
   10  Sanders       20 Mgr         7        18357.50            
   20  Pernal        20 Sales       8        18171.25     612.45 
:
  340  Edwards       84 Sales       7        17844.00    1285.00 
  350  Gafney        84 Clerk       5        13030.50     188.00 
(35 records returned)

Using the TRACE=on ability of the new libdb400, we can count the number of calls across REST full db2 CLI by counting the SQLxxx APIs traced. Aka, transition calls over the full DB2 CLI API.

#!bash
bash-4.3$ export TRACE=on
bash-4.3$ php zzsimple.php                          
   10  Sanders       20 Mgr         7        18357.50            
      :
  350  Gafney        84 Clerk       5        13030.50     188.00 
bash-4.3$ unset TRACE      
bash-4.3$ grep tbeg /tmp/libdb400_trace_9080643    
SQLAllocHandle.9080643.1513093563.1.tbeg +++success+++
:                        
bash-4.3$ grep -c tbeg /tmp/libdb400_trace_9080643 
75

(75 transitions script to db2 using SQLxxx APIs)

We see (grep -c tbeg above) 75 CLI transitions/calls from script to DB2 using fine-grained DB2 CLI APIs (SQLxxx tbeg). This means 75 REST calls across our wild, wild west internet to reach our server for only 35 records. Uf Da!!!

Now, to be unfair, extrapolate this simple task using full REST DB2 CLI to typical web usage of 10-1000 hits/second; at even 100 hits/second that is 7,500 SQLxxx round trips per second. Well, astronomical traffic over full DB2 CLI is perhaps an understatement. Performance by full DB2 CLI REST may simply break the backs of engineers attempting to build faster CPUs and wider bandwidth nets (bloody programmers).

Yes. I told you. I consider the results to be eye opening (if unfair).

better using DB2 aggregate APIs ... especially json REST DB2

The following simple aggregate json API reduces 75 'traditional' calls to 1 call across (new db2sock API). I repeat '1 call', not 75 calls across REST DB2 'new' API.

#!json

{"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
        {"parm":[{"value":"Jones"},{"value":"Vine"}]},
        {"fetch":[{"rec":"all"}]}
       ]}

CLIosaurus rex speaks ...

Of course, the ramifications of moving beloved existing interfaces like ibm_db2 to aggregate json APIs are unsettling (see CLIosaurus rex below). Worse, any change at the driver level (libdb400.a) profoundly impacts wonderful abstractions like Ruby Rails, ZF2 Database, etc. My view: ALL abstractions like Rails have been medieval quests to find the best wallpaper to top fine-grain ODBC APIs. Aka, surprising to me, nobody is willing to stand up and say the ODBC/CLI emperor is not wearing clothes.

However, any movement in evolution comes with ramifications. Example: the whole idea of 'check SQL error' becomes more a game of 'trust good 90% of the time', where you hunt for the error in a pile of return codes (should it ever fail). The fundamental premise of coding to 'failure' in DB2 CLI is challenged. Aka, instead, design accepting that real world production code rarely fails.

More heresy: only during development is intensive error-code checking of every SQL statement of any value. You may take some comfort: the idea of 'aggregating' errors is not actually new. In fact, the relatively 'low level' ibm_db2 driver already makes many SQL calls behind the curtain, only offering up enough errors to keep a developer happy (art before science).
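
A minimal sketch of that 'trust, then hunt' style, assuming the aggregate response carries an overall ok flag like the {"ok":false,"reason":"empty"} replies seen elsewhere in this thread:

#!php

<?php
// hypothetical: check one aggregate 'ok' flag instead of 75 CLI return codes
$ret = json_decode($response, true);
if ($ret === null || (isset($ret['ok']) && $ret['ok'] === false)) {
    // only on failure do we hunt through the pile of returned codes
    var_dump($ret);
}
?>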

Yes, yes, what of our children (baby rails, baby ZF2 database)?

BTW -- Fairly, I am not exactly sure what happens to baby rails or ZF2.

ODBCosaurus rex (CLIosaurus rex) ... a parting thought

We mark 17 years into the 21st century, and yet ODBCosaurus rex is still driving our thinking toward full REST DB2 CLI. ODBC/CLIosaurus rex is not evolving, missing the overdue great meteor impact in the minds of script writers. The one simple truth: 'CLI architecture does not fit the scripting mission'.

the big question ...

Am I changing your thinking on full db2 cli rest (ibm_db2 'as is')??? Do you want to go aggregate for 1 call (better than ibm_db2), instead of bowing to CLIosaurus rex demanding 75 calls (ibm_db2 'as is')?

BTW -- Again, I apologise. In my mind our REST DB2 CLI chat is analogous to the movie 'The Matrix', where I am asking you to choose the red pill (better than ibm_db2) or the blue pill (ibm_db2 'as is'). I am indeed being unfair, but only for kind purpose.

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony, yes I understand the issues and your analogy of a jet plane in a stadium. I also understand the grief which will inevitably come from updating the DB2 API set to encompass JSON support and async etc., but they are important for any integrated solution in today's IoT world. Will that change? I have no clue, but for me it's something I am facing today and I would like to get a solution.

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Hmmmm, lots to think about here... I am definitely on the json over rest to ibm_db2 scripts path. But that is only half the battle, as I would like to have additional command/program call capabilities etc. once the db2 stuff is done. The work you have done with db2sock should be able to be extended to deliver some of that functionality.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


You can vote (of course) ...

Do you understand the jet plane inside a stadium problem for DB2 CLI/ODBC json? Aka, understand why DB2 CLI/ODBC APIs are too small in mission, and never will be right for async requests and/or json rest requests??

So, we feel 'right sizing' DB2 aggregate APIs is needed to do a good job with DB2 json and/or async json 'drivers'. However, grief is allowed. That is, we are giving up on a simple REST DB2 CLI driver that runs 'exactly' like my ibm_db2 script today.

Yep. It is a big deal changing DB2 APIs to fit the real need of json. Exactly why I put this off for a while to test the simpler mechanics of the json toolkit (much less controversial).

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


db2 json interface ... not your father's CLI ... probably ... I think

So, bluntly, DB2 CLI over rest json is like running a jet plane inside a football stadium. The distance around the stadium is too small for the jet to work well. The same is true for the fine-grained DB2 CLI APIs. That is, in a json DB2 API, we really want bigger 'aggregate' actions, like connect, query, fetch in one json call (below). With the 'better API' below, our REST DB2 json jet plane flies to Europe and back with records in tow.

#!json

{"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
        {"parm":[{"value":"Jones"},{"value":"Vine"}]},
        {"fetch":[{"rec":"all"}]}
       ]}

In fact, even in binary call CLI API mode (ibm_db or ibm_db2), async interfaces absolutely stink because of fine-grained DB2 calls. Witness the unnatural acts taken in node ibm_db to try to force DB2 CLI to co-operate with async DB2 requests. BTW -- MySql is not much better if you check out php async; aka, they gave up the ODBC ghost and simply async'd the query API (close eyes, go home, hide at work).

Anyway, picking a better 'aggregate' DB2 CLI API is really what is needed to have good performing rest DB2 json requests. In fact, this is exactly what my Chief OS architect (Jesse G.) wants me to do, so we can build a reasonable nodejs json-based driver.

cheap ... mmm ... cheap rest ibm_db2 driver you say??

Yes. I understand the desire to have an off-box/remote simple solution: a db2 full CLI rest json driver (nginx+db2jsonngix). Forget a laptop driver at all, just use json over rest to IBM i to run your ibm_db2 scripts. Yep. I know how to do this. In fact, I talked to Jesse G. about this exactly. The ultimate cheap solution with no installation of anything beyond some php (or node, or python, etc.).

However, the aggregate driver rest DB2 cli++ was priority one.

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony

I have run the run_cgi_basic_auth.php script and all looks good. :-) Performance was very good as far as I can tell, but I won't really know until I can get the query stuff running (I need the cart). Sorry, I am just hacking my way around this until I know a bit more. I would like to help if possible, so let me know where I can assist and I will get something in my plans.

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


OK I think I have the solution based on your input above.

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony

OK, I ran the tests on the Linux Server and all failed (db2_connect() not found). This is probably because running php as a CLI on the Linux server is expecting the local server to run the requests? I need to force the tests to run on the Linux Server and have it know that the request is to be passed to the IBM i service? From the browser that is easy, as NGINX is configured to send all requests to the IBM i; php CLI on the other hand knows nothing of NGINX or IBM i. If I put the test subdirectory into the NGINX root and run as a browser request through the NGINX port, that should work? Am I on the right path?

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


I am interested in the db2 query stuff as much as anything as it is going to help with some customer work I am doing. Eventually would like to look at other IBM i objects and program calls etc.

We have the horse before the cart in your case. That is, we have the toolkit json interface, but do not have the total db2 json interface (only the one small test j0601_query_qcustcdt.json mentioned previously).

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


For example:

On my laptop, I run the tests.

#!bash

my laptop:
> env | grep PHP
PHP_DB=UT28P63
PHP_URL=http://ut28p63/frog.db2
PHP_PWD=NICE2DB2
PHP_UID=DB2

> cd db2sock/tests/php
== hello test ==
> php test0004_hello_pgm_cgi_basic_auth.php
== run all json tests (requires IBM i ILE LIB DB2JSON for RPG tests) ==
> php run_cgi_basic_auth.php

my ibm i:
nginx.conf
location ~ \.db2$ {
    fastcgi_pass   127.0.0.1:9002;
    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    include        fastcgi_params;
}

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


OK I see your point. I will copy the test structure to the Linux Server and run directly from there.

I am interested in the db2 query stuff as much as anything as it is going to help with some customer work I am doing. Eventually would like to look at other IBM i objects and program calls etc.

Chris...

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


I don't even have the db2sock directory anywhere? I just downloaded the latest zip file you mentioned, copied the .a files into the /QOpenSys/usr/lib directory and went from there?

Yes. This is the minimum set of files. Aka, you only need the 'driver binaries' to run on any IBM i (/QOpenSys/usr/lib).

I can see within the zip file the tests directory under the libdb400-1.1.0-sg5 directory. Can you clarify what I am missing?

The optional tests can be loaded anywhere. Load them on your Linux laptop (if you want). You are already talking to IBM i nginx+db2jsonngix from Linux, so just run the php tests on your laptop (you will need to maintain the tests directory structure).

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony

I think I have missed something? The export request seems to be pointing to the http root, then a file called db2json.db2? I don't even have the db2sock directory anywhere? I just downloaded the latest zip file you mentioned, copied the .a files into the /QOpenSys/usr/lib directory and went from there? I can see within the zip file the tests directory under the libdb400-1.1.0-sg5 directory. Can you clarify what I am missing?

Chris...

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


I would like to know what services are available? Are they exactly the same as the old db2400 (bb)? Do we have any documentation available I can run through?

So, clarity.

toolkit json -- yes -- We only have toolkit json (so far). That is, it is ok to run all json tests using db2sock/tests/php/run_cgi_basic_auth.php via nginx+db2jsonngix.

db2 json -- no -- A full db2 json driver is not available (yet). The next post will discuss the 'full db2 json' driver. There is one test using the start of the db2 json driver, j0601_query_qcustcdt.json. This only gives a rough idea of db2 json.

#!json

cat tests/json/j0601_query_qcustcdt.json 
{"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
        {"parm":[{"value":"Jones"},{"value":"Vine"}]},
        {"fetch":[{"rec":"all"}]}
       ]}

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Nice! I configured the Linux NGINX server to call the IBM i db2jsonngix service and it works exactly the same :-) I did add a new service to listen on a different port (9003), as the other only listened on 127.0.0.1:9002, so now it has 2 services running: one for internal and one for external. I would like to know what services are available? Are they exactly the same as the old db2400 (bb)? Do we have any documentation available I can run through? Performance so far seems pretty good. I am impressed...

Chris...

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


You can run the entire php suite of tests over REST to nginx+db2jsonngix. Aka, REST nginx+db2jsonngix replaces the ibm_db2/odbc driver for the toolkit tests.

#!bash

> export PHP_URL=http://myibmi/db2/db2json.pgm  (ILE-CGI - works partial)
> export PHP_URL=http://myibmi/db2json.db2  (fastcgi- apache or nginx - works good)
--- optional ---
> export PHP_DB=MYIBMI (*LOCAL)
> export PHP_UID=MYUID
> export PHP_PWD=MYPWD

Run all tests_json ...
> php run.php

Run driver tests_json ...
> cd db2sock/tests/php
> php run_ibm_db2_set.php
> php run_ibm_db2_io.php
> php run_odbc.php
> php run_cgi_basic_auth.php <-- run this one

Note: Basic auth is ignored by nginx (do not have to change).

One at a time ...
> php test0000_do_thing32

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Just to add to the above, I simply ran a test for http://sas4:1984/test.db2 and the result back was {"ok":false,"reason":"empty"}. So it looks like it's working, but I obviously need to understand a little more to progress the testing :-) Yes, I did change the port for NGINX and yes, it seems to be bloody fast!
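
The "empty" reason likely just means no json payload was posted; a hypothetical follow-up, POSTing the HELLO payload from tests/php (assuming the DB2JSON library is installed on the IBM i):

#!bash

curl -X POST \
     --data '{"pgm":[{"name":"HELLO","lib":"DB2JSON"},{"s":{"name":"char","type":"128a","value":"Hi there"}}]}' \
     http://sas4:1984/test.db2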

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Hi Tony

OK, I have set up the environment on the IBM i with nginx running against port 80. I have reconfigured so that .db2 requests run to the db2jsonngix service running against 127.0.0.1:9002 as per your instructions above. I am now confused as to the tests you ran, as you have a PHP script which should tie up with the test scripts? Can you explain a little more about how I should set up a test for the db2 extension?

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


summary

The information above will show you how to set up nginx as localhost:port or a unix domain socket. I believe php fpm supports both options, so we have proved nginx<>fastcgi is working.

Second, we have demonstrated toolkit json REST to multiple configurations including ILE-CGI (httpd.conf), fastcgi (fastcgi.conf), and nginx (nginx.conf). I ran all versions from my Linux laptop to IBM i REST using the downloaded tests_php. All worked great, except ILE-CGI (httpd.conf) had a few CCSID(ish) issues. Therefore I recommend either Apache<>fastcgi or nginx<>fastcgi, as both performed reasonably with json REST.

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


-> nginx fix namelen-1 unix domain socket

#!bash

nginx.conf

# pass db2 json to FastCGI server listening on 127.0.0.1:9002
# ./db2jsonngix -start -connect 127.0.0.1:9002 ./db2jsonfcgi
#
location ~ \.db2$ {
    fastcgi_pass   127.0.0.1:9002;
    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    include        fastcgi_params;
}
# pass db3 json to FastCGI server listening on db3.sock
# ./db2jsonngix -start -connect /tmp/db3.sock ./db2jsonfcgi
# Note: Using chroot /QOpenSys/db2sock for development db2sock,
#       but running nginx as root /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx.
location ~ \.db3$ {
    fastcgi_pass   unix:/QOpenSys/db2sock/tmp/db3.sock;
    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    include        fastcgi_params;
}

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony

I am in the UK at the moment so I will take a look at this when I get back.

Chris..

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


nginx add config

1.1.0-sg2 - Added nginx configuration for REST json to the new driver and toolkit. Very simple to use; see source/fastcgi README.md.

#!bash

http (ut28p63 == your machine)
http://ut28p63/db2json.db2

===
start db2jsonfcgi (using db2jsonngix)
===
bash-4.3$ cd /QOpenSys/usr/lib
bash-4.3$ ./db2jsonngix -start -connect 127.0.0.1:9002 ./db2jsonfcgi

===
start nginx
===
/QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -c /home/ADC/nginx/conf/nginx.conf

===
config (nginx.conf)
===
bash-4.3$ pwd
/home/ADC/nginx/conf
bash-4.3$ ls
fastcgi_params - /QOpenSys/QIBM/ProdData/OPS/tools/conf
mime.types - /QOpenSys/QIBM/ProdData/OPS/tools/conf
nginx.conf - (see below)

bash-4.3$ cat nginx.conf 

:
# pass db2 json to FastCGI server listening on 127.0.0.1:9002
# ./db2jsonngix -start -connect 127.0.0.1:9002 ./db2jsonfcgi
#
 location ~ \.db2$ {
     fastcgi_pass   127.0.0.1:9002;
     fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
     include        fastcgi_params;
}
# pass db3 json to FastCGI server listening on /home/ADC/nginx/logs/db3.soc
# ./db2jsonngix -start -connect /tmp/db3.sock ./db2jsonfcgi
# Note: Bug db2jsonngix db3.sock (db2jsonngix), matches db3.soc (nginx).
#       Also, using chroot /QOpenSys/db2sock for development db2sock,
#       but running nginx as root /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx.
 location ~ \.db3$ {
     fastcgi_pass   unix:/QOpenSys/db2sock/tmp/db3.soc;
     fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
     include        fastcgi_params;
}
:

The same php test works for ILE CGI, Apache fastcgi, or nginx fastcgi (see source/tests_php). Note: Basic auth is ignored by nginx. I did not bother to read up on this option for nginx.

#!php

<?php
// export PHP_URL=http://ut28p63/db2/db2json.pgm  (ILE-CGI - works partial)
// export PHP_URL=http://ut28p63/db2json.db2  (fastcgi-PASE -- unix socket)
// export PHP_URL=http://ut28p63/db2json.db2  (nginx -- 127.0.0.1:9002)
// export PHP_URL=http://ut28p63/db2json.db3  (nginx -- unix socket)
$url        = getenv("PHP_URL"); 
// nginx ignores basic auth
$user       = getenv("PHP_UID"); // export PHP_UID=MYID
$password   = getenv("PHP_PWD"); // export PHP_PWD=MYPWD

$clob = myjson();
print("Input:\n");
var_dump($clob);
print("Output:\n");

$context  = stream_context_create(
  array('http' =>
    array(
      'method'  => 'POST',
      'header'  => "Content-type: application/x-www-form-urlencoded\r\n".
                   "Authorization: Basic " . base64_encode("$user:$password"),
      'content' => $clob
    )
  )
);
$ret = file_get_contents($url, false, $context);
var_dump($ret);

function myjson() {
$clob =
'{"pgm":[{"name":"HELLO",  "lib":"DB2JSON"},
        {"s":{"name":"char", "type":"128a", "value":"Hi there"}}
       ]}';
return $clob;
}

?>
kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Ok, I ran a quick test to check the lpp ops nginx fastcgi is working ... yes (small java test).

This java test is known as an 'external' fastcgi server test (stand alone fastcgi).

I did NOT run a unix domain socket fastcgi test (like zend server apache fastcgi php).

#!bash

http://ut28p63/fred.jt

FastCGI-HelloJava stdio

request number 1 running on host localhost
request number 1 running on host /fred.jt 

FastCGI-HelloJava stdio

request number 2 running on host localhost
request number 2 running on host /fred.jt 
#!bash

        location ~ \.jt$ {
            root           htdocs;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
            include        fastcgi_params;
        }
#!bash

bash-4.3$ /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -v                                 
nginx version: nginx/1.10.3
bash-4.3$ /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -c /home/ADC/nginx/conf/nginx.conf

bash-4.3$ ps -ef | grep nginx
     adc 7153496       1   0 12:08:19      -  0:00 /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -c /home/ADC/nginx/conf/nginx.conf 
     adc 7153497 7153496   0 12:08:19      -  0:00 /QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx -c /home/ADC/nginx/conf/nginx.conf 

htdocs/external/tests/TinyFCGI.java

#!java

// FCGIInterface comes with the FastCGI development kit java library
import com.fastcgi.FCGIInterface;

class TinyFCGI {
    public static void main(String args[]) {
        int count = 0;
        while (new FCGIInterface().FCGIaccept() >= 0) {
            count++;
            System.out.println("Content-type: text/html\n\n");
            System.out.println("<html>");
            System.out.println("<head><TITLE>FastCGI-Hello Java stdio</TITLE></head>");
            System.out.println("<body>");
            System.out.println("<H3>FastCGI-HelloJava stdio</H3>");
            System.out.println("<br>request number " + count + " running on host " + System.getProperty("SERVER_NAME"));
            System.out.println("<br>request number " + count + " running on host " + System.getProperty("SCRIPT_NAME"));
            System.out.println("</body>");
            System.out.println("</html>");
        }
    }
}
#!bash

bash-4.3$ java -DFCGI_PORT=9000 TinyFCGI &

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Here is the build on my system

nginx version: nginx/1.10.3

built by gcc 4.8.2 (GCC)

built with OpenSSL 1.0.2j 26 Sep 2016 (running with OpenSSL 1.0.2h 3 May 2016)

TLS SNI support enabled

configure arguments: --prefix=/QOpenSys/QIBM/ProdData/OPS/tools --sbin-path=/QOpenSys/QIBM/ProdData/OPS/tools/bin/nginx --with-poll_module --with-threads --with-ipv6 --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_ssl_module --with-http_auth_request_module --with-http_slice_module --with-http_secure_link_module --with-http_degradation_module --with-http_random_index_module --with-http_flv_module --with-http_mp4_module --with-debug --with-cpu-opt=ppc --with-cc-opt=-Wno-sign-compare --with-ld-opt='-Wl,-brtl -Wl,-blibpath:/QOpenSys/QIBM/ProdData/OPS/tools/lib/libgcc-4.8.2:/QOpenSys/QIBM/ProdData/OPS/tools/lib:/QOpenSys/usr/lib -L/QOpenSys/QIBM/ProdData/OPS/tools/lib/libgcc-4.8.2'

That seems to imply NO FastCGI capabilities.

Chris…

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Good catch, I would have thought it would be standard but maybe not for the LPP OPS version?

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Mmmm ... I did a quick read up on fastcgi with nginx ...

nginx fastcgi proxy

Warning ... a few unanswered questions about the LPP OPS version of nginx with respect to supporting fastcgi (any). Please feel free to experiment, but I need to chat with the builder of LPP OPS nginx (not around now).

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


As usual Tony you are 10 steps ahead of me! Not sure of the implications and how to implement at this point? But it looks interesting enough to give it a run through? I assume I could have the NGINX running on one server and the fastcgi_pass linked to any IP/port? (Still delving into the NGINX capabilities.)

Chris..

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


db2 ... nginx -> any language (php fpm) -> db2 driver (ibm_db2, etc.) -> db2sock (libdb400.a, libtk400.a, libjson400.a, etc.)

Ok, the db2 side should be trivial. That is, after you get php(fpm) running, it is just a normal db2_connect, etc. (or pdo_ibm, odbc, etc.).

rest ... nginx fastcgi db2sock (need a main)

Mmm ... almost works out of the can (db2jsonfcgi). Unfortunately we need a slightly different main zfcgi to start db2jsonngix. Aka, we need to be able to start 'standalone' on a given port (like 9000 below). Not hard to do I think, but depends on interest (yours). Do you have any interest in a json interface over rest to nginx/db2sock (toolkit)???

#!bash

location ~ \.db2$ {
    fastcgi_pass  127.0.0.1:9000;
}
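
For reference, the matching standalone start, following the start pattern used elsewhere in this thread:

#!bash

./db2jsonngix -start -connect 127.0.0.1:9000 ./db2jsonfcgi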

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony

Interesting… I think option 1 is what I need, as I will need php anyhow for other purposes. We have NGINX on IBM i but no php-fpm as far as I know, so that is where I need to start looking.

Thanks

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Looking at NGINX on IBM i and running php7.0-fpm (not available today). Would your solution tie into this cleanly?

Yes. Web server makes no difference. Language makes no difference as db2sock supports any db2 caller or rest caller (php, node, python, go, etc.).

#!bash

nginx fastcgi php (fpm)

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
    include snippets/fastcgi-php.conf;

    # With php7.0-cgi alone:
    # fastcgi_pass 127.0.0.1:9000;
    # With php7.0-fpm:
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}

Better question ... 'how many ways db2sock??'

1) db2: nginx->php(fpm) ... where the php drivers ibm_db2, pdo_ibm, odbc talk to db2sock libdb400.a. Therefore IBM db2sock DB2 (CLI) and db2sock toolkit, including the json interface, etc.

2) rest: nginx->PGM-db2jsonfcgi (pase fastcgi) ... should be able to set up db2sock fastcgi without php (pase program source db2sock/fastcgi/db2jsonfcgi). A minimal sketch of option 1 follows.
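
Option 1, for reference, is just ordinary php db2 code behind php-fpm; a minimal sketch (connection values hypothetical, table borrowed from the tests in this thread):

#!php

<?php
// option 1: nginx -> php-fpm -> ibm_db2 -> db2sock libdb400.a
$conn = db2_connect('*LOCAL', 'MYUID', 'MYPWD');
$result = db2_exec($conn, 'select * from QIWS/QCUSTCDT');
while ($row = db2_fetch_assoc($result)) {
    print $row['LSTNAM'] . "\n";
}
?>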

kadler commented 6 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony

Looking at NGINX on IBM i and running php7.0-fpm (not available today). Would your solution tie into this cleanly?

kadler commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


For those wondering ... when will db2sock SQLJson400 be available?? Well, the plan is during September (this September 2017). I actually hope to have at least the correct framework up in the repository sometime next week (litmis/db2sock).

Again, the one up in the repository is not correct, simply a toy for me to work out general ideas. Aka, do not rely on the odd json like 'dcl-s' and 'dcl-ds' ... I was only working through some of the conversion code (string to packed/zoned, etc.).

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


The json/xml parsers are completely user replaceable. To wit, there is no pre-defined format for JSON or XML; it is all up to your parser. The only parser requirement for the toolkit is parser input of key/value pairs, and parser callbacks for output of IBM i calls (cmd, pgm, etc.). Take a look at toolkit-base/README.md for details (not complete).

A default parser will be provided (not complete yet), but any toolkit parser(s) can be dynamically loaded based on env vars (README toolkit-parser-json, toolkit-parser-xml). The dynamic load occurs on first use of an SQL400Json or SQL400Xml call, which is also 'async' capable already, like all DB2 interfaces in the new driver (libdb400.a).

Again, much to do: remove parser junk in PaseTool.c (key, val only), formalise a real json default, duplicate the xmlservice format for the default xml interface, etc.

Happy Fourth of July.

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Update -- I called my first hello world PGM and hello world SRVPGM with the toolkit (libtkit400.a) on top of the db2 driver (libdb400.a). You can see how PASE and ILE work together (toolkit-base/PaseTool.c and ILE-PROC/db2proc.c). This simple pattern replaces all of XMLSERVICE (someday). There is much work left, but it already works faster than XMLSERVICE, even without tuning (suggest waiting until late summer for a perf test).

BTW -- The sample JSON is only for testing (libjson400.a). The json format is completely wrong, clunky, not pretty, etc., but I needed to check that the design parts worked together. I am targeting fall to complete.

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Hi guys, we are having json format in and out discussions here. At the moment we can't do pgm calls. I am going to try to get something running end to end soon just so you can see the major technology differences to xmlservice (2 weeks). However, I would caution json may change to better suit toolkit builders as we test (play).