IBM / db2sock-ibmi

An asynchronous PASE Db2 and IBM i integration library
MIT License

Get Involved #2

Closed kadler closed 6 years ago

kadler commented 7 years ago

Original report by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Would like to contribute

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


If I may, 'your vote' is exactly why I am doing the db2sock super driver as an Open Source project. Your vote is worth much more than 2 cents.

> 3) Various 'cache invalidate' mechanisms can range from a timestamp file, a number-of-seconds cache, a use-count cache, etc. The above example is 'database'. The 'cache' mechanism very likely can also be used for toolkit calls. Again, the user acceptance trick will be controls for 'invalidate cache scheme', 'mark not candidate', and so on ...

Yes, SQL400Json 'cache controls' are key. I am with you 100%. The 'cache' is a feature to assist people with common web tasks, not to force people to accept caching for all things. Hence #3 (above).

Optional read ... Tony "Ranger" Cairns, who?

If you look at the db2sock C code with honesty, you will see the C code is good (check out the fast trace -- near-art C code). Obviously, I do not need any help writing C code.

However, most importantly, I abhor intellectual bullies -- most precisely, any design like you mention that has 'no back door' (no SQL400Json controls, in my terminology). Again, your vote, chat, etc., should help us avoid the errors caused by 'no choice' APIs.

BTW -- my humour can be singular. 'Grasshopper', etc. I mean no offence. Also, I have tried very hard to write more text to explain what I am thinking (longer), but I may slip back to terse in a spot where you need clarification. Please be assured, questions are not stupid. In fact, questions result in the best of projects.

kadler commented 7 years ago

Original comment by Aaron Bartell (Bitbucket: aaronbartell, GitHub: aaronbartell).


> The above example is 'database'. The 'cache' mechanism very likely can also be used for toolkit calls. Again, the user acceptance trick will be controls for 'invalidate cache scheme', 'mark not candidate', and so on.

Just wanted to add my $0.02. I lived through a few years of the Hibernate (Java) ORM. It attempted to "help" in areas like lazy loading and caching, and it worked sometimes. When it didn't work, it was difficult to get it to not load from the cache (no simple back doors). My hope is we'd make it very clear and easy how to not use the cache.

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Cool. We must protect the reputation of all good vendors. Thanks for your help.

I will post an entry here when I have more of the 'semi-architecture' SQL400Json API working (JSON in/out for DB2 and the toolkit).

Optional read ...

You will find I am not fettered by the past. This applies to project design.

Radical simple design ...

I think the new SQL400Json interface should have a built-in memory cache for performance.

The argument case for cache (on-line sales) ...

The vast majority of 'high volume' web pages actually serve cached data. That is, 'high volume' on-line sales sites encourage you to scroll, click and buy crazy fast, then at check-out you get a message: 'item #3 - Pink Blender is no longer available'. Technically, we IBM i database people know this inexcusable 'last moment' offence against 'database truth' is due to caching. I would ask you to take a moment of unfettered thought about 'how to make money on the web'. I suspect you will come to the conclusion that an 'unlikely' cache miss about Pink Blenders is worth the risk of possible customer satisfaction 'fallout'. Mathematically speaking, a warehouse full of 2000 pink blenders favours a web page design where probability theory puts cached scroll, click, buy performance ahead of 'data truth'. Even more 'funny', when an unlikely cache miss event does occur, web users have been instinctively trained to re-select the order with the 'red blender' instead (Pavlov's dogs, if you will allow).

Most DB2 IBM i people have a great deal of difficulty accepting this 'web' truth. However, if you accept the argument, many, many types of cached data responses are perfectly fine.

Specific ...

1) SQL400Json will be used in the context of web workloads 95% of the time (cache is good).

2) About 80-90% of DB2 queries are read only (cache is good).

3) Various 'cache invalidate' mechanisms can range from a timestamp file, a number-of-seconds cache, a use-count cache, etc.

Implementation ...

A truly simple mechanism: save the JSON output in memory, 'keyed', and send matching query data back to the client based on the 'cache invalidate' scheme.

The above example is 'database'. The 'cache' mechanism very likely can also be used for toolkit calls. Again, the user acceptance trick will be controls for 'invalidate cache scheme', 'mark not candidate', and so on.

Ok ... just one 'radical thought' among hundreds ... welcome to my world, Chris. I look forward to your input.

kadler commented 7 years ago

Original comment by Chris Hird (Bitbucket: ChrisHird, GitHub: ChrisHird).


Tony

OK, I was just giving you my position; no offence meant. I will delete the entries with any mention of vendors. Sorry this turned out this way; it was not my intention to cause a problem or swing favor toward any vendor.

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Chris, I have to ask that you please refrain from bringing vendor products and vendor names into this discussion. Discussion along this line will result in an immediate end to our experiment.

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Well, observationally speaking, you are most comfortable with PHP. I believe we want your most comfortable expertise language. I think any version of PHP on IBM i will be just fine. Also, any web server(s) you choose are fine (the more the better). However, I need you to take the following action with your IBM i version of PHP.

Action: I need to know which version of PHP you are running on IBM i so that I may build a DB2 PECL driver to match (> php -v).

Goal: I believe much of our experiment interaction will be about performance. Great! To this end, I suggest we are best served running the highest possible speed transport. This implies a DB2 driver transport like ibm_db2, using PHP-syntax-rendered JSON over the new 'semi-architecture' SQL400Json API (new in this driver).

Personally speaking: This should be interesting. I have been working on multiple ideas for the SQL400Json API, including 'run here' inside the PHP job (screaming fast), as well as a traditional QSQSRVR job (not too bad, in Norwegian speak), REST, and, of course, async/callback (PHP poll).

When: I will work on the JSON stuff next week. I will post an append when it is ready for test. Thanks for your help.

Summary: Looking forward to hearing your observations. Who knows, maybe we can bring your PHP work back onto IBM i instead of Linux.

Optional read:

Note 1 - The 'slow REST' version of the db2 super driver libdb400.a will work with any language 'as is' (see the PHP tests in the git source). It also works with any web server via a small wrapper (see RPG CGI -- yes, ILE, but the same JSON wrapper works with any web server). In fact, you may want to try the REST interface from your Linux machine to see how it performs. Here again, your observations may prove fruitful in bettering the super driver.

Note 2 - Unfortunately, the node db2 driver has some serious issues that 'evolution' has not yet fixed. The only way it runs well today is by stalling the node interpreter via the 'Sync' interfaces. It is not really a good candidate for this experiment. Please don't ask for details; most people barely understand node, and this driver's 'shortcoming' list is very long (but we may fix a lot with this super driver).

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Practical matters ...

I have only minimal JSON code in this driver at the moment. I will be returning to this driver next week to add much more on the 'toolkit' JSON side (you volunteered). I will post an append to this issue to notify you when the new JSON is ready for a look (*).

(*) Of course, you are welcome to try things out at any time. I do not want to waste your time when much is not yet available in the driver.

Thanks again.

kadler commented 7 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Thanks Chris. We have many ambitious goals for the new PASE super driver. You will be an excellent candidate to help out. I promise we will not take up all your valuable time. We really could use another pair of eyes on the toolkit JSON functions, performance, etc.