rqlite / rqlite-js

JavaScript client library for rqlite, the lightweight, distributed database built on SQLite.

use of some interesting npm modules #20


velociwabbit commented 3 years ago

This is a question, with a bit of (hopefully not too cheeky) subtext.

1) It is unclear to me why JSON.stringify is insufficient and you import qs.stringify. Does this have to do with how the data is parsed in the rqlite module?

2) It is unclear to me what Axios provides over the much cleaner node-fetch module. Is this a legacy issue or is there something specific about Axios that you require?

3) There are many, many parameters to fetching that seem to be unnecessary or rarely used. Is this just a completeness issue or is there something specific about rqlite that requires such comprehensiveness?

4) Nearly every variable in the HttpRequest module is accessed indirectly, even within the module, which makes no sense to me. Within a class, even in Java or C#, accessing variables directly is preferred over the performance hit of turning access into function calls.

For example, there is getTotalHosts() which calls (and returns) this.getHosts().length which in turn returns this.hosts array. This seems like a great deal of code for a very simple request. Is there some specific reason why it is done this way or is this code ported from some other language verbatim?
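
For illustration, the pattern I mean looks roughly like this (paraphrased from memory, not the exact library source):

class HttpRequest {
  constructor (hosts) {
    this.hosts = hosts              // plain array of host URLs
  }
  getHosts () {
    return this.hosts               // accessor around the field
  }
  getTotalHosts () {
    return this.getHosts().length   // accessor around the accessor
  }
}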

The reason why I am asking this is that Rqlite is a great idea but has implicit complexity and before I adopt it and your library I want to make sure that I understand what I am in for.

Thanks

otoolep commented 3 years ago

Thanks for the feedback. Pull requests are always welcome, and improvements to open-source code are important. Fork away, and propose changes.

justinmorant commented 3 years ago

Thanks for the feedback @velociwabbit; my answers are inline below. Feel free to submit a PR for anything you feel better meets your use case. If you have any performance or implementation concerns, RQLiteJS provides both unit and integration tests to help demonstrate any concerns or harden any changes. This is covered in the documentation.

  1. It is unclear to me why JSON.stringify is insufficient and you import qs.stringify. Does this have to do with how the data is parsed in the rqlite module?

The qs library is used for URL query string encoding/decoding, not JSON parsing. Native query string support in the NodeJS URL module was not added until later versions of NodeJS. Rather than reimplement fixes or bracketed URL query parameter parsing, a library that has been around and used in many production projects was chosen. Supporting older versions of NodeJS with RQLiteJS requires a library or polyfill.
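
A quick illustration of the difference (qs is the real npm package; the option values here are just examples):

import qs from 'qs'

// Query-string encoding, suitable for a URL:
qs.stringify({ level: 'strong', pretty: true })   // => 'level=strong&pretty=true'

// Nested objects get bracketed keys, which qs can also parse back out:
qs.stringify({ filter: { name: 'foo' } }, { encode: false })   // => 'filter[name]=foo'

// JSON.stringify produces a JSON document, not a query string:
JSON.stringify({ level: 'strong', pretty: true })   // => '{"level":"strong","pretty":true}'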

  2. It is unclear to me what Axios provides over the much cleaner node-fetch module. Is this a legacy issue or is there something specific about Axios that you require?

Similar to the query string answer above, but more related to HTTP connection handling: there are many use cases that would need to be manually reimplemented if native NodeJS HTTP requests were used, so the Axios library was chosen instead. The previous version of RQLiteJS used another "popular" HTTP library, but support for it ended. Axios was chosen for its larger community and contributor base, to minimize the risk of the library becoming unsupported.
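
To illustrate the two Axios behaviors the retry code leans on, a minimal sketch (the host URL is a placeholder):

import axios from 'axios'

try {
  // Axios parses JSON bodies automatically and rejects the promise on
  // non-2xx statuses, which is what makes e.response usable below.
  const res = await axios.get('http://localhost:4001/status')
  console.log(res.data)                            // already-parsed JSON
} catch (e) {
  if (e.response) console.log(e.response.status)   // HTTP error from the server
  else            console.log(e.code)              // network error, e.g. ECONNREFUSED
}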

  3. There are many, many parameters to fetching that seem to be unnecessary or rarely used. Is this just a completeness issue or is there something specific about rqlite that requires such comprehensiveness?

Could you provide a link to a line in the code for clarity? I will need more context. Since rqlite provides many options and RQLiteJS is a client library, it should expose as many of those options to the application as possible; it is up to the application to use them or not depending on its requirements. The more of the implementation that is present in the RQLiteJS library, the easier time IDEs have auto-completing class instances in application code.

  4. Nearly every variable in the HttpRequest module is accessed indirectly, even within the module, which makes no sense to me. Within a class, even in Java or C#, accessing variables directly is preferred over the performance hit of turning access into function calls.

The original version of RQLiteJS used functional programming, but the need to maintain state arose after rqlite redirect handling was found to be missing from RQLiteJS. To support state management more easily and cleanly, the functional code was ported to a class-based implementation. Many of the functions were already written and tested, so the focus was on adding redirect handling and, because it ended up being so close to complete, round robin reads. There is a lot of performance optimization in the JavaScript V8 engine, such as code inlining, which means there is often little to no overhead in a function/method call vs direct variable access.

If you take another language such as PHP (I can't recall Java's implementation, and I haven't worked in C# in a long time) I could see where direct variable access makes sense, since functions pass by value rather than by reference by default. That would lead to higher memory usage plus the time it takes to copy the variable. Between the V8 optimizations and the pass-by-reference nature of JavaScript objects, preferring function calls over direct variable access hasn't been much of an issue in my experience. Much of my time is spent using functional programming in JavaScript (both NodeJS and the browser), where there are use cases with orders of magnitude more function calls than RQLiteJS makes, and JavaScript performance has never been an issue.

It might be worth noting that since rqlite is SQL under the hood, if the application is making thousands or tens of thousands of individual API requests per transaction, it might be better to refactor the rqlite API requests into fewer, more complex SQL statements. After correct indexing, making fewer calls to the SQL planner often yields the biggest performance gains.
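
As a sketch of the idea (assuming an execute() that accepts an array of statements, as the code later in this thread does; the users table is hypothetical, and real code should escape or parameterize the values):

// Ten round trips means ten passes through the SQL planner:
for (const name of names) {
  await client.execute(`INSERT INTO users(name) VALUES("${name}")`)
}

// One round trip with a single multi-statement request:
await client.execute(names.map(n => `INSERT INTO users(name) VALUES("${n}")`))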

velociwabbit commented 3 years ago

Thanks for the detailed response. I kind of figured that somewhere in the mix was a functional language.

It has been my experience that JavaScript classes are somewhere between a DeLorean and the first Tesla... (cool to look at, but when you really examine the value add... better in the movies :) )

All those darned millennial CS grads temporarily hijacked the language.

I am rewriting the code just so I understand it better... not that it will be better, but I always rewrite key repo code before I commit to using the repo (except react-dom, which is a hairball). My version is targeted at about 150 LOC, breaking every formalization rule (no abstracting things like 'Get', for example, and heavy use of {...{...obj},...{}} etc.).

The principle I try to follow is to touch variables once and only once if possible; inlining is OK as long as the page is short.

No doubt you will be horrified at the result, but hopefully it will be fast and easy to understand for us mere mortals. (Inheritance makes everything so mysterious to me when I am trying to figure out what is going on.)

What is probably worth doing is a TypeScript version... if I don't fail at this endeavor (which is a big if...) I will convert it to TypeScript for those who are into that sort of thing.

velociwabbit commented 3 years ago

A couple of further questions. I fear I am missing a bit of the subtlety of these classes, so my apologies in advance if I make ignorant statements...

1. Both DataResult and DataResults appear to be wrappers around the SQLite results object. Most of the accessors (toString etc.) can be performed on the results object itself, in the same way they are performed on SQLite results. Is there something critical about these two objects that makes them important to the API, other than OO access concepts? Most node users are used to manipulating JSON results themselves.

2. The APIClient appears to make two calls, "post" and "get", which in the base itself make one call to fetch with slightly different parameters... Then when I look at DataAPIClient, StatusAPIClient, and BackupAPIClient, they are all ultimately translatable to that same fetch command with very little difference between the three.

In the example code that is provided, I do not see any critical examples where this differentiation provides significant functionality that is not easily translated into native fetch calls by modifying a few parameters. Is there something critical I am missing about this abstraction, or could all six of these calls (backup, load, status, statusAllHosts, query, execute) be part of the base HTTP client?

The reason I ask is that by removing these subclasses I can remove all the one- or two-line getters and setters and am left with a very simple HttpRequest module that has a complex fetch statement, where the other getters, setters, parameter tests, etc. can be directly manipulated by the previous six calls.

The bulk of the code for the HttpRequest class appears to be parameter management, which could be left to the discretion of the API user instead of being error-checked so thoroughly (and therefore limited differently from normal node fetch calls). This is not to say that all such parameter checks are not useful, but some might be delegated to the node developer who regularly uses fetch for many things and is therefore used to solving these problems.

Finally, within the fetch statement itself, it is hard for me to figure out what is native to node vs specific to round-robin querying. For example, there are these types of parameter tests (I have removed all the sets, maps, gets, etc. so that only the parameters are listed here):

 const retryable = ['DELETE', 'GET', 'HEAD', 'OPTIONS', 'PATCH', 'POST', 'PUT'].indexOf(httpMethod) >= 0
              && ([408, 413, 429, 500, 502, 503, 504, 521, 522, 524].indexOf(e.response.status || -1) >= 0
                  || ['EADDRINUSE', 'EAI_AGAIN', 'ECONNREFUSED', 'ECONNRESET',
                      'ENETUNREACH', 'ENOTFOUND', 'EPIPE', 'ETIMEDOUT'].indexOf(e.code || -1) >= 0)

It is unclear to me why these are critical for the user to manipulate for 99% of the use cases.

If I can remove the DataResults and DataResult classes and consolidate the three API classes into one HttpRequest class where the parameter options are made clear and documented, I can deliver a version of the API that is about 100 lines of code (not including parameter expansion to one parameter per line).

In the Koa universe, there are many examples of such modules where if it is not exactly what you want, copying and modifying the single index.js file that is the repo is fast and easy.

No doubt it violates every OO coding convention but then again ... Javascript :)

velociwabbit commented 3 years ago

With a bit more investigation I think that I understand the fetch process and wanted to confirm it.

Before a fetch:

1. User defines the URLs of the rqlite locations (with the leader being the first in the list)
2. Set up other fixed parameters in the constructor

For each fetch:

1. Figure out the right URL
2. Set up things like authorization etc.
3. Pass the parameters, headers, body, etc. to Axios

On success, send back either the response.data or a response object in the form of a DataResults class.

On failure:

1. Determine if the failure is recoverable (number, type, number of tries, number of redirects)
2. If not, throw a meaningful error and return
3. If a redirect, find the redirect host from e.response.headers.location and return by calling fetch again, incrementing the counters
4. Otherwise, increment the retry count and return by calling fetch again

For the subclasses: query indirectly calls fetch with a URL of '/db/query' and the correct HTTP method and query options; execute indirectly calls fetch with a URL of '/db/execute', the POST method, and query options for an array; status and backup are both GETs with their own URLs; load is a POST; statusAllHosts is the odd man out.

Am I missing anything related to rqlite? (I know there are many HTTP params, but I am going to use node-fetch with params similar to fetch in the browser, so these will change.)

justinmorant commented 3 years ago

@velociwabbit Overall I think you are understanding the design of RQLiteJS, and it appears you have answered most of your own questions. Do you have any lingering questions? If so, could you post them broken up into bulleted questions so they can be answered individually? I will try to answer the questions I am seeing below.

In general RQLiteJS favors code that facilitates reasonable automatic code completion in an IDE, to help guide engineers in more quickly working with rqlite responses and errors, over producing the smallest library possible. This also includes separation in the code (as you called out with the different API implementations) so that new rqlite APIs can be added should they be created in the future, or each API can change without the need to rewrite large sections of the library.

One question I think I am seeing is: why are there additional HTTP error codes and messages being handled for retry beyond the 301 or 302 redirects? Many HTTP responses are retryable, as they would never have reached the origin server and are most likely the result of networking errors (not anything related to rqlite directly). If you are familiar with running applications in the cloud, networking often experiences blips during the day, and many requests would fail unnecessarily if they weren't retried. These blips could be something as simple as a DNS update, a load balancer going offline, or an AZ temporarily experiencing too much traffic.

There also seems to be a question about why backup is handled separately: it responds with a content type of application/octet-stream instead of application/json. A backup is often a much larger response body than typical application/json, so I can see why rqlite uses a stream instead. This aligns well with a JS stream vs a promise that contains a JS object. The backup interface, from a response and code perspective, is not the same as the query or execute interface.
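
As a rough sketch of why a stream fits (node-fetch here; the host and filename are placeholders, /db/backup is the rqlite backup endpoint):

import fetch from 'node-fetch'
import fs from 'fs'

const res = await fetch('http://localhost:4001/db/backup')
// res.body is a readable stream of the SQLite file, not parsed JSON,
// so it can be piped straight to disk without buffering it in memory.
res.body.pipe(fs.createWriteStream('backup.sqlite'))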

velociwabbit commented 3 years ago

I am thinking that admin functions like backup, dump, status, and statusall can be handled the way they are handled now for the time being. From a code maintenance and ongoing development perspective, this set of functions is mostly performed manually, or at least is heavy on admin action when performed.

The API you created is certainly much more robust than anything I would have even been able to attempt. It certainly makes a great deal of sense in a compiled environment. And I am personally comfortable that I understand how it works enough to use the API as it is right now.

That being said, the core functionality in the fetch command is relatively succinct. It is the parameterization, encapsulation, and error handling that make up most of the code.

I think I can reduce the error handling down to one functional method like this (but with a nicer error message):

const noRetry = (hCount=1, xRedirect=0, xTry=0, status=-1, code=-1) =>
    xRedirect < 10 &&             // this appears to be hard coded in your code as well
    xTry < hCount * 3 &&          // host count
    ([301, 302, 408, 413, 429, 500, 502, 503, 504, 521, 522, 524].indexOf(status) >= 0 ||
     ['EADDRINUSE', 'EAI_AGAIN', 'ECONNREFUSED', 'ECONNRESET',
      'ENETUNREACH', 'ENOTFOUND', 'EPIPE', 'ETIMEDOUT'].indexOf(code) >= 0)
    ? false
    : { err: 'Redirects = ' + xRedirect + ' Attempts = ' + xTry + ' status = ' + status + ' code = ' + code }

The parameters appear to be a function of Axios as the interface. Doing them with node-fetch is consistent with how browsers handle fetch. The key parameter that adds complexity is keep-alive (which could be a boolean option instead of requiring a whole object).
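
For reference, node-fetch accepts a standard Node agent, so keep-alive really can be reduced to a single flag (a sketch, not library code; the host is a placeholder):

import http from 'http'
import fetch from 'node-fetch'

const agent = new http.Agent({ keepAlive: true })   // reuse TCP connections
const res = await fetch('http://localhost:4001/status', { agent })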

Authorization is a bit trickier. AuthZ is the latest fad for resource authorization, and it cannot be supported with the current interface. So providing a really simple fetch capability that the user can extend will afford them the ability to implement it on their own.

This leaves URL selection/creation, which assumes hostIndex and leaderHostIndex are numbers:

const URL = (options) => options.uri
    ? options.uri
    : this.hosts[+this.hostIndex > 0 ? +this.hostIndex % this.hosts.length
                                     : +this.leaderHostIndex % this.hosts.length]

and then finding the correct redirect host

const newLeaderIndex = (hostToFind = parseUrl(e?.response?.headers?.location)) => {
    const newLeader = this.hosts.map(v => parseUrl(v))
                          .findIndex((v) => ['hostname', 'protocol', 'port', 'path']
                              .every((field) => hostToFind[field] === v[field]))
    return newLeader < 0 ? 0 : newLeader >= this.hosts.length ? this.hosts.length - 1 : newLeader
}

and then bumping the redirect or attempt counts as they are passed back through the recursive retry loop.

All of this can be performed within a try-catch block that highlights the most probable causes of the errors, with some text telling the user to check that all the values are numbers and that the location header contains a correctly formed URL, or something like that.

Now that I have convinced myself it is doable I get to put my money where my mouth is :)

velociwabbit commented 3 years ago

This violates every linting rule imaginable, but it has the one redeeming quality that you can see the entire set of code on one page in Visual Studio on a 1080p monitor.

I tried to keep most of the line lengths < 132... (for me that is a miracle, as I have been working on 4K screens for some time now).

node-fetch has many of the guard rails that your code built in, so I only implemented the minimum.

I am going to load up 3 servers tonight and see how it does. (He said trepidatiously!)


import fetch     from 'node-fetch'
import { parse } from 'url'

const  sleep = (ms ) => ms < 2 ? null : new Promise(resolve => setTimeout(resolve, ms)) 

const stats = [301,302,408, 413, 429, 500, 502, 503, 504, 521, 522, 524                                                    ].reduce((prev,v)=>({...prev,[v]: true}),{})
const codes = ['EADDRINUSE', 'EAI_AGAIN', 'ECONNREFUSED', 'ECONNRESET', 'ENETUNREACH', 'ENOTFOUND', 'EPIPE', 'ETIMEDOUT'   ].reduce((prev,v)=>({...prev,[v]: true}),{})

const  createQuery = (options = {}) => [ 'level','pretty' ,'timings' ,'atomic' , 'transaction'  ]
            .reduce((acc, entry) => ({...acc, ...( options[entry]?.[1] ? { [options[entry][0]] :  options[entry][1] } : {}) }), {}) 

export class HttpRequest {
  leaderIndex = 0
  activeIndex = 0
  hosts    = []
  constructor (hosts ) {
         if ( !hosts || !hosts.length )   throw new Error('At least one host must be provided')
         this.hosts  =   [hosts].flat().join(',').split(',').filter(v=> v).map( v => ('' + v).trim().replace(/\/$/, '')  )
    }
    noRetry = (xRedirect=0, xTry, status,code)=>  xRedirect < 10 && xTry < this.hosts.length * 3  && ( stats[status] || codes[ code])
        ?  { err:  false , redirect : status === 301 || status === 302  }
        :  { err: 'Redirects = ' + xRedirect + ' Attempts = ' + xTry + ' status = ' + status +  ' code = ' + code } 

    newLeader = (location ) =>{
        const hostToFind =   parse(location)
        const newleader  =   this.hosts.map(v=> parse(v) )
                                 .findIndex((v) => ['hostname', 'protocol', 'port', 'path']
                                 .every((field) => hostToFind[field] === v[field]) )
       this.leaderIndex =   newleader <  0 ? 0 :  newleader >=  this.hosts.length ?  this.hosts.length - 1 : newleader
       return  this.leaderIndex 
    }  
    goFish  = async (uri, GETorPOST, params, body, hostIx, xRedirect, xTry) => { 
        try {
          // node-fetch has no axios-style params/responseType options, so the
          // query parameters go on the URL and the body is serialized by hand
          const search = params && Object.keys(params).length ? '?' + new URLSearchParams(params) : ''
          return  await fetch(  this.hosts[+hostIx % this.hosts.length] + uri + search  ,
                                {  method: GETorPOST 
                                ,  headers: { 'Accept': 'application/json;charset=utf-8' }
                                ,  ...(body ? { body: typeof body === 'string' ? body : JSON.stringify(body) } : {})
                                }).then(res => res.json()) 
        } catch (e) {
            let res     =   this.noRetry(xRedirect,xTry, e.status, e.code)
            if (res.err)    throw new Error( res.err)  

            await  sleep( res.redirect ? 0 : (2 ** xTry) * 100 ) 

            return this.goFish( uri , GETorPOST , params, body 
                        ,  ! res.redirect  ? 1 +  hostIx     : this.newLeader(e?.response?.headers?.location ) 
                        ,    res.redirect  ? 1 +  xRedirect  : xRedirect
                        ,  ! res.redirect  ? 1 +  xTry       :  xTry   )
        }                                                                                                                          
    }
    query   = async (sql, options = {},url= '/db/query') => Array.isArray(sql)
                ? this.goFish(  url, 'POST' ,      createQuery(options)            , sql   ,  this.activeIndex++, 0,0) 
                : this.goFish(  url, 'GET'  , { ...createQuery(options) , q:  sql }, false ,  this.activeIndex++, 0,0) 

    execute  = async (sql   , options = {}) =>  this.query(Array.isArray(sql) ? sql : [sql], options , '/db/execute' )
    status   = async (sql='', options = {}) =>  this.query( sql                            , options , '/status'     ) 
 }
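
A usage sketch against the class above (the host URL is a placeholder):

const db = new HttpRequest(['http://localhost:4001'])

// A single statement goes out as a GET to /db/query:
const rows = await db.query('SELECT * FROM foo')

// An array of statements goes out as a POST:
await db.query(['SELECT * FROM foo', 'SELECT * FROM bar'])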
justinmorant commented 3 years ago

@velociwabbit there are unit and integration test suites (the integration tests are based on rqlite Docker images). You should be able to swap the current library code with what you are writing if you want to see if it works.

velociwabbit commented 3 years ago

Cool, I will let you know how it goes.

Might be a few days...

velociwabbit commented 3 years ago

My code is working but I am a bit confused about one item

The way that query is set up, I can detect whether the SQL statement is an array or a single statement. If an array, I use POST; otherwise I use GET.

So the question is: What does "db/execute" do that is not implicitly performed in an array-based query?

Also, I really like the CLI for Windows. Very easy to use for loading data and other queries. It seems to me that "status" and "status all" are DevOps-style commands that would be well served by the CLI.

I am not a distributed database expert so please excuse my ignorance. Is there any reason why someone would query the status of the database in node?

velociwabbit commented 3 years ago

Think of this as the software equivalent of a burner phone... no fancy features but it works.

If you want to try it yourself (because I have no idea how many critical features I have removed) the code is below.

Query only, no 'atomic', just 'transaction' and 'level'. 'Pretty' and 'timings' are on by default and not an option (just to make the line shorter, because I am compulsive).

Also, I convert all responses to rows of record objects like MySQL does.

POST is automatic through query, so there is no need for "db/execute".

Assumes it is always on the correct leader but will resend if told to by error redirection.

Assumes all DevOps is done through the CLI.

There are four or five lines that exceed 132 chars... but I am on a big screen so I don't care ; )

45 loc (56 with spacing).


import fetch     from 'node-fetch'
import { parse } from 'url'

export class HttpRequest {
    index   = 0
    hosts   = []
    xhost   = ['hostname', 'protocol', 'port', 'path']

    toObj   = (ary) => ary.reduce((prev,v)=>({...prev,[v]: true}),{})

    stats   = this.toObj([301, 302, 408, 413, 429, 500, 502, 503, 504, 521, 522, 524 ])
    codes   = this.toObj(['EADDRINUSE', 'EAI_AGAIN', 'ECONNREFUSED', 'ECONNRESET', 'ENETUNREACH', 'ENOTFOUND', 'EPIPE', 'ETIMEDOUT']) 

    sleep   =  ms  => ms > 0 ?  new Promise(resolve => setTimeout(resolve, ms)) : undefined

    constructor (h ){   
        this.hosts = ! h ? [] : [h].flat().join(',').split(',').filter(v=> v).map( v => ('' + v).trim().replace(/\/$/, ''))
        if (!this.hosts.length) throw new Error('At least one host must be provided')
    }
    parms   = (o={}) => ['level', 'transaction'].filter(v => o[v]?.[1]).reduce((prev, v) =>({...prev, [o[v][0]]: o[v][1]}),{}) 

    GSet    = (p) => Object.keys(p).reduce((str,key)=> str + '&'+ key + '=' + p[key], '?pretty&timings')   // node-fetch has no params option, so serialize for GET and POST alike

    indexer = (hostToFind ) =>{ 
            const  lead  = this.hosts.map(v=> parse(v)).findIndex((v) => this.xhost.every((f) => hostToFind[f] === v[f]))
            return lead <  0 ? 0 :  lead  
    }  
    retry = (xRedir, xTry, s,c, cnt)=> ({err: xRedir >= 10 || xTry >= cnt || (!this.stats[s] && ! this.codes[c]), redirect: +s === 301 || +s === 302})  

    goFish  = async (GorP, params, body, hostIx, xRedir=0, xTry=0 ) => { let res  ;
            try {
                res =    await fetch( this.hosts[+hostIx % this.hosts.length] + '/db/query' + this.GSet(params)
                            , { method: GorP, headers:{'Accept': 'application/json;charset=utf-8'}
                            , ...(body ? { body :  typeof body === 'string' ? body :  JSON.stringify(body) } : {})})
                res =    await  res?.json() 
                res =    res.results ?  res.results[0] : false
                return    ! res?.values ? [] :  res?.values?.map(record => res?.columns?.reduce((rec,column,ix)=>({...rec,[column]: record[ix]}),{})) 

            }catch(e){  let res, rfish  
                res     =    this.retry(xRedir,xTry, e.status, e.code, this.hosts.length * 3)
                if (res.err) throw new Error( 'Error e = '+ e + '\nRedirects= ' + xRedir + '\nAttempts= ' + xTry + '\nstatus= ' + e.status + '\ncode= ' + e.code)  

                await this.sleep( res.redirect ? 0 : (2 ** xTry) * 100 ) 

                rfish =  await this.goFish(GorP , params, body 
                           ,  ! res.redirect  ? 1 +  hostIx  : this.indexer(parse(e?.response?.headers?.location) ) 
                           ,    res.redirect  ? 1 +  xRedir  : xRedir
                           ,  ! res.redirect  ? 1 +  xTry    : xTry )
                return rfish
        }                                                                                                                          
    }
    query   = async (sql, opts = { } ) => Array.isArray(sql)
                ? await this.goFish('POST',      this.parms(opts)                              , sql  ,  this.index++ ) 
                : await this.goFish('GET' , { ...this.parms(opts), q: encodeURIComponent (sql)}, false,  this.index++ ) 
 }
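
To make the row conversion concrete, a usage sketch (hypothetical table and data):

const db = new HttpRequest('http://localhost:4001')
const rows = await db.query('SELECT id, name FROM users')
// rqlite responds with { columns: [...], values: [...] }; the map in goFish
// reshapes that into MySQL-style row objects, e.g.
// [ { id: 1, name: 'fiona' }, { id: 2, name: 'declan' } ]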
justinmorant commented 3 years ago

My code is working but I am a bit confused about one item

The way that query is set up, I can detect whether the SQL statement is an array or a single statement. If an array, I use POST; otherwise I use GET.

So the question is: What does "db/execute" do that is not implicitly performed in an array-based query?

@velociwabbit the rqlite docs are located at the link which follows; I did not write rqlite, I only implemented the RQLiteJS client. https://github.com/rqlite/rqlite/blob/master/DOC/DATA_API.md
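
In short, per those docs, /db/execute is for statements that change the database (INSERT, UPDATE, DELETE, DDL) and /db/query is for reads. A minimal sketch of the two endpoints (host is a placeholder):

import fetch from 'node-fetch'

// Writes go to /db/execute as a JSON array of statements:
await fetch('http://localhost:4001/db/execute', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(['INSERT INTO foo(name) VALUES("fiona")'])
})

// Reads go to /db/query:
await fetch('http://localhost:4001/db/query?q=' + encodeURIComponent('SELECT * FROM foo'))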

Also, I really like the CLI for Windows. Very easy to use for loading data and other queries. It seems to me that "status" and "status all" are DevOps-style commands that would be well served by the CLI.

I am not a distributed database expert so please excuse my ignorance. Is there any reason why someone would query the status of the database in node?

@velociwabbit the RQLiteJS client has the goal of implementing the entire rqlite API in JavaScript, so the status API was included. Just thinking of an example: if someone wanted to build a JavaScript-based GUI for rqlite in the browser, the status endpoint might be useful information to display. Since RQLiteJS uses Axios, there is no requirement that this client run in NodeJS; a developer could decide to use it in the browser instead.

justinmorant commented 3 years ago

Think of this as the software equivalent of a burner phone... no fancy features but it works.

[... rest of the comment and code quoted in full above ...]

@velociwabbit thanks for creating a minimal implementation of a JavaScript-based rqlite client. What is your goal now that this is complete? Would you like to create another client to be listed as available for rqlite? Are you proposing that this replace the current RQLiteJS library? Or was this just a learning exercise, as you mentioned previously?

velociwabbit commented 3 years ago

Not sure. I did it just to understand exactly what was required to use it for my own purposes. If anyone thinks it would be useful, I will gladly try to dress it up a bit and see how badly it fails your testing... (which is no doubt very thorough indeed!)

This is NOT by any stretch a full implementation of what you did. It is, if anything, a useful learning tool for others who are code-phobic like I am.

I am by nature and scars a minimalist coder. I used to be in a very small minority, but it seems that the more npm modules people use, the more they realize that for the most part code (even really good error management code) is the enemy of reliability.

The reason my above implementation is useful to me is that it is "plug-compatible" with my use of SQLite and MySQL in my app.

For the most part, I try to stay away from anything but the narrowest specification for any implementation of databases.

I used to use triggers and stored procedures a great deal... then I had to port from SQL Server to MySQL and then to Oracle and then I threw my laptop out the window :).

I have even modified the above to eliminate transactions and levels, because I realized that if I needed transactions I really needed transactions, and that means I need someone to blame when things fail... This is IMHO what a full-blown and fully rented relational database is all about. (Insurance and fingerpointing in disguise.)

As a sanity check, I looked at the sqlite3.js implementation to see what they did. It is about the same level of complexity (albeit with better formatting and more thoughtful error messages).

I plan on using rqlite in production in a few weeks with the above code as the interface (all the admin will be done with the .exe CLI).

This touches on an interesting philosophical discussion about software, and especially middleware. On one end of the spectrum are minimalist coders like me who count repos and code layers before trusting a solution; on the other is the perspective that adding repos makes life easier.

There is probably room for both, but what I end up doing most of the time is start out using 3rd party software and then over time whittle it down to what I really need and what I can support when something breaks.

Horses for courses.

I am happy to do whatever you want (or nothing with this code).