metabench / nextleveldb-server


Removal of Dated Comments - Put them here, discuss them here #4

Open metabench opened 6 years ago

metabench commented 6 years ago

```js
// 05/03/2018 - Being expanded greatly to provide functionality that will make a variety of processes easier for the client.
// More advanced commands will be runnable on the server.
// The server will make more use of the Model in order to act with more understanding.

// 18/03/2018
// Commands have been greatly expanded.
// Finding that no field ids get persisted properly.
// Need it so that a field with a '+' is (an autoincrementing) xas2 integer.

// 26/03/2018
// Have done more work on binary commands.
// Have the foundation for a server connecting to another server to download all records in a table.

// 27/03/2018
// Getting to the stage where cumulative hashes of records could be useful.
// Or key comparison tasks?
// Could pause whichever stream gets ahead, and stream the output.

// Could check consistency with record range hashes.
// Could be useful for comparing table structure in the core.
// Also for comparing tables such as currencies and markets to check for consistency.

// Could turn to JSON and then just compare the strings in JS to start with.
// With the normalised records, need to check they are based on the same values.
```
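
The range-hash idea is straightforward to sketch. Assuming a levelup-style `db` (illustrative, not the actual nextleveldb API), each server hashes the same key range and only the digests get compared:

```js
const crypto = require('crypto');

// Fold every key/value pair in [gte, lte] into one digest. Two servers whose
// digests match (almost certainly) hold the same records for that range.
// A real version would delimit key and value to avoid boundary ambiguity.
function range_hash(db, gte, lte, callback) {
    const hash = crypto.createHash('sha256');
    db.createReadStream({ gte: gte, lte: lte })
        .on('data', item => {
            hash.update(item.key);
            hash.update(item.value);
        })
        .on('error', callback)
        .on('end', () => callback(null, hash.digest('hex')));
}

// Compute on both servers, then compare the hex strings —
// cheaper than streaming and comparing full JSON record dumps.
```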

```js
// 31/03/2018
// Noticed that incrementors had not been correctly updated in some cases. That means new rows could have been created with a PK value of 0, overwriting other records.
// May need some data recovery / checking if we are to use the data?
// Maybe it ruined the Bitcoin data, as that is at index 0.
// Could have startup checks on tables with autoincrementing keys, to see what the highest value is, and compare that with the incrementor field.
// If (on startup) the incrementor is less than the highest key value, it sets it to the highest key value + 1.
// Have clients listen to changes in the model (the DB's core), so the index can be incremented / updated on the client side when it changes on the server.
// Could reload the model, or process model updates by row.
// Keeping the incrementors synced seems like a bit of a challenge.
```
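
The startup check described in the 31/03 entry could look something like this. It assumes a levelup-style `db`; `table_key_range`, `incrementor_key`, `decode_pk` and the int codecs are hypothetical stand-ins for whatever nextleveldb actually uses:

```js
// For a table with an autoincrementing PK: find the highest existing key,
// and if the stored incrementor is not strictly greater, repair it to
// highest + 1 so new rows can never overwrite existing ones.
function repair_incrementor(db, table, callback) {
    const range = table_key_range(table); // hypothetical: { gte, lte } of the table's records
    let highest = -1;

    db.createReadStream({
        gte: range.gte,
        lte: range.lte,
        keys: true,
        values: false,
        reverse: true, // highest key arrives first
        limit: 1
    })
        .on('data', key => { highest = decode_pk(key); }) // hypothetical codec
        .on('error', callback)
        .on('end', () => {
            db.get(incrementor_key(table), (err, raw) => {
                if (err && !err.notFound) return callback(err);
                const current = err ? 0 : decode_int(raw);
                if (current <= highest) {
                    // Stale incrementor — exactly the 0-key overwrite bug above.
                    return db.put(incrementor_key(table), encode_int(highest + 1), callback);
                }
                callback(null, current); // already consistent
            });
        });
}
```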

```js
// 12/05/2018 - Code can be simplified through better use of observables.
// Fewer params (no decoding option).
// Could make lower-level core functionality.
```

metabench commented 6 years ago

No date, but removed from core-server:

```js
// Private system DBs could be useful - they could best be used by subclasses of this.
// Could provide auth.
// Could keep log data on what maintenance / some other operations such as syncing have taken place.
// Could keep a table of the sync processes that are going on at present. This table could then be queried to give sync updates.

// Authentication is the next core feature of the server.
// Could make it so that there is just an admin user for the moment with full permissions.
// For some DB things, could also open it up so that any user can read, with a rate limiter and / or DoS protector.

// Authentication would enable this DB to run a CMS website.
```
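
For the rate limiter / DoS protector mentioned above, a token bucket per client is one minimal shape (all names illustrative, not existing nextleveldb code):

```js
// Each client gets a bucket that refills over time; a request is allowed
// only while a token is available.
function make_rate_limiter(max_tokens, refill_per_second) {
    const buckets = new Map(); // client_id -> { tokens, last }

    return function allow(client_id) {
        const now = Date.now();
        let b = buckets.get(client_id);
        if (!b) {
            b = { tokens: max_tokens, last: now };
            buckets.set(client_id, b);
        }
        // Refill proportionally to elapsed time, capped at the bucket size.
        b.tokens = Math.min(max_tokens, b.tokens + ((now - b.last) / 1000) * refill_per_second);
        b.last = now;
        if (b.tokens >= 1) {
            b.tokens -= 1;
            return true;
        }
        return false; // over the limit — reject or queue
    };
}

// const allow = make_rate_limiter(100, 10); // burst of 100, 10 req/s sustained
// if (!allow(client_id)) { /* refuse the request */ }
```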

```js
// For the moment, need to get this deployed onto remote servers.
// Could also work on error logging in case of failure.
// Possibly logging the errors to the DB itself.

// Then a multi-client would be useful, to monitor the status of these various servers.
// Get servers better at tracking number of records put/got per second again.
```
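
Tracking records put/got per second can be as small as a counter that is snapshotted and reset every second. A sketch (names illustrative):

```js
// Count operations as they happen; once a second, snapshot, report and reset.
function make_op_counter(report) {
    const counts = { put: 0, got: 0 };
    const timer = setInterval(() => {
        report({ put: counts.put, got: counts.got }); // e.g. log, or write to the DB itself
        counts.put = 0;
        counts.got = 0;
    }, 1000);
    timer.unref(); // don't keep the process alive just for stats
    return {
        count_put: () => counts.put++,
        count_got: () => counts.got++
    };
}

// const stats = make_op_counter(r => console.log('ops/sec', r));
// stats.count_put(); // call from the put path
```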

```js
// Need to separate out different collectors.

// Bittrex -> DB
// Others -> DB

// Crypto-Data-Collector seems OK for one machine.
// Maybe for coordinating a network too.

// Try collecting data for about 5 or so exchanges soon.
// Need it so that the collectors can be coded separately, and then started up to collect data for the given DB.

// A&A is probably the highest priority though.
```

```js
// The database server will start by loading its Model, or creating a new one if there are no records in the database.

// In general, will unload code and complexity from the client-side.
// Will make use of observables, and flexible functions with optional callbacks.
// Will also have decoding options in a variety of places. Sometimes the data that gets processed on the server will need to be decoded.

// Got plenty more to do on the server level to ensure smooth data acquisition and sharing.
```
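
The "flexible functions with optional callbacks" idea above usually means: node-style callback when one is supplied, a Promise otherwise. A minimal sketch (`backend_get` is a stand-in for any server call):

```js
function do_get(key, callback) {
    if (typeof callback === 'function') {
        return backend_get(key, callback); // classic (err, value) style
    }
    // No callback supplied — hand back a Promise instead.
    return new Promise((resolve, reject) => {
        backend_get(key, (err, value) => err ? reject(err) : resolve(value));
    });
}

// do_get('k1', (err, v) => { /* callback style */ });
// const v = await do_get('k1'); // promise style
```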

```js
// Want to have server-side re-encoding of records, where it takes the records in one format and, if necessary, re-encodes them so that they go into the DB well.
// That could involve foreign -> primary key lookups for some records, e.g. live asset data.
```
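
That re-encoding step might look like this: swap a foreign field (say a market symbol) for its primary key before the record is written. The field names and lookup map are hypothetical:

```js
// Incoming live asset record uses a market symbol; storage wants the PK.
function re_encode_record(record, market_pk_by_symbol) {
    const market_id = market_pk_by_symbol.get(record.market);
    if (market_id === undefined) {
        throw new Error('Unknown market: ' + record.market);
    }
    // e.g. { market: 'BTC-USD', price, ts } -> [market_id, price, ts]
    return [market_id, record.price, record.ts];
}
```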

```js
// Crypto data collector should ensure a few tables according to some definitions.
// Doing that when it starts up would be an effective way of doing it, and adding new tables will be incremental.
```
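
Ensuring tables from definitions at startup stays incremental if each ensure is idempotent. A sketch (`ensure_table` is a hypothetical stand-in; the '+' marks the autoincrementing xas2 field per the 18/03 note):

```js
const table_definitions = [
    { name: 'currencies', fields: ['+id', 'code', 'name'] },
    { name: 'markets', fields: ['+id', 'base_currency_id', 'quote_currency_id'] }
];

// Run at collector startup: creates what's missing, no-ops on what exists,
// so adding a new definition later just creates the one new table.
async function ensure_tables(server, defs) {
    for (const def of defs) {
        await ensure_table(server, def); // hypothetical idempotent call
    }
}
```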

metabench commented 6 years ago

From server:

```js
// 03/06/2018 - Milestone reached!
// Reliably sets up and adds to a more flexible DB with bittrex data.
// Still could do with more use of batching put operations from an Active_Table.
// A debounce/buffer could help to make this really straightforward in the API, so it will happen automatically when processing a JS loop.
// The delay between those calls would be very small, and it can easily batch another operation.
// However, not when we await the result.

// Client, server and messages would need to be adapted for batches.
```
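
The debounce/buffer idea from the 03/06 entry, sketched with a levelup-style `db.batch`: puts issued in the same tick are buffered and flushed as one batch, which is also why awaiting each put (as noted above) defeats the batching:

```js
function make_batching_put(db) {
    let buffer = [];
    let scheduled = false;

    function flush() {
        const ops = buffer;
        buffer = [];
        scheduled = false;
        db.batch(ops.map(op => ({ type: 'put', key: op.key, value: op.value })), err => {
            ops.forEach(op => err ? op.reject(err) : op.resolve());
        });
    }

    return function put(key, value) {
        return new Promise((resolve, reject) => {
            buffer.push({ key, value, resolve, reject });
            if (!scheduled) {
                scheduled = true;
                setImmediate(flush); // fires after the current synchronous loop
            }
        });
    };
}

// const put = make_batching_put(db);
// for (const [k, v] of rows) put(k, v); // whole loop lands in one batch
// await put(k, v);                      // awaiting forces a flush per put
```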

```js
// Working out how to do sharding in a really simple way would help.
// Having a plan, and incorporating sharding piece by piece.
// For the moment, making use of a single db would be good.
// Sharding would definitely help, based on key.
// Client that can refer to different machines for the records.

// Hard to get adjacency in these sharded records.
// Assume they will be distributed about the place and would need to be rejoined.
// Possibly, as they come in, add them to an in-memory B+ tree to use as a buffer.
// Then we would keep track of which result stream is lowest, and what the lowest result from that stream is.
// We can send up to that value, and that way we know we are returning data in sorted order.
// The sorted-record-list would be useful here, maintaining a sorted list of all records. Then it's easy to return / send them in sorted order.
// More functionality in the Model would help sharding operations, as the operations get modelled before they are done.
```
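
The sorted-merge logic in that last paragraph — track which stream's head is lowest and only emit up to that value — is a k-way merge. A sketch with shards modelled as async iterators of `{ key, value }` (illustrative, not the actual client API):

```js
async function* merge_sorted(shard_iterators, compare) {
    // Prime one head record per shard.
    const live = [];
    for (const it of shard_iterators) {
        const cur = await it.next();
        if (!cur.done) live.push({ it, cur });
    }
    while (live.length > 0) {
        // The lowest head is safe to emit — every other stream is at or above it.
        let min = 0;
        for (let i = 1; i < live.length; i++) {
            if (compare(live[i].cur.value.key, live[min].cur.value.key) < 0) min = i;
        }
        yield live[min].cur.value;
        live[min].cur = await live[min].it.next();
        if (live[min].cur.done) live.splice(min, 1);
    }
}

// for await (const rec of merge_sorted(shards, (a, b) => a < b ? -1 : a > b ? 1 : 0)) {
//     // records arrive in global key order
// }
```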