// Private system DBs could be useful - they could best be used by subclasses of this.
// Could provide auth.
// Could keep log data on what maintenance and other operations, such as syncing, have taken place.
// Could keep a table of the sync processes that are going on at present. This table could then be queried to give sync updates.
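// A minimal sketch of what an in-progress sync record could hold (field names here are hypothetical):
/*
const sync_op = {
    sync_id: 1,                       // from a local incrementor
    remote: 'db2.example.com:8080',   // the server being synced from
    table_id: 14,
    started: Date.now(),
    records_copied: 25000,
    records_estimated: 180000,        // allows progress estimates
    status: 'running'                 // 'queued' | 'running' | 'complete' | 'failed'
};
*/
// A client could then query that table to render sync updates.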
// Authentication is the next core feature of the server.
// Could make it so that there is just an admin user for the moment with full permissions.
// For some DB things, could also open it up so that any user can read, with a rate limiter and/or DoS protector.
// Authentication would enable this DB to run a CMS website.
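// A minimal sketch of the interim policy above - one admin user with full permissions, reads open to anyone behind a token-bucket rate limiter (names and limits here are hypothetical):
/*
const buckets = new Map();
function allow_read(client_id, max_per_sec = 50) {
    const now = Date.now();
    const b = buckets.get(client_id) || { tokens: max_per_sec, last: now };
    // refill the bucket in proportion to the time elapsed, capped at the max
    b.tokens = Math.min(max_per_sec, b.tokens + ((now - b.last) / 1000) * max_per_sec);
    b.last = now;
    buckets.set(client_id, b);
    if (b.tokens >= 1) { b.tokens -= 1; return true; }
    return false;                       // over the limit - drop or delay the request
}
function authorised(user, op) {
    if (user === 'admin') return true;  // full permissions for the admin user
    if (op === 'read') return true;     // any user can read (rate limited above)
    return false;                       // everything else requires admin
}
*/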
// For the moment, need to get this deployed onto remote servers.
// Could also work on error logging in case of failure.
// Possibly logging the errors to the DB itself.
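// A minimal sketch of logging errors to the DB itself, with a console fallback in case the DB is the thing that failed (the key layout is hypothetical; db.put follows the usual levelup style):
/*
function log_error(db, err) {
    const key = 'log,error,' + Date.now();
    const value = JSON.stringify({ message: err.message, stack: err.stack });
    db.put(key, value, put_err => {
        if (put_err) console.error('could not log error to DB:', err);
    });
}
*/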
// Then a multi-client would be useful, to monitor the status of these various servers.
// Get servers better at tracking the number of records put/got per second again.
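// A minimal sketch of a rolling records-per-second counter (names are hypothetical):
/*
let put_count = 0, get_count = 0;
const count_put = n => put_count += (n || 1);   // call from the put handlers
const count_get = n => get_count += (n || 1);   // call from the get handlers
setInterval(() => {
    console.log('records/sec  put:', put_count, ' got:', get_count);
    put_count = 0;
    get_count = 0;
}, 1000);
*/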
// Need to separate out different collectors.
// Bittrex -> DB
// Others -> DB
// Crypto-Data-Collector seems OK for one machine.
// Maybe for coordinating a network too.
// Try collecting data for about 5 or so exchanges soon.
// Need it so that the collectors can be coded separately, and then started up to collect data for the given DB.
// A&A is probably the highest priority though.
// The database server will start by loading its Model, or creating a new one if there are no records in the database.
// In general it will offload code and complexity from the client side.
// Will make use of observables, and flexible functions with optional callbacks.
// Will also have decoding options in a variety of places. Sometimes the data that gets processed on the server will need to be decoded.
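// A minimal sketch of the flexible-signature pattern: call with a callback for node-style usage, or without one to get an observable-like object back. The tiny observable here is illustrative, not any particular library's API.
/*
const do_query = (range, cb) => cb(null, []);    // stand-in for the real query
function observable() {
    const handlers = [];
    return {
        on: fn => handlers.push(fn),
        raise: (evt, data) => handlers.forEach(fn => fn(evt, data))
    };
}
function get_records(range, callback) {
    if (callback) return do_query(range, callback);
    const obs = observable();
    // raise on the next tick so the caller can attach handlers first
    process.nextTick(() => do_query(range, (err, res) => {
        if (err) obs.raise('error', err);
        else { obs.raise('next', res); obs.raise('complete'); }
    }));
    return obs;
}
*/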
// Got plenty more to do on the server level to ensure smooth data acquisition and sharing.
// Want to have server-side re-encoding of records, where the server takes records in one format and, if necessary, re-encodes them so that they go into the DB correctly.
// That could involve foreign -> primary key lookups for some records, eg live asset data.
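// A minimal sketch of that lookup: an incoming record refers to an asset by name (a foreign key), and the server swaps in the primary key id before the put (field names are hypothetical):
/*
function reencode_live_asset(record, asset_name_to_id) {
    const id = asset_name_to_id.get(record.asset_name);
    if (id === undefined) throw new Error('unknown asset: ' + record.asset_name);
    return { asset_id: id, time: record.time, price: record.price };
}
*/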
//
// Crypto data collector should ensure a few tables according to some definitions.
// Doing that on start-up would be an effective way of doing it, and adding new tables would then be incremental.
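// A minimal sketch of ensuring tables on start-up, creating only the missing ones (db.model.table_exists and db.ensure_table are hypothetical wrappers here):
/*
function ensure_tables(db, table_defs, callback) {
    const missing = table_defs.filter(def => !db.model.table_exists(def.name));
    let pending = missing.length, failed = false;
    if (pending === 0) return callback(null);
    missing.forEach(def => db.ensure_table(def, err => {
        if (failed) return;
        if (err) { failed = true; return callback(err); }
        if (--pending === 0) callback(null);      // all tables now present
    }));
}
*/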
//
// Could put keys through a 'manifold nexus' to find out where they have been sharded to.
// key, sharding def => shard index
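// A minimal sketch of that lookup, with a simple hash-mod scheme standing in for whatever the 'manifold nexus' would actually do:
/*
const crypto = require('crypto');
function shard_index(key, sharding_def) {
    const h = crypto.createHash('md5').update(key).digest();
    return h.readUInt32BE(0) % sharding_def.num_shards;
}
// e.g. shard_index('table,14,btc-usd', { num_shards: 8 }) => a value in 0..7
*/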
// ll_nextleveldb_server
// then have various modules:
//   core
//   maintain
//   check
//   fix
//   get (non-core get)
//     eg get_table_index_records_by_arr_table_ids
//   put (non-core put)
//   sync
// There could also be isomorphic mixins that can work on either the client or the server, processing data. They would need to go in a different module.
// Would be quite a large change to all of it.
// Want a client-side function (or on the server?) to get the last record in any table.
// This could be used at start-up to assign what the incrementor value should be.
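// A minimal sketch: read the last record of a table by iterating its key range in reverse with a limit of 1 (levelup-style createReadStream; string keys with a table prefix are assumed here):
/*
function get_last_record(db, table_prefix, callback) {
    let last = null;
    db.createReadStream({
        gte: table_prefix,               // start of the table's key space
        lt: table_prefix + '\xff',       // just past the end of it
        reverse: true,
        limit: 1
    })
    .on('data', rec => last = rec)
    .on('error', callback)
    .on('end', () => callback(null, last));   // incrementor = id in last key + 1
}
*/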
// A version of NextLevelDB_Server with safety checks upon start looks like it will be the next stage.
//   Core - would handle net IO and opening the DB on disk.
//   (Standard) - would have most of the functionality.
//   Safety - seems to deserve its own file. Not sure about using its own class. Safety checking on startup seems like distinctive functionality.
//   P2P
// A server could have a number of remote connections.
// Being able to initiate and use remote connections would be a useful server-side piece of functionality.
// Then make it available to the client.
// Want to be able, through a client, to get one server to copy table records from another server.
// That will be a useful way to start and test the sync.
// Then there will be other sync modes, where a server will automatically sync from another server.
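// A minimal sketch of that client-driven copy - one call from the client, and the target server connects to the source and pulls the records (copy_table_from is a hypothetical name):
/*
const client = new NextLevelDB_Client('ws://db1.example.com:8080');
client.copy_table_from('ws://db2.example.com:8080', 'trades', (err, res) => {
    if (err) return console.error('copy failed:', err);
    console.log('copied', res.count, 'records');   // a useful way to test sync
});
*/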
// May be good to use a single Amazon server for that, or another cloud provider.
// A table of completed syncing operations would help.
// Also, syncing operations in progress.
// Could contain data about estimates.
// May need to address changing table numbers / updating all the relevant records for that.
// Introduce more flexibility about core / system tables?
// DB migration of table IDs seems necessary in order to have increased core table space.
// As an earlier work-around, could have it as non-core.
// That makes sense because it's specific to that system.
// Core records will be distributed over the system. It's the structure of the DB.
// All core records are system records I think.
// Then there will be node-level non-distributed records.
// The prime example is the current idea of sync tracking records. Similar cases: server-level logs, server-level info about what data other clients hold, and data about other peers on the network.
// Useful to keep it in the DB so that it can get resumed on start-up.
// Want to make this without breaking the current system.
// Adding the syncing table itself will change the core.
// Be able to add a syncing table to the remote DBs as part of maintenance.
// But they don't need it.
// Rather than checking for identical models, could check for relevant values being the same for each table.
// Specific table IDs being the same.
// Could load up the servers so that they create the sync tables.
// Could work around the models being different by not doing such low-level syncing, or by doing further tests first.
// Could keep it out of the core.
// That way the core comparisons stay the same. That's the distributed core.
// The trouble is that it would be referenced within the core because it exists in the DB.
// Could check for those specific differences and OK them. We really don't want differences in the tables that are about to be synced on a low level.
// That seems like a decent way of doing it. Still ll_sync when there are some core differences, so long as the differences won't cause problems.
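// A minimal sketch of that looser check - compare only the values that matter for low-level sync of the chosen tables, rather than requiring byte-identical models (field names are hypothetical):
/*
function tables_sync_compatible(local_model, remote_model, table_names) {
    return table_names.every(name => {
        const a = local_model.tables[name], b = remote_model.tables[name];
        return !!a && !!b && a.id === b.id &&
            JSON.stringify(a.key_fields) === JSON.stringify(b.key_fields);
    });
}
*/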
// Though, an identical distributed core makes sense.
// That would mean the same tables on all machines. Would mean we could not use incrementors there, or not the core kind.
// Local_Incrementor?
// Separating out the distributed parts from non-distributed parts.
// Making another db could work.
// A new sub-db called 'local'. Keep things very separate.
// Possibly globally shared core tables would work OK.
// Incrementors would not be such an issue.
// Changing table IDs would be cool.
// Shifting all table IDs up by one. Would need to know which field is ever a table ID.
// That seems like it would be a decent way to have a new table added at a lower ID, or for syncing when the table ID changed.
// Go through every single ll record, including indexes and incrementors, and update (+1 or +n) every table id that is at or above a certain number.
// Think this would need to suspend db writes, and then inform all clients that its model has changed.
// Notification to clients of db model changes would be useful.
// Only send it when the changes have been completed.
// model_change_update_subscribing_clients();
// clients would need to specifically open the subscription to db model changes.
// then the issue of changing the DB records in correspondence with the model changes.
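// A pseudocode-level sketch of that migration: suspend writes, rewrite every ll record whose table id is at or above the threshold, then notify subscribers. decode_key, encode_key and the pause/resume hooks are hypothetical, and a real version would batch incrementally rather than in memory:
/*
function shift_table_ids(db, from_id, n, callback) {
    db.pause_writes();
    const batch = [];
    db.createReadStream()
        .on('data', ({ key, value }) => {
            const k = decode_key(key);
            if (k.table_id >= from_id) {
                batch.push({ type: 'del', key });            // remove the old key
                k.table_id += n;
                batch.push({ type: 'put', key: encode_key(k), value });
            }
        })
        .on('error', callback)
        .on('end', () => db.batch(batch, err => {
            db.resume_writes();
            if (!err) model_change_update_subscribing_clients();
            callback(err);
        }));
}
*/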
// A new version that has more table space for system tables would be useful.
// The separate db could handle records such as syncing records, and per-server security.
// Handling DB upgrades would be nice.
// All DBs would get this syncing table.
// Have another field within the table record saying if it is a dist-core table, if it is a node-core table.
// An unconnected sub-database would be useful for recording local logs. Won't be available through the normal API.
// The p2p version would have the local database. Would list which ranges have been downloaded.
// Local_System.
// That would be a part of the p2p server.
// Sync operations table
// Could log read frequencies to arrange caching - though I think Level handles that anyway.
// Could store blocks which are being put / have been put into the local DB.
// Storing row range blocks in a separate local DB would definitely be cool.
// A task queue, including completed tasks and task status would definitely be of use.
// Having it in a separate but accessible DB would be very useful.
// It would have its own OO interface, and it would not be synced with other DBs.
// Tasks:
//   Completed already - timestamp completed
//   Running - timestamp started
//   Yet to run / queued
// Definitely have the queued items going in sequence.
// Not so sure about different priorities for the queue. Priorities mean some tasks could jump ahead of others.
// The queue could be more about monitoring the tasks that are set. May want various different sequences, with blocks of tasks to have in the queue.
// Probably just stick with an order-of-addition queue, and keep track of it.
// That should be enough.
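// A minimal sketch of that order-of-addition queue with status tracking (an in-memory version; the real one would persist to the local DB):
/*
class Task_Queue {
    constructor() { this.tasks = []; this.running = null; }
    add(name, fn) {
        this.tasks.push({ name, fn, status: 'queued' });
        this.run_next();
    }
    run_next() {
        if (this.running) return;        // strictly one task at a time, FIFO
        const task = this.tasks.find(t => t.status === 'queued');
        if (!task) return;
        this.running = task;
        task.status = 'running';
        task.started = Date.now();
        task.fn(() => {                  // the task calls done() when finished
            task.status = 'complete';
            task.completed = Date.now();
            this.running = null;
            this.run_next();
        });
    }
}
*/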
// Tasks would cover the sync operations.
// If it has another sync operation to do, it could note the ranges of records synced in previous sync operations.
// Could do more work on the partial syncing without this, checking the latest key values.
// Definitely will do more syncing of tables.
// Will change the way Bittrex data is added to make it more general.
// Generalising the Bittrex case to other cases.
// Probably worth re-doing some code, specifically the tables.
// Maybe retire crypto-data-model, as we now use declarations that are loaded into the normal model.
// Returning hashes of data could be an output transformation / encoding.
// That way we could get hashes as output for any query, or one hash that covers all results given.
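// A minimal sketch of the single-hash-over-all-results case (per-record hashes would work the same way):
/*
const crypto = require('crypto');
function hash_results(records) {
    const h = crypto.createHash('sha256');
    records.forEach(r => h.update(JSON.stringify(r)));
    return h.digest('hex');              // one hash covering all results given
}
*/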
// Daily record blocks could be of a lot of use.
// Would log that it has downloaded all of the records for a given day.
// Client connection status, depending on how syncing is going?
// nextleveldb-sync-client
// would keep track of the sync status to some extent.
// Getting inter-table ranges would definitely be of use for syncing.
// core-server seems important
// worth moving isomorphic functions out of here; they could be executed on the client too.
// nextleveldb-isomorphic
// maybe nextleveldb-maintain
// but we may want the calls to the server to be a single function call, if possible
// so could do it the client-side way if there is no available server function.
// This would allow further functionality to first be developed and tested on the client-side, but it will use the same core functions.
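// A minimal sketch of that fallback: prefer the single server-side call, otherwise run the same logic client-side using only core functions (server_has, call and get_all_records are hypothetical names):
/*
function get_table_stats(client, table, callback) {
    if (client.server_has('get_table_stats')) {
        client.call('get_table_stats', [table], callback);   // one round trip
    } else {
        client.get_all_records(table, (err, records) => {    // core functions only
            if (err) return callback(err);
            callback(null, { count: records.length });
        });
    }
}
*/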
// Need to define the core client and server functions:
//   Core set
//   Server-callable functions
//   Server-side functions
// Core client functions will include functions that are made available from the server?
// Binance, Bitfinex, HitBTC.
// exchange_id, exchange_trade_id, currency_id, value, volume, was_buy
// market snapshot data
// trade data
// candlestick data
// obs_to_cb
// A map function may be of use.
// Could possibly do the mapping within? Or make another version with the mapped data?
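// A minimal sketch of obs_to_cb, matching the illustrative observable sketched earlier: collect 'next' values, then call back once on 'complete' or 'error'.
/*
function obs_to_cb(obs, callback) {
    const results = [];
    let done = false;
    obs.on((evt, data) => {
        if (done) return;
        if (evt === 'next') results.push(data);
        if (evt === 'error') { done = true; callback(data); }
        if (evt === 'complete') { done = true; callback(null, results); }
    });
}
*/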