Right now, sync is done per chunk period: each chunk is processed in parallel (with as many workers as configured) across all of the measurements. If any single measurement fails, the whole chunk is marked as a bad chunk, even though every other measurement in it may have been synced/copied correctly.
Our databases usually contain one very large measurement alongside several smaller ones, so with chunk-based processing a problem in the big measurement affects all the other data in the chunk. Processing in parallel per measurement instead would likely copy the data faster and would also allow recovery per measurement rather than per chunk.
This change requires a big refactor.
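The proposed per-measurement model could be sketched roughly as below. This is only an illustration of the idea, not the tool's actual code: the measurement names, the `sync_measurement` function, and the simulated failure are all hypothetical. The point is that each measurement succeeds or fails independently, so only the failed ones need to be retried.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical set of measurements: one big one, several small ones.
MEASUREMENTS = {"cpu": 10, "mem": 5, "huge_metric": 1000}

def sync_measurement(name, size):
    """Stand-in for the real copy logic; simulates a failure in one measurement."""
    if name == "huge_metric":
        raise RuntimeError("timeout while copying")
    return name

def sync_all(measurements, workers=4):
    """Copy each measurement in its own task; track success/failure per measurement."""
    ok, failed = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(sync_measurement, n, s): n
                   for n, s in measurements.items()}
        for fut, name in futures.items():
            try:
                fut.result()
                ok.append(name)
            except Exception:
                # Only this measurement needs recovery; the others stay good.
                failed.append(name)
    return sorted(ok), sorted(failed)
```

With this structure a failure in `huge_metric` leaves `cpu` and `mem` marked as successfully synced, instead of invalidating the whole chunk they happened to share.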