... especially but not only if we need to increase the number of plans collected.
There are three levels to this:
1. Using more compact index representations of precinct GEOIDs and district IDs that map back to the user-friendly versions.
2. Storing each successive plan as a delta from the previous plan. I can currently "pack" an ensemble this way, but only as a batch process after the ensemble is stored on disk -- which is how it currently needs to be stored in order to be read back in.
3. Zipping the resulting file -- i.e., storing the unzipped file in a temp location and then zipping it.
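A minimal sketch of what the three levels could look like together, assuming each plan is a dict mapping precinct GEOIDs to district IDs (the function and field names here are hypothetical, not the current implementation):

```python
import gzip
import json


def pack(plans):
    """Re-index GEOIDs/district IDs as ints, then store each plan as a
    delta from the previous plan (level 1 + level 2)."""
    geoids = sorted({g for p in plans for g in p})
    districts = sorted({d for p in plans for d in p.values()})
    g_idx = {g: i for i, g in enumerate(geoids)}
    d_idx = {d: i for i, d in enumerate(districts)}

    deltas = []
    prev = {}
    for plan in plans:
        compact = {g_idx[g]: d_idx[d] for g, d in plan.items()}
        # Keep only the assignments that changed since the previous plan
        delta = {g: d for g, d in compact.items() if prev.get(g) != d}
        deltas.append(delta)
        prev = compact
    # The index tables travel with the deltas so the user-friendly
    # GEOIDs and district IDs can be recovered on read-back
    return {"geoids": geoids, "districts": districts, "deltas": deltas}


def write_packed(packed, path):
    """Write the packed ensemble as gzipped JSON (level 3)."""
    with gzip.open(path, "wt", encoding="utf-8") as f:
        json.dump(packed, f)
```

The first delta is the full first plan; every later delta only records assignments that changed, which is what makes this effective for ensembles produced by small-step chains.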
Then reverse the process when reading an ensemble from disk.
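The read-back side could be sketched like this, under the same assumptions as above (gzipped JSON holding index tables plus a list of deltas; names are hypothetical):

```python
import gzip
import json


def unpack(path):
    """Read a gzipped packed ensemble and replay the deltas to
    reconstruct the full, user-friendly plans."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        packed = json.load(f)

    geoids = packed["geoids"]
    districts = packed["districts"]

    plans = []
    current = {}
    for delta in packed["deltas"]:
        # JSON stringifies integer keys, so convert them back
        current.update({int(g): int(d) for g, d in delta.items()})
        # Translate compact indexes back to GEOIDs and district IDs
        plans.append({geoids[g]: districts[d] for g, d in current.items()})
    return plans
```

Replaying deltas is sequential, so reconstructing plan N requires applying deltas 1..N; that is a reasonable trade-off if ensembles are always read front to back.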
This would allow us to store much larger ensembles in GitHub without having to resort to LFS (which can be very tricky, in my experience) and would reduce transfer times (back from the cluster).