At the moment, loader.py's recovery (the time taken to resume a partially completed upload) is quite slow, because it checks every feature individually against status.sqlite3 (as per issue #48). Instead, we should fetch the feature count for every already-uploaded file (either in one shot at the start of the upload, or file by file as the upload proceeds), and then come up with a fast way of counting the features in each of our shapefiles. If the two counts match, we can skip the file without converting it to JSON or otherwise processing it at all, saving a great deal of time.