I updated the README. Skimming the scripts makes it pretty easy to see what they do, but a few sentences in the README will help newer users (like me!).
I altered trycopy to suit my own needs:

- Added more logging. I want it to log something whenever it hits any error condition, even errors it can recover from.
- Each batch is written out to its own file. I have a case where I am recovering data from a large (1.6 TB) table with limited free disk space and non-technical challenges to getting more disk, so splitting files per batch helps me guard that space.
- Added a "start at page" option. This is also related to the space: if I run an extraction and have to stop it for some reason, or it fails, this option lets me restart at a specific point and avoid having to start over.
- gzip compression of the output files. The nature of my data compresses VERY well; I'm getting about 8x compression.
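Roughly, the combination of per-batch files, gzip compression, and the "start at page" option looks like this. This is a minimal sketch, not trycopy's actual interface: the function name, parameters, and the `batch_NNNNN.csv.gz` naming are all assumptions.

```python
import csv
import gzip

def export_batches(rows, batch_size=1000, start_page=0, prefix="batch"):
    """Write each page of rows to its own gzip-compressed CSV file.

    Sketch only: names, parameters, and file naming are assumptions.
    Pages below start_page are skipped, so an interrupted extraction
    can be resumed without redoing (or re-storing) earlier pages.
    """
    written = []
    for page, offset in enumerate(range(0, len(rows), batch_size)):
        if page < start_page:
            continue  # "start at page": skip pages already extracted
        path = f"{prefix}_{page:05d}.csv.gz"
        # gzip.open in text mode lets csv write straight into the
        # compressed stream; one file per batch keeps the disk
        # footprint of any single failure small
        with gzip.open(path, "wt", newline="") as fh:
            csv.writer(fh).writerows(rows[offset:offset + batch_size])
        written.append(path)
    return written
```

Because each page lands in its own file, a restarted run with a higher `start_page` simply leaves the already-written files alone.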
I tested this script by:
- reading the gzip output with common utilities like zcat and zless
- starting at a higher page number and asserting that the script started at that page and not at a lower one
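The zcat-style spot check can also be done programmatically, which is handy in an automated test. The file name below is only an example, not the script's actual naming scheme.

```python
import gzip

# Write a tiny gzipped file standing in for one batch of output
# (example.csv.gz is a hypothetical name, not the script's scheme)
with gzip.open("example.csv.gz", "wt") as fh:
    fh.write("id,value\n1,foo\n")

# Read it back the way zcat would, confirming it round-trips
with gzip.open("example.csv.gz", "rt") as fh:
    print(fh.read(), end="")
```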