Some of our tools have a "dry run" mode that goes through all the necessary steps and verifies inputs but makes no permanent changes. This would be extremely useful in the case of index_netcdf: it could output the SQL commands to add the files to the database, which could be saved and used later for super-quick indexing, with the minimum and maximum calculations already done.
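As a rough illustration of what that could look like (all names here are hypothetical, assuming an SQLite-style backend and a simple files table, not the actual index_netcdf schema), a dry run would render the INSERT for each file instead of executing it:

```python
import io
import sqlite3  # assumed backend, purely for illustration


def index_file(conn, meta, dry_run=False, sql_out=None):
    """Hypothetical sketch: index one file's metadata, or emit the SQL for it.

    In dry-run mode the INSERT is written to sql_out instead of being
    executed, so the statements can be replayed later for fast indexing
    with the min/max calculations already done.
    """
    sql = (
        "INSERT INTO files (path, var_name, time_min, time_max) "
        "VALUES (?, ?, ?, ?);"
    )
    params = (meta["path"], meta["var_name"], meta["time_min"], meta["time_max"])
    if dry_run:
        # Record the statement and its parameters for later replay.
        sql_out.write(f"-- {meta['path']}\n{sql}  -- params: {params!r}\n")
    else:
        conn.execute(sql, params)


# Example dry run against an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (path TEXT, var_name TEXT, time_min REAL, time_max REAL)")
buf = io.StringIO()
index_file(
    conn,
    {"path": "a.nc", "var_name": "tas", "time_min": 0.0, "time_max": 10.0},
    dry_run=True,
    sql_out=buf,
)
print(buf.getvalue())
```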
However, I think that, while extremely useful, this would be a lot of work to implement, possibly more than it would be worth. One of the functions of index_netcdf is to check whether a file with the same metadata attributes as the file currently being indexed is already in the database, and if so to decide whether to update the entry, update only the index time, or throw an error. This functionality would be difficult to do as a dry run, since we wouldn't actually be adding file1 to the database to check file2 against during the dry run.
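To make the extra bookkeeping concrete: a dry run would need something like an in-memory shadow of the files "added" so far, since they never reach the real database. A minimal sketch (again with hypothetical names and schema, not the real index_netcdf code):

```python
def check_existing(conn, meta, pending):
    """Hypothetical sketch of the duplicate check during a dry run.

    `pending` holds metadata keys for files processed earlier in the same
    dry run; they were never inserted, so the database query alone would
    miss them.
    """
    key = (meta["var_name"], meta["time_min"], meta["time_max"])
    row = conn.execute(
        "SELECT rowid FROM files WHERE var_name=? AND time_min=? AND time_max=?",
        key,
    ).fetchone()
    if row is not None or key in pending:
        # Same decision point as the real run: update the entry,
        # refresh only the index time, or throw an error.
        raise ValueError(f"duplicate metadata for {meta['path']}")
    pending.add(key)
```

Keeping that shadow state consistent with the real decision logic is exactly the kind of duplicated effort that makes this feature feel like more work than it is worth.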