Tags can be used more than once. Each deployment is completely independent of the others: it might occur on the same voyage or a different one, and might involve the same type of animal or a different one.
As it currently stands, if this were to occur the system would upload the first dataset without trouble, but when it encountered the second dataset it would either:
Determine that the tag for the second dataset had already been uploaded and skip uploading the second dataset
Upload the second dataset, but overwrite the metadata for the tag. Currently this only affects the DATE_DEPLOYED / DATE_RECOVERED fields.
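The two behaviours above can be sketched as follows. This is a minimal illustration with hypothetical names (TagStore, upload_dataset are not the real API), assuming a simple tag-keyed store:

```python
# Hypothetical sketch of the current upload behaviour when a tag ID recurs.
class TagStore:
    def __init__(self):
        self.metadata = {}  # tag_id -> {"DATE_DEPLOYED": ..., "DATE_RECOVERED": ...}
        self.datasets = {}  # tag_id -> list of uploaded datasets

def upload_dataset(store, tag_id, data, meta, skip_existing=True):
    if tag_id in store.metadata:
        if skip_existing:
            # Behaviour 1: tag already uploaded -> skip the second dataset entirely.
            return False
        # Behaviour 2: upload anyway, but the second deployment's metadata
        # overwrites the first (the original DATE_DEPLOYED / DATE_RECOVERED are lost).
    store.metadata[tag_id] = meta
    store.datasets.setdefault(tag_id, []).append(data)
    return True
```

Either way, one deployment's data or metadata is silently lost, which is why the ideas below were proposed.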
It is not clear how best to reconcile this, but several ideas were proposed:
Input/output directories - data directories are automatically moved from an 'input' directory to an 'output' directory, leaving only those datasets which were not uploaded successfully.
Pre-processing report - an interactive menu displayed to the user prior to uploading. This report would list all of the datafiles present in the directory, indicate which tags were already present in the database, show any additional relevant metadata, and allow the user to specify which tags to process.
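The pre-processing report idea could be sketched as below. All names here are illustrative, and it assumes (purely for the example) that each datafile is a CSV whose filename encodes the tag ID:

```python
# Hypothetical sketch of the proposed pre-processing report:
# scan a directory, flag tags already in the database, and let the
# user choose which to process.
from pathlib import Path

def preprocessing_report(data_dir, known_tags):
    """Return (tag_id, path, already_in_db) for each datafile found."""
    rows = []
    for path in sorted(Path(data_dir).glob("*.csv")):
        tag_id = path.stem  # assume the filename encodes the tag ID
        rows.append((tag_id, path, tag_id in known_tags))
    return rows

def select_tags(rows, chosen):
    """Keep only the rows whose tag the user chose to process."""
    return [r for r in rows if r[0] in chosen]
```

In practice the report would be rendered as a menu and `chosen` would come from user input; the point is that the duplicate-tag decision is surfaced to the user instead of being made silently.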
With the refactor of the package to single-directory-processing, the metadata which was the cause of this problem is now supplied directly by the user. As such, this issue is now moot.