One way to address the problem of not being able to keep large data collections online is to create archival copies, send them to an offline storage provider, and delete the online files. The archival copy needs to include the datafiles AND all of the relevant metadata at the experiment, dataset and datafile levels. Ideally, the metadata would be kept in the MyTardis database so that the user can still browse their data holdings. However, an attempt to access a datafile that had been archived would result in a "do you want to restore it?" dialog, and the user would need to wait ... or come back later.
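As a rough illustration of what such an archival copy might contain, here is a minimal sketch in Python using only the standard library. The `experiment` dict layout and the `build_archive` helper are hypothetical, not part of the MyTardis API; the point is that the bundle carries both the datafiles and a metadata manifest covering all three levels.

```python
import io
import json
import tarfile
from pathlib import Path

def build_archive(experiment: dict, archive_path: Path) -> None:
    """Bundle an experiment's datafiles plus a JSON metadata manifest.

    `experiment` is a hypothetical nested dict carrying metadata at the
    experiment, dataset and datafile levels, e.g.:
      {"title": ..., "metadata": {...},
       "datasets": [{"description": ..., "metadata": {...},
                     "datafiles": [{"path": ..., "metadata": {...}}, ...]}]}
    """
    with tarfile.open(archive_path, "w:gz") as tar:
        # Write the manifest first, so a later restore (or metadata merge)
        # can rebuild the database records before touching the files.
        manifest = json.dumps(experiment, indent=2, default=str).encode()
        info = tarfile.TarInfo(name="MANIFEST.json")
        info.size = len(manifest)
        tar.addfile(info, io.BytesIO(manifest))
        # Then the datafiles themselves.
        for dataset in experiment["datasets"]:
            for datafile in dataset["datafiles"]:
                tar.add(datafile["path"],
                        arcname=f"data/{Path(datafile['path']).name}")
```

Keeping the manifest self-describing is what would allow an archival copy to be loaded on a different MyTardis instance from the one that created it.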
Some of the secondary requirements include:
archiving with and without deleting the local data files
support for multiple archive locations
ability to merge an archival copy back into the online copy (e.g. merging the metadata)
ability to load an archival copy that was created on a different MyTardis instance
various UI changes to let the user see what is archived (and where), mark what should be archived, and request that archived data be brought back (see the sketch after this list)
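To make those UI requirements concrete, here is a minimal sketch of per-datafile archive state that could back the "what is archived and where" view and the restore-request dialog. All names here (`ArchiveState`, `DatafileStatus`, etc.) are hypothetical, not actual MyTardis models.

```python
from dataclasses import dataclass, field
from enum import Enum

class ArchiveState(Enum):
    ONLINE = "online"            # file is available locally
    ARCHIVED = "archived"        # only offline copies exist
    RESTORE_PENDING = "restore"  # the user has asked for it back

@dataclass
class DatafileStatus:
    name: str
    state: ArchiveState = ArchiveState.ONLINE
    # A datafile may have archival copies at several providers at once.
    archive_locations: list[str] = field(default_factory=list)

    def archive_to(self, location: str, delete_local: bool = True) -> None:
        """Record an archival copy; optionally drop the online copy."""
        self.archive_locations.append(location)
        if delete_local:
            self.state = ArchiveState.ARCHIVED

    def request_restore(self) -> str:
        """What accepting the restore dialog would trigger."""
        if self.state is ArchiveState.ONLINE:
            return f"{self.name} is already online"
        self.state = ArchiveState.RESTORE_PENDING
        return f"restore of {self.name} requested from {self.archive_locations[0]}"

# Example: archive to two locations, then request a restore.
df = DatafileStatus("scan_001.tiff")
df.archive_to("tape-store-1")
df.archive_to("tape-store-2", delete_local=False)
print(df.request_restore())
```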