wandell opened this issue 8 years ago
This idea has some features in common with a feature in RenderToolbox called a recipe, which packages up all the instructions and data needed to reproduce a rendering into a tarball. Here we probably want something more generic, but there might be some use in thinking about the two concepts together, particularly since making RTB play nicely with the Archiva server is on Ben's list.
I had been thinking of this as something we would do on the upload side, but I can see how doing it on the download side would have advantages.
Best,
David
On Apr 28, 2016, at 7:57 PM, Brian Wandell (notifications@github.com) wrote:
For backup and reproducibility, we should probably have a function that downloads the files.
rdt = RdtClient(...)
artifacts = rdt.listArtifacts(...)
% loop to download each artifact
% compress the folder
% put it somewhere when you publish a paper
Do we ignore subfolders? Recurse into them? Or take a list of subfolders to include?
This seems useful when we publish a paper that needs the files and we want to put the data on a permanent repository (e.g., the Stanford Digital Repository). Then if the AWS site goes away, we still have the files available.
Probably other uses, too.
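The loop Brian sketches above — list the artifacts, download each one, compress the folder, deposit the archive — could look something like the following. This is a minimal sketch, not the RemoteDataToolbox API: `archive_artifacts`, its `fetch` callback, and `fake_fetch` are hypothetical names introduced here for illustration, with the actual client (and any subfolder handling) left as a parameter.

```python
import os
import tarfile
import tempfile

def archive_artifacts(artifacts, fetch, archive_path):
    """Download each artifact via fetch(artifact, dest_dir), then pack the
    staging folder into a single .tar.gz suitable for deposit in a
    permanent repository."""
    with tempfile.TemporaryDirectory() as staging:
        for artifact in artifacts:
            fetch(artifact, staging)
        # compress the whole staging folder into one archive
        with tarfile.open(archive_path, "w:gz") as tar:
            tar.add(staging, arcname="artifacts")
    return archive_path

def fake_fetch(artifact, dest_dir):
    """Stand-in for a real download call: writes a placeholder file."""
    with open(os.path.join(dest_dir, artifact), "w") as f:
        f.write(artifact)
```

A real `fetch` would wrap whatever download call the client provides; the design point is just that listing, downloading, and compressing are separable steps, so the same archiving code works whether the artifact list is flat or recursive.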
https://github.com/isetbio/RemoteDataToolbox/issues/70