The current procedure for bagging and archiving large datasets and making the required database entries is cumbersome, and not all of the steps work for everyone (possibly due to access-privilege issues). Several intermediate steps still have to be done by hand, including the final updates to the dataset_archive database table.
It would be worthwhile to revisit these steps and scripts and establish a simpler, more robust workflow.