Open mcrot opened 5 years ago
We should use the dtool API for this, see https://github.com/jic-dtool/dtoolcore
We can now save containers for surfaces, which can also be imported again later. We could use them to store all metadata for surfaces and topographies.
Or should we still use the dtool API for this? I suppose we don't want to save the data in our bucket for datasets but in topobank's bucket.
We should probably not switch to dtool for this (although I like the idea). This would require a bit more thought on how to integrate dtool with Django rather than just sticking it on top of it.
I like the above proposal which should address this issue.
We could automatically create a ZIP container for each digital surface twin. This would close this issue and also make ZIP files available for download instantaneously, see #249
This issue has two purposes:
Therefore we decided to restructure the object names in the S3 storage. Currently we have "folders" for each user, e.g.
contains all data files for all topographies added to a surface from user_3, so e.g. there are objects
We decided to use extra subfolders for each surface, together with a YAML file containing all metadata for this surface and all its topographies, e.g.
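A hedged sketch of what such a per-surface metadata file could look like; the field names (`name`, `creator`, `topographies`, `datafile`) and the paths are hypothetical and only illustrate the idea of one YAML file per surface subfolder, not an agreed-upon schema:

```yaml
# meta.yml -- hypothetical metadata file inside a surface's subfolder
name: Example surface
creator: user_3
topographies:
  - name: Topography 1
    datafile: topo_1.txt   # object stored alongside this file in the same subfolder
  - name: Topography 2
    datafile: topo_2.txt
```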
In order to implement #48, a Celery task can be created which takes all these files and builds a ZIP file for download.
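A minimal sketch of the bundling step such a task could perform, assuming the task has already fetched the raw bytes of each object from the surface's S3 subfolder; the function name and the archive paths are made up for illustration:

```python
import io
import zipfile

def build_surface_zip(files):
    """Bundle the given files into one in-memory ZIP archive.

    `files` maps archive paths (e.g. "surface_1/topo_1.txt") to the raw
    bytes of each data file; in topobank these bytes would be read from
    the S3 objects in the surface's subfolder, and the result uploaded
    or streamed back as the downloadable ZIP.
    """
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for path, data in files.items():
            archive.writestr(path, data)
    buffer.seek(0)
    return buffer

# Example: two topography data files plus the surface's metadata file
zip_buffer = build_surface_zip({
    "surface_1/topo_1.txt": b"heights ...",
    "surface_1/meta.yml": b"name: Example surface\n",
})
```

Wrapping this in a Celery task would only add a `@shared_task` decorator and the S3 download/upload calls around it; the ZIP assembly itself stays this simple.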