as our volume increases and our data integrity issues multiply, do we need to move beyond pasting in csv files to some tabular app that collects the data in the right format?
what would it cost to run such a server?
how would we fund it? (hint: bank of timm)
do we need this anyway, regardless of volume and data integrity issues, since soon (hopefully) we will get authors coming to the web site asking that we record their one-off example of reuse?
Currently, there is no need to move towards a dedicated backend (i.e., running our own server).
Costs would be negligible, as I can just run it off university infrastructure. Keeping that thing running is a different story, whether we run it on the resources at hand or pay for a server. There needs to be someone taking care of it.
I would rather keep the workflow we have and integrate such requests into our current infrastructure. (cf. earlier email conversation)