-
### Summary
We would like to add the ability to store data at different locations. This is primarily driven by our plan to allow users from multiple institutes to share a SciCat instance. The physi…
-
the script "archiveUSAPDC_batch.py" runs using a cron job once a day and will check which datasets have been set to "ready to be archived" and then will run the archiceUSAPDC.py script to archive each…
-
### Self Checks
- [X] I have searched for [existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
# Task Name
Audio Super-Resolution / High-Frequency Band Reconstruction.
## Task Objective
Due to transmission, storage, and recording constraints, typical audio sampling rates are 8/16/24 kHz. …
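For orientation, here is a minimal sketch of the trivial baseline the task must beat: plain resampling of a low-rate signal to a higher rate, which interpolates the existing band but leaves the missing high frequencies empty. The signal, rates, and tone frequency below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 16_000, 48_000          # assumed input/target rates
t = np.arange(sr_in) / sr_in            # one second of audio
x_low = np.sin(2 * np.pi * 440.0 * t)   # stand-in for a real recording

# Polyphase upsampling 16 kHz -> 48 kHz. This only interpolates the
# existing 0-8 kHz band; the 8-24 kHz band stays silent, which is
# precisely the gap a super-resolution model is meant to reconstruct.
g = np.gcd(sr_in, sr_out)
x_up = resample_poly(x_low, up=sr_out // g, down=sr_in // g)
print(x_up.shape)  # (48000,)
```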
-
We currently collect URLs for the app through the Chrome extension (https://github.com/edgi-govdata-archiving/eot-nomination-tool). We use the same tool to collect "seeds" for nomination to the Inter…
-
The real goal here is to figure out which chunking algorithm(s) are ideal for archiving datasets, so that we can recommend them in any instructions we provide to archivists and dat…
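To make the comparison concrete, here is a minimal sketch of one candidate family: content-defined chunking with a Gear-style rolling hash, in the spirit of FastCDC. The mask and size parameters are illustrative assumptions:

```python
import random

random.seed(0)
GEAR = [random.getrandbits(32) for _ in range(256)]  # random per-byte table

def cdc_chunks(data: bytes, mask: int = (1 << 13) - 1,
               min_size: int = 2048, max_size: int = 65536):
    """Yield content-defined chunks: cut where the rolling hash's low
    13 bits are zero (~8 KiB average chunks with this mask)."""
    start, h = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]
```

Because boundaries depend on content rather than byte offsets, an insertion near the start of a file only reshuffles nearby chunks; that locality is the main argument for content-defined schemes when deduplicating archived datasets.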
-
Both this repo and the nwp consumer can pull down and process NWP data, and on the surface they do very similar work, although the nwp consumer does have a lot more features. Should we consolidate on nwp c…
-
**Impact of the new feature**
Validity of output datasets
**Is your feature request related to a problem? Please describe.**
Currently, when a workflow is aborted, its output datasets are invalid…
-
Hello! I noticed that you provide two scripts for the 3D pose estimation networks: one for Human3.6M only and one for multiple datasets (COCO, MPII, ...). Would you mind sharing the performance difference betwe…
-
Hello QLever Team,
I've been exploring the capabilities of QLever and its control script, qlever-control, for managing SPARQL queries and datasets. To the best of my knowledge, I couldn't find a fe…