-
Link in
* "Read more about each of these personas in our User Personas documentation."
* "(as documented in user personas and use cases generated through our community’s participatory research pr…
-
Let's generate a dataset with the same chunking and compression in both netCDF4 and Zarr formats, and then test these access approaches:
* using HSDS, which can do metadata preloading and then parallel byte-rang…
-
We can use the Python client wrapped in Flask to retrieve the results. However, this would be awkward to get working with the existing data model, as they would need to be kept in sync manually and all…
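A minimal sketch of the Flask-wrapper idea; the endpoint, the `get_results` helper, and the payload shape are all hypothetical placeholders, which is exactly where the manual-sync burden with the existing data model would show up:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical stand-in for the actual Python-client call; in practice this
# is the layer that would have to be kept in sync with the data model by hand.
def get_results(dataset_id):
    return {"dataset": dataset_id, "values": [1, 2, 3]}

@app.route("/results/<dataset_id>")
def results(dataset_id):
    return jsonify(get_results(dataset_id))
```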
-
I have started testing out this approach on some of the Australian Integrated Marine Observing System datasets that are stored in NetCDF format on S3 (eg. s3://imos-data/IMOS/SRS/OC/gridded/aqua/P1D/…
-
I'm getting the following error for a results dataset that seems okay according to the logs:
```
{
"error": [
{
"status": "500",
"title": "Internal Server Error",
"meta": {
…
-
When fetching a slice with `select` using integers/indices, I would expect the number of dimensions to be reduced.
Ex: I want to fetch the first row `0,:` from a 2D array of dimensions `[2, 3]`. Cur…
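For comparison, this is the numpy convention the report expects: an integer index drops the indexed dimension, while a length-1 slice keeps it (the array here is just an illustration):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # shape (2, 3)

row = a[0, :]      # integer index: the dimension is dropped
print(row.shape)   # (3,)

kept = a[0:1, :]   # length-1 slice: the dimension is kept
print(kept.shape)  # (1, 3)
```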
-
data.biosimulations.dev, data.biosimulations.org => point to HSDS
files.biosimulations.dev, files.biosimulations.org => point to S3
-
Hi there,
I recently tried to run the following Python code on Eagle:
```
nsrdb_file = '/datasets/NSRDB/v3/nsrdb_2018.h5'
from rex import NSRDBX
nrel = (39.741931, -105.169891)
with NSRDBX(nsrd…
-
Thank you for your work on this API! It is a very useful bridge between the HDF5 format and web applications.
The OpenAPI definitions for the API provided in the 'openapi' branch currently have th…
-
Hi, while reading existing files from S3 storage with s3fs works like a charm (replace 'ab' with 'rb' and 'a' with 'r' in the example below), trying to write new files or append to existing ones fails.
…