netCDF supports compression and is a binary format, so I would have naively expected the resulting files to be smaller than, e.g., plain-text JSON. But a simple example like:
```python
from omas import ODS

o = ODS().sample_equilibrium()
o.save('test.nc')
o.save('test.json')
```
shows that the netCDF file is almost 2x the size of the JSON one. The discrepancy gets worse for larger ODSs: I have a 20 MB JSON that becomes >100 MB in netCDF and takes minutes to save. I'm not too familiar with the netCDF Python API, but I wonder whether the data is being stored in a suboptimal manner.
@kalling @orso82 would you consider just disabling the closing of issues by the stale issue bot? The bot keeps closing issues that describe unfixed problems.