When loading an ensemble of thousands of time steps stored as HDF5 volumes, they (seemingly) incur an enormous memory overhead compared to VVD volumes. This is likely caused by each HDF5 VolumeDisk holding its own H5 file handle open throughout its lifetime.
This could be solved by opening the file only when a brick is requested and closing it afterwards. All metadata should be read beforehand.
This would additionally allow file watching of HDF5 volumes under Windows, which currently does not work because an already opened file cannot be replaced.
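A minimal sketch of the proposed open-on-demand pattern, using a plain std::ifstream in place of the H5 file handle. The class name LazyVolumeDisk, the method loadBrick, and the trivial file layout (brick size header followed by raw bricks) are all illustrative assumptions, not Voreen or HDF5 API:

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical sketch: all metadata is read once at construction time,
// but the file handle is only held open while a brick read is in flight.
class LazyVolumeDisk {
public:
    explicit LazyVolumeDisk(const std::string& path) : path_(path) {
        // Read metadata up front; the handle is released when 'in'
        // goes out of scope, so nothing stays open between requests.
        std::ifstream in(path_, std::ios::binary);
        in.read(reinterpret_cast<char*>(&brickSize_), sizeof(brickSize_));
    }

    // Re-open the file only for the duration of the request; the
    // ifstream destructor closes it again, so the file can be
    // replaced (and watched) between brick reads, even on Windows.
    std::vector<char> loadBrick(std::size_t index) const {
        std::ifstream in(path_, std::ios::binary);
        std::vector<char> brick(brickSize_);
        in.seekg(static_cast<std::streamoff>(sizeof(brickSize_) + index * brickSize_));
        in.read(brick.data(), static_cast<std::streamsize>(brickSize_));
        return brick;
    }

private:
    std::string path_;
    std::size_t brickSize_ = 0;
};
```

The trade-off is the per-request cost of re-opening the file; for an HDF5-backed disk this would also mean re-opening the H5 file and dataset per brick, which may be acceptable given that bricks are comparatively large reads.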