GoogleCodeExporter closed this issue 8 years ago.
Original comment by ble...@gmail.com
on 8 Oct 2011 at 1:24
Why does Alembic open and close the datasets and dataspaces for each sample?
In HDF5, if you keep an hid open for each, it stays part of HDF5's internal
10 MB file cache (for the file that the dataset and dataspace belong to),
which removes the overhead of creating and destroying the object each time.
I would advise, however, against adding an API to read all the data at once.
The only thing that gives you is a very large opportunity for runaway memory
usage when people try to scale this approach to a lot of data.
I think it would be better to just keep the hid for the dataset and dataspace
open and call H5Dread again with the same hid. That way, if you hit the same
dataspace twice, HDF5's internal cache will automatically speed up the access.
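A minimal sketch of that handle-reuse pattern in the HDF5 C API, assuming a
hypothetical file "samples.h5" containing a float dataset "/points" (the file
name, dataset path, and full-dataset read are illustrative only, not Alembic's
actual layout):

    #include <hdf5.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        hid_t file = H5Fopen("samples.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        if (file < 0) return 1;

        /* Open the dataset once and keep the hid_t alive across samples,
         * rather than re-opening and re-closing it per read. */
        hid_t dset  = H5Dopen2(file, "/points", H5P_DEFAULT);
        hid_t space = H5Dget_space(dset);
        hssize_t npoints = H5Sget_simple_extent_npoints(space);

        float *buf = malloc((size_t)npoints * sizeof(float));

        /* Repeated reads go through the same open handles, so HDF5's
         * per-file cache can serve later reads without re-creating
         * or destroying any objects. */
        for (int i = 0; i < 10; ++i) {
            if (H5Dread(dset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL,
                        H5P_DEFAULT, buf) < 0) {
                fprintf(stderr, "read %d failed\n", i);
                break;
            }
            /* ... consume buf ... */
        }

        free(buf);
        H5Sclose(space);  /* release the handles once, after the last read */
        H5Dclose(dset);
        H5Fclose(file);
        return 0;
    }

Note the handles are closed once at the end instead of per sample, which is
the trade-off the rest of this thread debates.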
I've tried both in production before, and reading everything in one go
quickly brought Maya to its knees as people loaded more and more characters
into a scene at once.
Original comment by evolutio...@gmail.com
on 16 Feb 2012 at 1:45
I haven't checked this for reading in a while, but keeping that many hid_t
handles open while writing was causing a massive amount of memory to be used.
Independent of the HDF5 internal cache, loading that much data into memory
would not be prudent for anything other than the simplest scenes.
Marking the issue invalid.
Original comment by miller.lucas
on 16 Feb 2012 at 5:14
Original issue reported on code.google.com by
cookingw...@gmail.com
on 8 Oct 2011 at 1:07